How to identify duplicate rows between tables

27 views (last 30 days)
I'm using R2020b, and I want to set up a master table for appending new data to - and as part of this I want to identify any duplicate rows in the new, incoming table so they can be filtered out before appending. Ideally, the master table will live in a related directory in a .mat file, and the new data will be read in directly from a .csv with a fixed name and location using, e.g.:
fullname = fullfile('relativepath','newdata.csv');
% grab column headers from input sheet
opts = detectImportOptions(fullname);
% set all variable types to categorical
opts.VariableTypes(:) = {'categorical'};
% read in new data
T = readtable(fullname,opts);
% make any modifications to new data headers to match old data
T = renamevars(T,"NewLabel","OldLabel");
% clean new table headers to match originally-wizard-imported headers (I'd ask why these exhibit different behaviour, but that's a separate tragedy, and this current fix works - I think)
% note: '(' and ')' are regexp metacharacters, so strip spaces, parentheses, and underscores in one pass
T.Properties.VariableNames = regexprep(T.Properties.VariableNames,'[ ()_]','');
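For completeness, the master-table round trip I have in mind is roughly the following (a sketch only; 'master.mat' and the variable name master are placeholders, not my actual files):
masterFile = fullfile('relativepath','master.mat');
S = load(masterFile,'master');   % the master table previously saved to the .mat file
master = S.master;
% ... identify and drop duplicate rows of T against master here, then append ...
save(masterFile,'master');       % write the updated master back out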
I found the solution suggested here: https://au.mathworks.com/matlabcentral/answers/514921-finding-identical-rows-in-2-tables, but having done a quick test via:
foo = T(str2double(string(T.Year))<1943,:); % not my actual query, but structurally the same; this gave me ~40% of my original data
bar = T(str2double(string(T.Year))>1941,:); % similar, gave me ~70% of the original data
baz = ismember(foo,bar); % similar, gives the overlap for 1 particular year (should be about 14% of my original data)
blah = T(str2double(string(T.Year))==1942,:); % to directly extract the number of rows I am looking for
sum(baz) % What I expect here is the number of rows in the overlap
ans =
0
I found that ismember was not finding any duplicates (which were there by construction).
Note: because the Year column is categorical, the filters above use str2double(string(T.Year)) rather than comparing T.Year directly.
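In hindsight (see the comments and accepted answer below), the symptom reproduces with a tiny constructed table: a row containing a missing value never matches, even against an identical table.
A = table(categorical(["x";"y"]),categorical([1;NaN]),'VariableNames',{'V1','V2'});
B = A;            % identical table by construction
ismember(A,B)     % returns [1;0] - the second row contains <undefined>, so it is never counted as a member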
Replacing the call with
baz = ismember(foo,bar,'rows');
sum(baz)
ans =
0
still finds no duplicates. Using double quotes ("rows") does not change the behaviour.
On the other hand, using the function to assess single variables gives the expected behaviour (to some degree):
testest = ismember(foo.var1,bar.var1)
sum(testest)
The sum is now non-zero, and (because single variables are repeated more often than their combinations) gives more like 30% of the original data, which seems reasonable (the number of unique entries in the original set in that variable was about 40% of the total).
I guess I could create a logical index based on the product of multiple calls of this kind, but that seems rather... inefficient... and sensitive to the exact construction of the table/variables used in the filter. I'd rather have a generic solution for full table rows that will be robust if the overall table changes over the long term (or if/when I functionalise the code and use it for other work). Whilst most of the time, a couple of key variables can be used to identify unique rows, occasionally more information is required to distinguish pathological cases. I will probably use this approach if a more elegant solution doesn't appear, though, and put some thought into which groups of variables are 100% correlated (and therefore useless for this distinction) to cut down the Boolean product.
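For reference, the per-variable version I have in mind is something like the sketch below (the variable names are hypothetical). Note that it over-matches: a row passes if each of its values appears somewhere in bar, not necessarily all on the same row of bar, which is exactly the sensitivity I'm worried about.
keyVars = {'Year','Var2','Var3'};   % hypothetical subset of identifying variables
inBar = true(height(foo),1);
for k = 1:numel(keyVars)
    inBar = inBar & ismember(foo.(keyVars{k}),bar.(keyVars{k}));   % per-variable membership only
end
sum(inBar)   % upper bound on the number of true full-row matches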
I could also throw good coding practice to the winds and just write two nested loops (one for rows, one for variables) and exhaustively test every combination, but I suspect that would be even less efficient (although I wonder whether the scaling order would be the same given the nature of the comparisons required).
If it is pertinent, I imported all (>25) data columns from a .csv file as categorical variables. The original data before that were a mix of number and general columns from an Excel sheet; I could have used any or all of {double,string,categorical,datetime} to store the various variables, but there are some data which are best stored as categorical to avoid character trimming and consequent data cleaning / returning to original state steps.
Digging further, I also found this: https://au.mathworks.com/matlabcentral/answers/1775400-how-do-i-find-all-indexes-of-duplicate-names-in-a-table-column-then-compare-the-row-values-for-each which appears to imply that ismember should have the functionality I need here. As another check, comparing the number of unique rows in the concatenated tables against the combined row count gives:
size(unique([foo;bar],'rows'),1) == size(foo,1)+size(bar,1)
ans =
logical
1
instead of the expected logical 0 (the concatenation should have fewer unique rows than the combined total, because some full-row matches exist by construction). Specifying "rows" again makes no difference.
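The same constructed example as above shows why unique doesn't collapse anything either (again, with hindsight, it's the missing values):
A = table(categorical([1;NaN]),'VariableNames',{'V1'});
size(unique([A;A]),1)   % returns 3, not 2: the duplicated <undefined> rows are treated as distinct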
I've also looked into outerjoin/join/innerjoin, but those don't seem to remove duplicates like I need.
7 Comments
dpb on 28 Aug 2024
I agree that an enhancement (or a different function, so as not to break compatibility) allowing missing values to be compared positionally would be an option. That NaN doesn't compare equal to itself is part of IEEE-754; there is no way that's feasible to change. That <missing> is NaN for double was an obvious choice, but it has the observed ramifications, and there is no standard definition of a missing value that behaves differently.
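A quick illustration of that point (standard IEEE-754/MATLAB behaviour, nothing specific to tables):
NaN == NaN          % logical 0 - NaN never compares equal, even to itself
ismember(NaN,NaN)   % logical 0 - the set functions inherit the same rule for missing values
isequaln(NaN,NaN)   % logical 1 - isequaln is the documented way to treat NaNs as equal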
Daniel on 29 Aug 2024 (edited 29 Aug 2024)
Agreed; having looked into it a little, I can see why IEEE-754 is set up the way it is. There are other ways around it, e.g. posit schemes such as Gustafson's unum representation (which even had a MATLAB library developed around it), but I'm not familiar enough with them to know what other issues would arise if the missing standard were changed. That scheme has a not-a-real representation, NaR, with the property NaR == NaR, which would fit my use case.
Like I said, I suspect a specific exception within the ismember function would be one way to handle it (if one wanted to), and even then, well-documented and only at the user's insistence for that one call. Or, potentially, a specific exception for a given array to use NaR for <missing> for doubles, in addition to allowing ismember to interpret that as I described. If wishes were fishes... and I absolutely appreciate Chesterton's Fence with regard to this.
Alternatively, as you say, development of a specifically positional <missing> comparison function, and the ability to either run that in tandem with handling the nonmissing data, or call within existing functions (e.g. ismember) might be a better way to handle it. Given the propensity of users to call eval() though, in spite of all the threads about that, I can understand some reluctance on the dev team's part.
I have the fillmissing() workaround in the meantime. Before running fillmissing() on any incoming data (in advance of the ismember() call), I'll write some code to verify that the categorical fill value used there is nowhere present in that data, and throw an error if it is. That should suffice as a control over the vetting/cleaning/appending process.
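Something along these lines (a sketch only; the fill token and the way I pick out the categorical variables are assumptions, not final code):
fillToken = 'NO_DATA_TOKEN';                                 % hypothetical sentinel category
isCat = varfun(@iscategorical,T,'OutputFormat','uniform');   % which table variables are categorical
catVars = T.Properties.VariableNames(isCat);
for v = catVars
    if any(T.(v{1}) == fillToken)
        error('Fill token "%s" already appears in variable %s.',fillToken,v{1});
    end
end
T = fillmissing(T,'constant',fillToken,'DataVariables',catVars);   % now safe to fill before ismember()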
@dpb, if you want to summarise this thread as an answer, I'd be very happy to accept/upvote it, and remove mine, to recognise your contribution.

Accepted Answer

Daniel on 28 Aug 2024 (edited 28 Aug 2024)
The fundamental issue was missing data: <undefined> categorical values never compare equal, so ismember() never matched the rows that contained them.
Using fillmissing to replace <undefined> with specific phrasing not used anywhere in the data gave the desired behaviour from ismember() when comparing the table data.
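A minimal sketch of the resulting workflow (the table and token names are placeholders; it assumes the master and incoming tables have the same variables in the same order, and that the token appears nowhere in the data):
token = 'NO_DATA_TOKEN';
masterFilled = fillmissing(master,'constant',token);   % all variables here are categorical
newFilled = fillmissing(T,'constant',token);
isDup = ismember(newFilled,masterFilled);               % row-wise comparison now behaves as expected
master = [master; T(~isDup,:)];                         % append only the genuinely new rows
Filling copies keeps the sentinel category out of the stored master table.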
Thanks to @Divyajyoti Nayak and @dpb for their comments and discussion.
