
Re: st: identifying duplicate records


From   "Dimitriy V. Masterov" <dvmaster@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: identifying duplicate records
Date   Fri, 10 Feb 2012 13:33:05 -0500

Nick is right. Aggregating several pairwise uses of -duplicates- will
not always work, even for tagging the problematic cases. The
user-written -strgroup- command is a better way.

Here's some sample code with comments:

#delimit;
clear all;
set more off;

/* Fake Data */
input
dob nhs str10 surname;
1979 1234 "Cox";
1979 1234 "Coxx";
1997 1234 "Cox";
1997 1243 "Cox";
1979 5417 "Box";
1979 4517 "Box";
1822 1234 "Galton";
1822 1234 "Galton";
1979 5768 "Masterov";
1997 5786 "Masterob";
2011 9999 "Singleton";
end;

/* (1) Failed Way */
/* This will only tag the problematic observations, with no way to
   group them into clusters. It will also miss the Masterov/Masterob
   pair, which differs on all three fields. */
duplicates tag dob nhs surname, gen(dups123);
duplicates tag dob nhs, gen(dups12);
duplicates tag nhs surname, gen(dups23);
duplicates tag dob surname, gen(dups13);

egen possible_dups=rowmax(dups*);

/* (2) Better Way: adjust the matching threshold based on the level of
   mistakes in your data. strgroup is user-written; install it with
   -ssc install strgroup-. */
gen new_id=string(dob) + "-" + string(nhs) + "-" + surname;
strgroup new_id, gen(group) threshold(.35);
sort group;
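As I understand -strgroup-, it links two strings when their Levenshtein
edit distance, divided by the length of the shorter string, is at or
below threshold(), and then forms groups by taking the transitive
closure of those links. Here is a rough Python sketch of that idea (this
is not strgroup's actual code, and the function names are mine):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def group_strings(strings, threshold):
    # Union-find over all pairs whose normalized distance clears the bar.
    parent = list(range(len(strings)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(len(strings)):
        for j in range(i + 1, len(strings)):
            d = levenshtein(strings[i], strings[j])
            if d / min(len(strings[i]), len(strings[j])) <= threshold:
                parent[find(j)] = find(i)

    # Relabel each connected component with a consecutive group id.
    roots = {}
    return [roots.setdefault(find(i), len(roots) + 1)
            for i in range(len(strings))]
```

For example, group_strings(["1979-1234-Cox", "1979-1234-Coxx"], 0.35)
puts both records in one group, because the edit distance is 1 and the
shorter string has 13 characters (1/13 is well under .35). Raising the
threshold merges more aggressively; lowering it splits clusters apart.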
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

