When a regulation asserts that some data is 'anonymous', it is disconnected from the best theories in computer science
As I write this, the European Parliament is involved in a world-beatingly gnarly wrangle over the new General Data Protection Regulation. At stake are the future rules for online privacy, data mining, big data, targeted advertising, data-driven social science, governmental spying (by proxy), and a thousand other activities that are at the heart of many of the internet's largest companies, and our politicians' darkest and most uncontrolled ambitions.

The lobbyists are out in force. The activists I know who go to Brussels say they've never seen the like: it's a veritable feeding frenzy of lobbying. Hundreds of amendments and proposals are on the table – some good, some bad, and just making sense of them is a full-time job.
As complicated as the proposals are, there's one rule of thumb worth bearing in mind throughout: any time someone speaks of relaxing the rules on sharing data that has been "anonymised" (had identifying information removed) or "pseudonymised" (had identifiers replaced with pseudonyms), you should assume until proven otherwise that he or she is talking rubbish.
This is a kind of "iron law of privacy" that can be used to weed out nonsensical ideas quickly. What remains might be good ideas or bad ones, but at least they won't be grounded in near-impossibility.
Anonymising data is a very, very difficult business. When it comes to anonymising, there are three high-profile failures that get widely cited: AOL's 2006 release of anonymous search data; the State of Massachusetts's Group Insurance Commission release of anonymised health records; and Netflix's 2006 release of 100m video-rental records.
In each case, researchers showed how relatively simple techniques could be used to re-identify the data in these sets, usually picking out the elements of each record that made them unique. There are lots of smokers in the health records, but once you narrow it down to an anonymous male black smoker born in 1965 who presented at the emergency room with aching joints, it's actually pretty simple to merge the "anonymous" record with a different "anonymised" database and out pops the near-certain identity of the patient.
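To make that linkage attack concrete, here is a toy sketch in Python (using pandas). Everything in it – the columns, the rows, the second "registry" dataset – is invented for illustration; it is not drawn from the Massachusetts data, but it shows the shape of the trick.

```python
# Hypothetical sketch of a linkage attack: joining an "anonymised" health
# record to a second, nominally anonymised dataset on shared quasi-identifiers.
# All names, columns and rows are invented for illustration.
import pandas as pd

health = pd.DataFrame([
    {"sex": "M", "race": "black", "birth_year": 1965, "smoker": True,
     "complaint": "aching joints"},
    {"sex": "F", "race": "white", "birth_year": 1972, "smoker": False,
     "complaint": "migraine"},
])

registry = pd.DataFrame([
    {"name": "J. Doe",   "sex": "M", "race": "black", "birth_year": 1965},
    {"name": "A. Smith", "sex": "F", "race": "white", "birth_year": 1980},
])

# If the combination of quasi-identifiers is unique in both sets,
# the join re-identifies the "anonymous" patient.
linked = health.merge(registry, on=["sex", "race", "birth_year"])
print(linked[["name", "complaint"]])   # -> J. Doe, aching joints
```

No cleverness required: a three-column join does the job once the quasi-identifiers narrow the field to one person.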
De-anonymising
Since the mid-noughties, de-anonymising has become a kind of full-contact sport for computer scientists, who keep blowing anonymisation schemes out of the water with clever re-identifying tricks. A recent paper in Nature Scientific Reports showed how the "anonymised" data from a European phone company (likely one in Belgium) could be re-identified with 95% accuracy, given only four points of data about each person (with only two data-points, more than half the users in the set could be re-identified).
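The headline number in that paper comes from measuring how often a handful of points is enough to single a person out of the whole set – the authors call this "unicity". Here is a rough, illustrative sketch of that measurement in Python; the three location traces are made up, and a real study would run the same loop over millions of call records.

```python
# Rough sketch of a "unicity" measurement: how often do k points drawn from a
# user's own trace match that user and nobody else? The traces are invented.
import random

traces = {
    "user_a": {("antwerp", 9), ("brussels", 13), ("ghent", 18), ("brussels", 22)},
    "user_b": {("brussels", 13), ("liege", 10), ("brussels", 22), ("namur", 16)},
    "user_c": {("antwerp", 9), ("brussels", 13), ("ghent", 18), ("leuven", 20)},
}

def unicity(traces, k, trials=1000):
    unique = 0
    for _ in range(trials):
        user = random.choice(list(traces))
        sample = set(random.sample(sorted(traces[user]), k))
        # Which users' traces contain all k sampled (place, hour) points?
        matches = [u for u, t in traces.items() if sample <= t]
        if matches == [user]:
            unique += 1
    return unique / trials

print(unicity(traces, k=2))  # more points per person -> more users are unique
```

The uncomfortable result is that the fraction climbs towards one very quickly as the number of points grows – which is exactly what the researchers found on real phone data.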
Some will say this doesn't matter. They'll say that privacy is dead, or irrelevant, or unimportant. If you agree, remember this: anonymisation and pseudonymisation are being contemplated in the General Data Protection Regulation precisely because its authors say that privacy is important and worth preserving. They are talking about anonymising data-sets because they believe that anonymisation will protect privacy – which is to say, implicitly, that privacy is worth the effort. If that's the policy's goal, then the policy should pursue it in ways that conform to reality as we understand it.
Indeed, the whole premise of "Big Data" is at odds with the idea that data can be anonymised. After all, Big Data promises that with very large data-sets, subtle relationships can be teased out. In the world of re-identification, researchers talk about "sparse data" approaches to de-anonymisation. Though most of your personal traits are shared with many others, there are some things about you that are less commonly represented in the set – maybe the confluence of your reading habits and your address; maybe your city of birth in combination with your choice of cars.
These rarities practically leap out of the data and point straight at you, just as the other Big Data conclusions are meant to. If Big Data can find the combination of subtle environmental factors shared by all the victims of a rare disease, it can also find the combination of subtle identifiers shared by all the different data-sets in which you are present, merge them together, and make your identity public.
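One crude way to picture the "sparse data" approach in code: when matching a target against candidate records, let rare shared attributes count for far more than common ones. The weighting below (negative log of an attribute's frequency) and all the data are invented for illustration – it is a sketch of the general idea, not a reconstruction of any particular published algorithm.

```python
# Toy sketch of sparse-data matching: shared rare traits dominate the score,
# shared common traits barely register. Frequencies and records are invented.
import math

frequency = {                       # how common each trait is overall
    "reads:obscure_poetry_blog": 0.0004,
    "car:citroen_2cv": 0.002,
    "city_of_birth:walsall": 0.01,
    "smoker:yes": 0.25,
}

candidates = {
    "record_1": {"reads:obscure_poetry_blog", "car:citroen_2cv", "smoker:yes"},
    "record_2": {"smoker:yes", "city_of_birth:walsall"},
}

target = {"reads:obscure_poetry_blog", "car:citroen_2cv"}

def score(target, record):
    # -log(frequency) is large for rare traits, tiny for common ones.
    return sum(-math.log(frequency[trait]) for trait in target & record)

best = max(candidates, key=lambda r: score(target, candidates[r]))
print(best)  # -> record_1: two rare traits in common is damning
```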
Lobbying frenzy
The EU is being lobbied like never before on this one. EU Commissioner Viviane Reding says: "I have not seen such a heavy lobbying operation." It's working, too.
Great chunks of lobbyist-authored text are finding their way into MEPs' amendments. The lobbyists have become de facto legislators, only they get paid more and don't have to go to all those boring meetings.
Clause four of the General Data Protection Regulation contains the definitions used in the document, and it is one of the key battlegrounds. It establishes the idea that there is such a thing as "anonymous" data and exempts it from regulation, and creates a second category of "pseudonymous" information that can be handled with fewer restrictions than are placed on "personally identifying information."
I asked two of my favourite computer scientists what they thought of the plausibility of anonymising or pseudonymising data sets. Seth David Schoen (a staff technologist at the Electronic Frontier Foundation) told me: "Researchers have shown that anonymisation is much harder than it looks. Just because something seems anonymous at first glance, doesn't mean it really is – both because of the mathematics of individual distinctiveness and because of the huge number of databases that are becoming available. That means we have to be extremely careful about whether things are truly anonymous, and not rely on our intuition alone."
Princeton's Ed Felten – formerly of the US Federal Trade Commission – said: "A decade of computer science research shows that many data sets can be re-identified. Removing obvious identifiers is not enough to prevent re-identification. Removing all data about individuals may not be enough. Even data sets consisting entirely of aggregate information can be used to infer information about specific individuals in some realistic cases.
"But to say that de-identification is utterly hopeless would go too far. There is an emerging science of privacy-preserving data analysis which can be applied in some settings. As a general rule, data derived from the characteristics of individuals, including behavioral data, will likely convey information about individuals, absent some rigorous technical basis for believing otherwise.
"The trend is toward treating this like cryptography, where 'I scrambled up the data a bunch' is not a valid argument and 'I can't think of an attack' is not a valid argument – you have to have a technically rigorous argument that no attack is possible."
As you can see, both were careful not to rule out the possibility that someone might some day come up with a workable anonymisation scheme, but neither was bullish on the creation of a regulatory category of "anonymous" data that can be treated as though it held no risks for the people from whom it was collected.
I asked both for further reading. Felten suggested Arvind Narayanan and Vitaly Shmatikov's Privacy and Security: Myths and Fallacies of 'Personally Identifiable Information', an excellent primer on the technical issues from the June 2010 Communications of the Association for Computing Machinery. Schoen recommended Paul Ohm's Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, a comprehensive legal history of the idea of anonymisation in regulation published in a 2010 edition of the UCLA Law Review.
For my part, I recommend On the Feasibility of User De-Anonymization from Shared Mobile Sensor Data, a fantastic (if somewhat technical) look at the re-identifying inferences that can be drawn from the seemingly innocuous sensor-data coming off our mobile phones, from the 2012 Proceedings of the Third International Workshop on Sensing Applications on Mobile Phones.
Differential privacy
Microsoft has pushed for an approach they call "differential privacy," and it sounds like it may hold promise. As Schoen describes it, "researchers pose research questions to the original data controller, which returns intentionally fuzzy/corrupted answers, and you can allegedly mathematically quantify how much privacy harm was done in the process and then debate whether it was worthwhile in light of the benefits of the research."
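Here is a minimal sketch of that query-and-noise idea, using the Laplace mechanism that the differential-privacy literature is built on. The data-set, the query and the epsilon values are invented for illustration; real deployments involve far more machinery, particularly for tracking a "privacy budget" across many queries.

```python
# Minimal sketch of the Laplace mechanism for a single counting query.
# The data, the predicate and the epsilon values are illustrative only.
import numpy as np

rng = np.random.default_rng()
ages = np.array([34, 45, 29, 62, 51, 38, 44, 57])   # the raw, sensitive data

def noisy_count(data, predicate, epsilon):
    """Answer "how many records satisfy predicate?" with Laplace noise.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity 1), so noise drawn from Laplace(1/epsilon) gives
    epsilon-differential privacy for this single query.
    """
    true_answer = int(np.sum(predicate(data)))
    return true_answer + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means more noise and a stronger, quantified guarantee.
print(noisy_count(ages, lambda d: d > 40, epsilon=0.5))
print(noisy_count(ages, lambda d: d > 40, epsilon=5.0))
```

Epsilon is the quantitative handle Schoen describes: it bounds how much any one person's presence in the data can shift the answers that come out.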
But this is all conjecture: though the amount of "fuzzing" done in the data is a quantitative matter, the degree to which your privacy is protected by the fuzziness is ultimately a personal question, turning on how you feel about disclosure and its consequences. As is so often the case, this technical solution embodies a bunch of assumptions about questions that are ultimately social, and are hotly contested. You can't settle the argument about whether your privacy is or isn't violated with maths alone.
It's all fascinating to think about, but the larger point is this: when a regulation breezily asserts that some data is "anonymous" or even "pseudonymous," that regulation is violently disconnected from the best theories we have in computer science. Where you find this in a regulation, you know that its author was either not serious about protecting privacy, or not qualified to draft a regulation. Either way, it's cause for alarm.