De-anonymisation is always possible, but what are the consequences?

Niko Härting

A lawyer in his 40s in Berlin who is also a professor with a strong interest in IT law. Combine this with a couple of other bits of information – yes, it is me. It is therefore not at all surprising that studies show that de-anonymising anonymous data is possible and, in many cases, very easy ("Mobile location data 'present anonymity risk'", BBC News, 25 March 2013).
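The point can be illustrated with a few lines of code. The dataset below is entirely hypothetical, but it shows the mechanism: in a table with no names at all, a couple of quasi-identifiers can be enough to single out exactly one record.

```python
# Toy illustration (hypothetical data): how few attributes it can take
# to single out one record in an "anonymised" dataset.
records = [
    {"age_band": "40s", "city": "Berlin", "profession": "lawyer", "also_professor": False},
    {"age_band": "40s", "city": "Berlin", "profession": "lawyer", "also_professor": True},
    {"age_band": "30s", "city": "Berlin", "profession": "lawyer", "also_professor": True},
    {"age_band": "40s", "city": "Hamburg", "profession": "lawyer", "also_professor": True},
]

# Combine a couple of bits of information, as in the example above.
matches = [r for r in records
           if r["age_band"] == "40s"
           and r["city"] == "Berlin"
           and r["also_professor"]]

print(len(matches))  # 1 – a single match, so the record is de-anonymised
```

The dataset never stores a name, yet the intersection of three harmless-looking attributes identifies one individual uniquely.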

Trojan Horses?

But what does this mean? Does the new study "Unique in the Crowd: The privacy bounds of human mobility" (by de Montjoye/Hidalgo/Verleysen/Blondel, published in Scientific Reports 3, Article no. 1376, on 25 March 2013) prove that anonymous or pseudonymous data are the "Trojan horses" of data protection, as Viviane Reding recently put it ("Revising the Data Protection directive", New Europe, 8 March 2013)?

Certainly not:

  • Inevitable Concerns: Nobody will deny that there are privacy issues when information refers to a person. Privacy risks exist, and are not just theoretical, even when information is pseudonymous or anonymous. When smart meters measure our electricity usage habits, we have privacy concerns even though the data is anonymous and nobody at the electricity company is likely to know our name, address and identity.
  • Tools for Protection: At the same time, nobody will deny that anonymity and pseudonymity are tools that protect privacy. Anonymous information is much less of an intrusion than non-anonymous data. Anonymity and pseudonymity are protective covers for privacy.
  • Sliding Scale: From the perspective of an organisation that processes information, there should be incentives to keep information anonymous. Therefore, the rules for anonymous information should not be as strict as those for non-anonymous information. The strictness of the rules must depend on the degree of privacy risk ("privacy-based approach").
  • Effect of the Legality Principle: When information is anonymous, the legality principle does not make sense; it is counter-productive. An organisation that needs "explicit, informed consent" for anonymous or pseudonymous data will always be tempted to go all the way: if asking a customer for consent is a prerequisite anyway, you may as well extend the request for consent to non-anonymous uses.
  • Accountability Revisited: As far as anonymous and pseudonymous data are concerned, the concept of "accountability" is worth revisiting. Organisations that intend to process such data should be obliged to design processes in a privacy-friendly way. It should, however, be left to the organisations' discretion to choose the methods and practices by which to reach privacy goals (see the Galway project: Data Protection Accountability: The Essential Elements, A Document for Discussion prepared by the Centre for Information Policy Leadership as Secretariat to the Galway Project, October 2009; and, in German, my previous blog post: Härting, "EU-Datenschutz: Accountability und der risikoorientierte Ansatz", CRonline Blog, 25 March 2013). The most important privacy goal in connection with anonymous and pseudonymous information must be efficient precautions against de-anonymisation.
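One well-known family of such precautions can be sketched in code: checking that a dataset satisfies k-anonymity after its quasi-identifiers have been generalised. The data, the threshold k, and the generalisation rules below are illustrative assumptions on my part, not anything prescribed in the text or in the cited study.

```python
# Sketch of one well-known precaution against de-anonymisation:
# k-anonymity via generalisation of quasi-identifiers.
# Hypothetical data; threshold and generalisation rules are assumptions.
from collections import Counter

def generalise(record):
    # Coarsen quasi-identifiers: exact age -> decade band, city -> region.
    region = {"Berlin": "East", "Potsdam": "East", "Hamburg": "North"}[record["city"]]
    return (record["age"] // 10 * 10, region)

def is_k_anonymous(records, k):
    # Every generalised group must contain at least k individuals,
    # so no single record can be singled out by these attributes.
    counts = Counter(generalise(r) for r in records)
    return all(n >= k for n in counts.values())

data = [
    {"age": 44, "city": "Berlin"},
    {"age": 47, "city": "Potsdam"},
    {"age": 31, "city": "Hamburg"},
    {"age": 38, "city": "Hamburg"},
]
print(is_k_anonymous(data, 2))  # True: each generalised group has >= 2 members
```

Such a check is exactly the kind of method an organisation could choose at its own discretion to meet the privacy goal, rather than one mandated in detail by the regulator.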



More about the author: RA Prof. Niko Härting is the name partner of HÄRTING Rechtsanwälte, Berlin. He is a member of the editorial board of Computer und Recht (CR) and a permanent contributor to IT-Rechtsberater (ITRB) and IP-Rechtsberater (IPRB). He is the author of the standard work on Internet law ("Internetrecht", 6th edition 2017) and maintains the web-design contract in Redeker (ed.), Handbuch der IT-Verträge (loose-leaf). Most recently published: "Datenschutz-Grundverordnung".
