Confusion on profiling: What regulation do we need?

Niko Härting

Viviane Reding thinks that the existing rules for “automated decision making” should be extended to profiling in general. Are her arguments convincing? Do we need the same rules for targeted marketing as for credit scoring? Does the EU need to protect consumers from cancer drug ads in the same way as it protects us from credit refusals?


“Automated Decisions” – The Present

In the current discussion about a reform of European data protection law, there is a considerable degree of confusion about profiling. Profiling is generally viewed as a topic similar to “automated individual decisions” as regulated in Article 15 of the 1995 EU Data Protection Directive (Directive 95/46/EC), and it is unclear if and to what extent the rules on “automated individual decisions” can be extended to profiling.

According to Article 15 (1) Directive 95/46/EC, every person is to be granted the right

“not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc”.
[see OJ, No. L 281, 23.11.1995, p. 31 (p. 43)]

Article 15 (2) Directive 95/46/EC allows exceptions:

 “2. Subject to the other Articles of this Directive, Member States shall provide that a person may be subjected to a decision of the kind referred to in paragraph 1 if that decision:

(a) is taken in the course of the entering into or performance of a contract, provided the request for the entering into or the performance of the contract, lodged by the data subject, has been satisfied or that there are suitable measures to safeguard his legitimate interests, such as arrangements allowing him to put his point of view; or

(b) is authorized by a law which also lays down measures to safeguard the data subject’s legitimate interests.”

[see OJ, No. L 281, 23.11.1995, p. 31 (p. 43)]

As a rule, “legal effects” should not be based “solely” on the “automated processing of data”. Credit decisions by banks and recruitment decisions by employers figure prominently among the examples found in the literature of the 1990s.


“Automatic Measures” – The EU Commission Proposal

Article 20 (1) of the EU Commission’s draft of a General Data Protection Regulation extends the scope of the ban on automated “decisions” to automatic “measures”, with the aim of extending the ban to profiling in general:

 “1. Every natural person shall have the right not to be subject to a measure which produces legal effects concerning this natural person or significantly affects this natural person, and which is based solely on automated processing intended to evaluate certain personal aspects relating to this natural person or to analyse or predict in particular the natural person’s performance at work, economic situation, location, health, personal preferences, reliability or behaviour.”
(EU Commission draft of a General Data Protection Regulation, 25 January 2012, p. 54)

EU Commissioner Viviane Reding recently explained the proposed extension of the ban by citing the example of targeted marketing for cancer drugs:

 “The Commission has already proposed to expand the scope of the protection. Individuals will not only be protected against formal ‘decisions’ but also against ‘measures’ producing legal effects or significantly affecting them. For instance, the targeted marketing of specific medical products against cancer based on the search made by an individual on the internet would fall under this concept of ‘measure’.”
(EU Commission, “The EU data protection Regulation: Promoting technological innovation and safeguarding citizens’ rights”, SPEECH/14/175, 4 March 2014)

“Automated Decisions” – Dignity Concerns

The logic of the existing limitations on “automated decision making” is simple and convincing. Decisions on credit, insurance, recruitment or dismissal can have a serious impact on a consumer’s life. It would be worrying if a banker, insurer or employer could leave such a decision to algorithms without looking at the individuals affected. The issue is dignity rather than privacy, and most Europeans would agree that there need to be limits on “machines ruling over individuals”.


Where Viviane Reding Goes Wrong

When it comes to profiling, it is not the outcome of profiling that is offensive but the collection of information. In Viviane Reding’s example, the issue is not the drug ads but the information collected in order to produce them. Most Europeans will prefer targeted, “tailor-made” ads to random ads: if I have to live with online advertisements, I will prefer ads that might provide useful information to ads for products I am not interested in. For example, I would rather see ads for privacy books than ads for breast implants.


Profiling – Privacy Concerns

Targeted ads do, however, raise privacy concerns of a different kind: the fact that my visits to internet sites, my online purchases and my Google search terms are collected and analysed may cause a “diffusely threatening feeling of being observed” (a quote from the landmark decision of the German Federal Constitutional Court on data retention: BVerfG, Urt. v. 2.3.2010 – 1 BvR 256/08, 1 BvR 263/08 u. 1 BvR 586/08, CR 2010, p. 232 (p. 235 at IV.4.a) m. Anm. Heun). While I do not know what information Google has collected on me, I know that Google has collected a lot. There are therefore valid grounds for a right to know.

When information is collected on a grand scale, the collection must not be secret, and the collector must be obliged to provide extensive information on the nature of the data that is stored. Transparency must be priority number one. In addition, there should be a right to have information deleted and a right to opt out of collection (“Do Not Track”).
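On the technical level, the “Do Not Track” opt-out mentioned above is simply an HTTP request header (DNT, value “1”). A minimal sketch in Python; the URL is a placeholder, and whether a site honours the signal is, of course, a different question:

```python
import urllib.request

# Minimal sketch: a browser-style client signalling the "Do Not Track"
# preference via the DNT request header (1 = user opts out of tracking).
# example.com is a placeholder; honouring the header is up to the site.
req = urllib.request.Request(
    "https://example.com/",
    headers={"DNT": "1"},
)

print(req.get_header("Dnt"))  # → 1
```

The header merely expresses a preference; it carries no enforcement mechanism, which is precisely why the author argues for transparency and opt-out rights backed by law.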


Why a Ban on “Measures” Does Not Make Sense

Having said that, a ban on “measures … based solely on automated processing” does not make sense. On the contrary: any such ban would limit the consumer’s right to use a service and impose state control without justification.

“Automated measure making” is everywhere: my music app suggests tunes to me on the basis of an “automatic” analysis of the music I have previously listened to. My running app proposes jogging routes as a result of an “automatic” analysis of past routes. My traffic app may come up with suggestions for my route and for the speed of my car after “automatically” analysing the traffic, my style of driving and the state of my car.
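The “automated measure making” in these examples can be as trivial as a frequency count over past behaviour. A toy sketch (all names and data are invented for illustration):

```python
from collections import Counter

# Toy sketch of an "automated measure": suggest a genre based solely on
# automated analysis of past listening. All data here is invented.
def suggest_genre(listening_history):
    """Return the most frequently played genre, or None if no history."""
    counts = Counter(listening_history)
    return counts.most_common(1)[0][0] if counts else None

history = ["jazz", "rock", "jazz", "classical", "jazz"]
print(suggest_genre(history))  # → jazz
```

Under the proposed Article 20 (1), even a recommendation as harmless as this would arguably qualify as a “measure … based solely on automated processing” of personal preferences.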

All these “automatic” processes qualify as profiling. And while I will certainly want to know what information is stored in the process, I will not be inclined to demand a “human touch”. While it is reasonable and convincing to make sure that an employer looks at an applicant and not only at some data, nobody will seriously demand that Google assign an employee to review a user’s data before cancer drug ads are sent to that user.



More about the author: Prof. Niko Härting, attorney-at-law, is the name partner of HÄRTING Rechtsanwälte, Berlin. He is a member of the editorial board of Computer und Recht (CR) and a regular contributor to IT-Rechtsberater (ITRB) and IP-Rechtsberater (IPRB). He is the author of the standard work Internetrecht, 6th ed. 2017, and covers the web design contract in Redeker (ed.), Handbuch der IT-Verträge (loose-leaf). Most recently published: “Datenschutz-Grundverordnung”.
