New in CRi

Demystifying the Draft EU Artificial Intelligence Act (Veale/Zuiderveen Borgesius, CRi 2021, 97)

In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act. We present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. Aspects of the AI Act, such as different rules for different risk levels of AI, make sense. But we also find that some provisions of the Draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals. Several overarching aspects, including the enforcement regime and the risks of maximum harmonisation pre-empting legitimate national AI policy, engender significant concern. These issues should be addressed as a priority in the legislative process.

Analysing the good, the bad, and the unclear elements of the proposed approach

Table of Contents:

I. Introduction

1. Context

2. Structure and Approach

II. Title II: Unacceptable Risks

1. Manipulative Systems

a) The Harm Requirement

b) Comparison to Existing Union Law

c) Ineffectiveness

2. Social Scoring

a) Scope of ‘Same Context’

b) Allocation of Responsibility

3. Biometric Systems

a) Three Shortcomings

b) Need for Pre-Authorisation of “Individual Use”

III. Title III Regime: High-Risk Systems

1. Scope

2. The Draft AI Act in the Context of the New Legislative Framework (NLF)

3. Essential Requirements and Obligations

4. Conformity Assessment and Presumption

a) Harmonised Standards & European Standardisation Organisations

b) Controversies of Harmonised Standards

c) Self-Assessment and the (Limited) Role of Notified Bodies

IV. Title IV: Specific Transparency Obligations

1. ‘Bot’ Disclosure

2. Emotion Recognition and Biometric Categorisation Disclosure

3. Synthetic Content (‘Deep Fake’) Disclosure

V. Harmonisation and Pre-Emption

1. Marketing

2. Use

a) Material Scope

b) The CJEU Approach

c) Fragmentation and the Cliff-Edge of Uncertainty

VI. Post-Marketing Controls and Enforcement

1. Notification Obligations and Complaints

a) No Rights for AI-System-Subjects

b) Incoherence of the Enforcement System

2. Database of Standalone High-Risk AI Systems

VII. Concluding Remarks

1

I. Introduction

On 21 April 2021, the European Commission presented a proposal for a Regulation concerning artificial intelligence (AI) – the AI Act, for short.1 This Draft AI Act seeks to lay down harmonised rules for the development, placement on the market and use of AI systems, which vary by characteristic and risk, including prohibitions and a conformity assessment system adapted from EU product safety law.

2

In this paper, we analyse the initial Commission proposal – the first stage in a potentially long law-making process.2 The Draft AI Act is too complex for us to summarise exhaustively. We instead aim to contextualise and critique it, and to make the debate more accessible to stakeholders who may otherwise struggle to apply their expertise and experience to an at-times arcane proposal.


1. Context

3

The first public indication of regulatory action of the type proposed in the Draft AI Act was a cryptic few sentences in the previous European Commission’s contribution to the Sibiu EU27 leaders’ meeting in 2019.3 Subsequently, then-President-Elect von der Leyen’s political guidelines for the Commission indicated an intention to ‘put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence’4 – the spark that the Draft AI Act acknowledges as its genesis.5 The proposed Regulation is part of a tranche of proposals which must be understood in tandem, including:


  • the draft Digital Services Act (with provisions on recommenders and research data access);6
  • the draft Digital Markets Act (with provisions on AI-relevant hardware, operating systems and software distribution);7
  • the draft Machinery Regulation8 (revising the Machinery Directive in relation to AI, health and safety, and machinery);
  • announced product liability revision relating to AI;9
  • the draft Data Governance Act (concerning data sharing frameworks).10


2. Structure and Approach

4

The ‘Act’ is a regulation based on Article 114 of the Treaty on the Functioning of the European Union (TFEU), which concerns the approximation of laws to improve the functioning of the internal market. The proposal mixes the reduction of trade barriers with broad fundamental rights concerns in a structure unfamiliar to many information lawyers, with significant consequences for the space left for Member State action, which we discuss further below. While it may look new, much of the Draft AI Act’s wording is drawn from a 2008 Decision establishing a framework for certain regulations concerning product safety, used in a wide array of subsequent legislation.11 The main enforcement bodies of the proposed AI Act, ‘market surveillance authorities’ (MSAs), are also common in EU product law. All this brings a range of novelties and tensions which we will explore.

5

The Commission distinguishes different risk levels of AI practices, which we adapt into four categories for analysis: i) unacceptable risks (Title II); ii) high risks (Title III); iii) limited risks (Title IV); and iv) minimal risks (Title IX). We cover each in turn, except for minimal risks, where Member States and the Commission merely ‘encourage’ and ‘facilitate’ voluntary codes of conduct.12 Finally, we turn to broader themes raised by the Draft AI Act, in particular the important questions of pre-emption and the residual competences of Member States, and of enforcement.


II. Title II: Unacceptable Risks

6

Unacceptable risks attract outright or qualified prohibitions in the Draft AI Act. Whether the AI Act would contain prohibited practices has been a matter of controversy. In 2018, the Commission set up a ‘High-Level Expert Group on AI’ to advise on its AI strategy. Members soon described industry pressure that led to the group dropping terms including ‘red lines’ and ‘non-negotiable’ from their policy recommendations.13 A leaked version of the Commission White Paper on Artificial Intelligence contained a moratorium on facial recognition, later controversially expunged from the final version.14

7

The Commission’s proposal contains four prohibited categories. Three are prohibited in their entirety (two concerning manipulation, one concerning social scoring); the fourth, ‘real-time’ remote biometric identification systems, is prohibited except for specific law enforcement purposes, and then only if accompanied by an independent authorisation regime.


1. Manipulative Systems

8

Two prohibited practices purport to regulate manipulation.15


(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; (emphases added)

9

In briefings on the prohibitions, the Commission has presented an example for each. They border on the fantastical. A cross-over episode of Black Mirror and the Working Time Directive exemplifies the first: ‘[a]n inaudible sound [played] in truck drivers’ cabins to push them to drive longer than healthy and safe [where] AI is used to find the frequency maximising this effect on drivers’. The second is ‘[a] doll with integrated voice assistant [which] encourages a minor to engage in progressively dangerous behavior or challenges in the guise of a fun or cool game’.16

10

These provisions jar with a common understanding of manipulation. Manipulation can be understood through four necessary, cumulative conditions: the manipulator intentionally but covertly makes use of another’s decision-making to further their own ends by exploiting some vulnerability (understood broadly).17 The Draft AI Act’s provisions echo some of these conditions. The Draft AI Act requires intent (‘in order to’). It is limited to certain vulnerabilities, either caused by ‘age, physical or mental disability’ or exposed through ‘subliminal techniques’. If reliant on subliminal techniques, the practice must be covert (‘beyond a person’s consciousness’). However, the final trigger is not whether a would-be manipulator’s own ends are furthered, but whether the activity ‘causes or is likely to cause that person or another person physical or psychological harm’. This heavily limits the provisions’ scope.


a) The Harm Requirement

11

Manipulative AI systems appear permitted insofar as they are unlikely to cause an individual (not a collective) ‘harm’. This harm requirement creates a range of problematic loopholes. A cynic might feel the Commission is more interested in the prohibitions’ rhetorical value than in their practical effect.

12

In real life, harm can accumulate without a single event tripping a threshold of seriousness, making it difficult to prove. (...)
