The Draft EU Artificial Intelligence Act
— the Perspective of AI and Legal Tech Entrepreneurs


Martin Köhler (Attorney), Adrian Locher (CEO of Merantix, AI Campus Berlin)
Berlin, February 13, 2022

With the AI White Paper¹ and the Draft of the Artificial Intelligence Act², the EU Commission has laid the foundation for a European AI strategy. It is the Commission’s response to repeated calls from the European Parliament and the European Council to ensure a well-functioning internal market for AI systems, where both the benefits and the risks of AI are adequately addressed at European level.

While national standardisation bodies have for several months been working intensively on the development of standards³ for implementing the Draft AI Act’s requirements⁴, the EU Parliament has only recently started negotiations on the Draft⁵.

This article outlines the objectives of the Draft AI Act (Section I.) and the Commission’s regulatory approach (Section II.), as well as the authors’ perspective on the challenges arising from this Act, notably for startups with a focus on AI applications (Section III.).

I. Objectives of the Draft AI Act

The AI White Paper, published in February 2020, set out the policy options on how to achieve the twin objective of promoting the uptake of AI and addressing the risks associated with certain uses of such technology. According to the Commission, the paper is based on EU values and fundamental rights and aims to give users the confidence to embrace AI-based solutions, while encouraging businesses to develop them. Based on the White Paper, the Commission put forward the Draft AI Act as a proposal for a regulatory framework on Artificial Intelligence in April 2021, which is intended to support “the objective of the Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council and ensures the protection of ethical principles”.

The Draft AI Act is in many respects reminiscent of the EU General Data Protection Regulation (GDPR)⁶, which became effective about four years earlier. In both cases, the Commission has set itself genuinely noble goals, namely to protect EU citizens from risks typical of the post-industrial 21st century. However, there are fundamental differences in the regulatory approaches: While the GDPR is a prohibition law with reservation of permissions⁷, in principle forbidding any processing of personal data unless the relevant measures are expressly permitted, the Draft AI Act only prohibits certain scenarios and makes the use of AI systems in other areas of application dependent on certain prerequisites. It remains to be seen, however, whether the AI Act, once in force, will prove well-balanced, without overly far-reaching prerequisites for AI applications usually regarded as “ordinary”, and with enough room left for innovation.

II. Regulatory Approach

In general, for any new disruptive technology that potentially bears severe risks, there are two extreme approaches to dealing with the potential threats: either a total ban on the technology’s development, or the stance “if we don’t get hold of it and use it, others will”⁸. In the case of the Draft AI Act, the EU Commission has apparently tried to find the golden mean between these extremes by choosing a risk-based approach: the prohibitions and requirements set forth by the Draft AI Act are linked to the risks that the respective AI system is believed to cause. For this purpose, a distinction is made between “unacceptable risks”, “high risks” and “low or minimal risks”; the higher the risk, the more far-reaching the requirements or prohibitions.

1. AI systems that are believed to present unacceptable risks are strictly prohibited.

According to the Draft AI Act (Art. 5), this shall apply to

  • subliminal influence on the behaviour of a person or the exploitation of a person’s vulnerability due to age, physical or mental disability, in either case in order to materially distort the person’s behaviour in a manner that is likely to cause physical or psychological harm,
  • the use of AI systems by public authorities for an evaluation or classification of the trustworthiness of natural persons (“social scoring”) leading to certain detrimental or unfavourable treatment, and
  • real-time biometric identification in public spaces, unless necessary for victim search, prevention of attacks and prosecution of a perpetrator or suspect of a criminal offence.

2. High-risk AI systems must meet strict requirements before being put into service.

The Draft AI Act names several products and systems that shall be considered high-risk AI systems.

  • On the one hand, this shall apply to AI systems that are products covered by certain other EU harmonisation regulations, or that are intended as safety components of such products, for instance lifts, medical devices or civil aviation security (Art. 6(1), Annex II).
  • On the other hand, this applies to AI systems intended to be used in the areas listed in the Draft AI Act itself (Art. 6(2), Annex III), namely for the biometric identification and categorisation of persons; for critical infrastructures (for instance, traffic control); for educational and vocational training; for recruitment, promotion, task allocation and performance evaluation in work relationships; for the evaluation of creditworthiness, of eligibility for public benefits or of priority concerning emergency services; for law enforcement and assistance to judicial authorities; and for migration, asylum or border control.

For such systems the Draft AI Act imposes a whole series of legal obligations, namely:

  • the implementation of a risk management system (Art. 9);
  • data governance with regard to training, validation and testing data, in particular high data quality, which shall notably prevent discriminatory output of the AI system (Art. 10);
  • technical documentation of the AI system, including inter alia documentation of the intended purpose (Art. 11, cf. Annex IV);
  • automatic record-keeping of events (‘logs’) during the system’s operation, especially to ensure the traceability of the system’s functioning throughout its lifecycle (Art. 12);
  • sufficient transparency of the system’s operation to enable users to interpret its output, and the provision of certain information to the user, e.g. the intended purpose and the achieved level of accuracy, robustness and cybersecurity (Art. 13);
  • human oversight (Art. 14); and
  • “an appropriate level” of accuracy, robustness and cybersecurity throughout the system’s lifecycle (Art. 15)⁹.
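To give the record-keeping duty of Art. 12 a concrete shape, the following is a minimal sketch of how a provider might append every inference event to a structured, append-only log; the log format and all field names are our own assumptions for illustration, not anything prescribed by the Draft AI Act:

```python
# Hypothetical sketch of Art. 12-style automatic record-keeping: every
# prediction is appended to a structured log so that the system's behaviour
# stays traceable over its lifecycle. All names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_system_events.jsonl"  # append-only event log (JSON Lines)

def log_inference_event(model_version: str, input_data: str, output: str) -> None:
    """Append one inference event with enough context to reconstruct it later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashing instead of storing the raw input: traceability without
        # retaining personal data in the log itself.
        "input_sha256": hashlib.sha256(input_data.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_inference_event("v1.3.0", "applicant profile ...", "score: 0.72")
```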

3. Finally, the Draft AI Act addresses AI systems with “low or minimal risks”.

For these, the Draft AI Act either establishes transparency obligations (in particular, providers of certain AI systems must inform users that they are interacting with an AI system, cf. Art. 52) or imposes no restrictions at all.

In addition, providers of non-high-risk AI systems are encouraged to create codes of conduct intended to foster the voluntary application of the requirements applicable to high-risk AI systems, or of additional requirements relating, for example, to environmental sustainability and the diversity of development teams.

III. Challenges

As the outlined prerequisites and obligations are framed in abstract terms, they will obviously require interpretation by practitioners and courts, which creates considerable legal uncertainty for the developer of an AI system or its “provider” (in the terminology of the Draft AI Act). Until a prevailing interpretation of these terms has been established, in some cases it might even be difficult to classify the respective AI system as a high- or low-risk system. In case of doubt, the developer will have to act under the assumption of a high-risk system in order to be on the safe side and avoid a potential violation.

Even if these requirements are clarified, their implementation will pose major challenges for companies. In particular, one can expect that it will take great effort to meet the strict requirements for high-risk AI systems (which, according to Prof. P. Glauner, THD, are “comparable to the operation of nuclear power plants”¹⁰). This is especially true for small and medium-sized companies with limited resources for research and development, which often operate under strong competitive pressure at a global level. Especially for startups, which must be very cost-effective and whose “raison d’être” in comparison to long-established corporates is rapid product development, the documentation standards (Art. 11) as well as the registration and reporting requirements (Art. 60, 62) might result in a considerable competitive disadvantage. For startups that are in the evaluation or testing phase and pursue an agile development style, it will be particularly challenging (at least where a high-risk AI system is concerned) to document for every version of the AI system (once put into service for first use by the user or for the provider’s own use) how the system might (i) interact with hardware and other software, (ii) be placed on the market or (iii) be put into service, and what “the key design choices including rationale and assumptions made” are¹¹.
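One way of coping with this burden in an agile process might be to record the relevant Annex IV items for each released version in machine-readable form, so that the documentation can be generated alongside the release itself. The sketch below is purely our own illustration; the field names are shorthand, not the official Annex IV wording:

```python
# Hypothetical sketch: per-version documentation records covering a few of
# the Annex IV items referenced above ((i)-(iii) and key design choices).
# Field names are our own shorthand, not the official Annex IV wording.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class VersionDocumentation:
    version: str
    intended_purpose: str
    interactions: list[str]   # (i) interaction with hardware/other software
    placed_on_market: bool    # (ii) whether this version is placed on the market
    put_into_service: bool    # (iii) whether this version is put into service
    design_choices: dict[str, str] = field(default_factory=dict)  # choice -> rationale

doc = VersionDocumentation(
    version="0.9.2",
    intended_purpose="ranking of job applications for interview shortlisting",
    interactions=["REST API consumed by third-party HR software"],
    placed_on_market=False,
    put_into_service=True,
    design_choices={"gradient-boosted trees": "tabular data, small sample size"},
)
print(json.dumps(asdict(doc), indent=2))  # one documentation artefact per release
```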

Furthermore, the fact that it is difficult to ensure that an AI algorithm is not somehow biased (even unconsciously and unintentionally) poses a major problem for developers of AI systems under the AI Act. Biased training data can quickly lead to discrimination against certain users or user groups, for instance persons who belong to a certain social group in terms of their age, gender or skin colour. This is especially true in the context of unsupervised machine learning¹². Since such discrimination is per se prohibited under the Draft AI Act, the developer would be committing a violation.
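To make this concrete, the following sketch computes one simple fairness indicator, the gap in favourable-outcome rates between two groups (often called the demographic parity difference). The Draft AI Act does not prescribe any particular fairness metric, so both the metric and the toy data here are our own assumptions:

```python
# Hypothetical bias check: compare the rates of favourable decisions (1)
# between two groups defined by a protected attribute such as age.
# The metric and the toy data are illustrative assumptions only.
def positive_rate(outcomes: list[int]) -> float:
    """Share of favourable (1) decisions among 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Model decisions split by age group (1 = favourable):
older_applicants = [1, 0, 0, 0, 1, 0, 0, 0]    # 25% favourable
younger_applicants = [1, 1, 0, 1, 1, 1, 0, 1]  # 75% favourable
gap = demographic_parity_difference(older_applicants, younger_applicants)
print(f"demographic parity difference: {gap:.2f}")  # 0.50 -- a clear warning sign
```

A large gap does not by itself prove unlawful discrimination, but it is the kind of signal a provider would have to investigate under the data-governance duties of Art. 10.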

A way out might be the so-called regulatory sandboxes, which shall establish a “controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service” (Art. 53). The Draft AI Act “encourages national competent authorities to set up regulatory sandboxes” as one of the “[m]easures in support of innovation” (Title V of the Act)¹³. However, the sandbox procedure shall take place under the direct supervision and guidance of the national authorities (which in turn shall coordinate their activities with the European Artificial Intelligence Board). This appears to suggest that both the AI system and the data sets (and consequently the intellectual property) must be shared and discussed with the relevant authorities, which is, in any case, likely to require a rather big investment of time and personnel on the part of the developer; both are resources a startup is typically short of. Moreover, while the authorities’ supervision shall ensure compliance with the requirements of the AI Act (cf. Art. 53 point 1), the participants in the regulatory sandbox shall remain liable for any harm inflicted on third parties as a result of the experimentation taking place in the sandbox.

Considering that the EU Commission has chosen an extremely wide definition of “AI systems” as the basis for the Draft AI Act, the aforesaid would not only apply to startups whose clear aim is to develop an AI system. Since the draft defines AI to include software developed using “statistical approaches, Bayesian estimation, search and optimization methods”¹⁴, many systems that, according to common sense, do not seem artificially intelligent would fall within the scope of the Act, such as, for instance, a rather simple product recommendation algorithm within a shopping portal using the technology of the late 1990s. Perhaps the EU Commission should examine whether the threat to EU citizens really originates from the application of such systems, rather than from the consequences that large companies (and governments) from outside the EU, for instance from the U.S. and China, would impose on citizens if they ultimately win the AI race.
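To illustrate how wide this definition is, consider the following deliberately naive, late-90s-style recommender, a sketch of our own with invented data: it does nothing more than count which products are bought together, yet as a “statistical approach” it would arguably qualify as an AI system under the draft:

```python
# Deliberately simple, 1990s-style recommender: count co-purchases and
# suggest the most frequent companion product. Pure counting, yet arguably
# a "statistical approach" within the draft's wide AI definition.
# The purchase data is invented for illustration.
from collections import Counter
from itertools import combinations

orders = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card"},
    {"camera", "camera bag"},
]

co_purchases: dict[str, Counter] = {}
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases.setdefault(a, Counter())[b] += 1
        co_purchases.setdefault(b, Counter())[a] += 1

def recommend(product: str) -> str | None:
    """Return the product most often bought together with `product`."""
    counts = co_purchases.get(product)
    return counts.most_common(1)[0][0] if counts else None

print(recommend("camera"))  # -> 'memory card'
```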

We can conclude that the EU Commission has pursued honourable goals in drafting the AI Act; but unless further adjustments are made, the AI Act will put startups at a tremendous disadvantage compared to “grown-up” corporates and especially the “big players” within the Act’s area of application; and it will in particular cause a massive competitive disadvantage in comparison to startups that develop and test their AI systems outside the Act’s area of application, namely startups in the US or China.

In this regard, we don’t see how the Draft AI Act will meet its objective of encouraging businesses to develop AI systems. Rather, from the perspective of European AI tech companies and especially startups, the Act in its current version adds further uncertainty to an already highly dynamic and challenging technology sector. As a result, the achievement of the European Union’s overall goal of becoming a global leader in the development of artificial intelligence seems rather doubtful.

References
[1] European Commission, White Paper on Artificial Intelligence — A European approach to excellence and trust, COM(2020) 65 final, 2020.

[2] European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence, COM(2021) 206 final.

[3] In particular based on Art. 40 of the Draft AI Act: “High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.”

[4] In Germany, the joint body of the German Institute for Standardisation (DIN) and the German Commission for Electrical, Electronic & Information Technologies (DKE) officially opened the second stage of the standardisation roadmap on 20 January 2022; cf. https://din.one/pages/viewpage.action?pageId=33620030.

[5] After a competence dispute had blocked the start of the negotiations within the European Parliament, the lead committees IMCO (Internal Market and Consumer Protection) and LIBE (Civil Liberties, Justice and Home Affairs) held a first joint meeting on 25 January 2022; for details cf. https://table.media/europe/en/news-en/european-parliament-starts-negotiations-on-ai-regulation/.

[6] Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[7] In German: Verbotsgesetz mit Erlaubnisvorbehalt

[8] We are aware of the fact that this argument was made for the construction of the atomic bomb, in particular by the physicists of the US Manhattan Project under the leadership of J. Robert Oppenheimer. Nor did we come up with this argument for the field of AI; according to Max Tegmark in his bestseller “Life 3.0”, 2018, p. 3, one of the earlier books on the age of Artificial Intelligence, it is made by the so-called Omega Team.

[9] According to the Draft Act, robustness (which has quickly become something of a buzzword) means that the AI system (namely a high-risk AI system) shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to its interaction with natural persons or other systems; this may be achieved through technical redundancy solutions, which may include backup or fail-safe plans (cf. Art. 15 point 3).

[10] Glauner, Written statement for the Joint Meeting of the Committees for European Union Affairs of the German Bundestag and the French National Assembly on AI Regulation, held on 6 May 2021, https://www.glauner.info/expert-evidence.

[11] Cf. Article 11 point 1 in connection with Annex IV.

[12] In the case of unsupervised machine learning (unlike supervised machine learning), the algorithm is not trained on the basis of a training data set with so-called labelled examples; rather, the algorithm looks for hidden patterns within unlabelled examples in a data set in order to cluster or otherwise model the distribution of the provided data.

[13] See Section 5.2.4 of the Draft Act’s Explanatory Memorandum.

[14] Cf. lit. (c) of Annex I (“Artificial Intelligence Techniques and Approaches”), to which point 1 of Art. 3 (“Definitions”) of the Draft AI Act refers.
