EU AI Law may return Europe to the Middle Ages

Last Wednesday, June 14, the European Parliament approved the initial draft of the future AI Law of the European Union (EU) (1, 2), which at the end of the year must be finally ratified by the triad of the European Council, the European Commission, and the European Parliament itself, with no substantial changes to the legislative text expected (3). Much is said about this Law's zeal in defending the rights and values of the EU against the potential threats posed by the development of Artificial Intelligence (AI), especially in terms of human oversight, security, privacy, transparency, discrimination, and social and environmental well-being, which represents a commendable legislative initiative. Yet, beyond its inspiring humanist spirit, it should be noted that all that glitters is not gold (at least in practical terms). Let me explain:

First, it must be pointed out that the EU AI Law is not the first in the world, as we European ethnocentrists boast when presenting the new Law to public opinion: both China (4) and Russia (5), to give just two examples, already have their own AI laws. Another matter is that their regulations defend rights and values antagonistic to those of the European community, since, as we know, ethics is not universal but geographical and cultural.

And second, although I have previously written about the new Law in relation to the EU's actual capacity to impose it on the rest of the world (6) (an alleged capacity I seriously doubt, given the technological vacuum that AI companies will foreseeably create around the European continent in response to a restrictive community regulation, with the consequent technological, and therefore economic, impoverishment of the euro zone compared to the rest of the AI-developed world), on this occasion I would like to reflect on an aspect of the Law that is rarely mentioned and that I consider essential for the future development of AI in the EU: its fierce battle, more than mere suspicion, against the working methodology known as Open Source.

Let us put things in context: the new EU AI Law, in its desire to guarantee transparency and security in AI development, establishes an obligation to create controlled (that is, non-open) environments in which to test AI before its deployment. This obligation is directly inspired by the EU's earlier cybersecurity standards for more secure hardware and software products (7), known as the Cyber Resilience Act (8) of 2022. The rule, established in Article 28b of the new AI Law (9, 10), obliges providers of AI systems, among others, to meet the following requirements:

- Carry out a prior assessment of the impact on fundamental rights.

- Guarantee the quality and representativeness of the data used to train and validate the systems.

- Ensure proper traceability and documentation of processes and results.

- Implement technical measures to guarantee the robustness, precision, and security of the systems.

- Provide users with sufficient information about the characteristics, capabilities, and limitations of the system.

- Enable mechanisms to supervise, control, and intervene in the operation of the system.

- Comply with the rules on civil liability for damages caused by a defective system.

- Etc.

At first glance, these requirements may seem not only acceptable but also enforceable, but the truth is that in practice they are as abstract and subjective as they are difficult to meet, since AI innovation and development processes worldwide are increasingly carried out under the Open Source model. This is shown by the 2022 study "The State of Enterprise Open Source" (11), which reports that Open Source is currently used by 71 percent of the AI industry, and that 82 percent of the leading companies in the sector choose suppliers that work with Open Source over those that do not.

As an explanatory note for those unfamiliar with the matter, Open Source is a working method that allows free, collaborative access to the source code of a computer program, permitting its modification, improvement, and distribution by anyone. In fact, Open Source represents one of the most important movements in contemporary global technology, since it accelerates AI innovation across sectors by promoting the public exchange of research and existing code, which rapidly expands global knowledge of AI and enhances what I particularly like to call Collective Intelligence (12). So much so that a large part of the code used by AI companies, large technology companies included, is obtained from public repositories (such as public code libraries). That is, it evolves from code already created by others.

Let us look at an example of the inconsistency and incompatibility between the requirements of the new EU AI Law and the standardized use of Open Source in the AI sector. If we take one of the rules listed above as a case study, "Comply with the rules on civil liability for damages caused by a defective system", we can deduce that an Open Source provider (be it OpenAI, Microsoft, or IBM, among other less well-known ones) will from now on bear potential civil liability (with fines of up to 30 million euros) for damages caused by a third party that misuses the source system, by action or omission, contrary to the indications for its proper functioning. Which, as seems evident, discourages even the bravest from using Open Source from the start.

But it does not stop there, since the new EU AI Law expressly prohibits the use of any Open Source generative system on European territory. That is, the new Law is not only contrary to Open Source; it goes against the very evolutionary Principle of Reality of AI. Or, put another way, the AI Law has the potential to return Europe to the Middle Ages (in the sense of taking us back to a dark age for knowledge) by contrast with the rest of the technological world, which allows itself to evolve collaboratively in AI. For prohibiting the practice of Open Source is comparable to prohibiting the academic publication of scientific discoveries, in any field of study, for the benefit of the research community as a whole.

The pertinent question can only be: why this community regulation against Open Source? Personally, I find only one logical answer: the ignorance of European legislators in assuming, erroneously, that current AI software is proprietary rather than open (a notion outdated since the 1970s), and therefore unmodifiable and with a specific, identifiable party responsible for it. Nothing could be further from reality. Otherwise, we would have to conclude that the regulation is the result of a deliberate action to expel all Open Source projects from the European market under pro-proprietary biases. However, regardless of the real motivation behind this nonsense, the truth is that this regulatory stumbling block against the practice and use of Open Source (whose regulation, though necessary, is not at all well resolved) is on its way to turning the old continent into an AI technology desert, with development banned for its own and entry banned for others. And if not, time will tell, absent a rectification.

Yes, as one of the rapporteurs for the new EU AI Law rightly said after the approval of the new community regulation in the middle of this month: "today all eyes are on us" (Brando Benifei, S&D, Italy). Although yours truly would add that the world is watching us wide-eyed, observing in disbelief how Europe sets itself on fire in the purest style of Nero.

References

(1) Press conference by Roberta Metsola, President of the EP, Brando Benifei and Dragoş Tudorache, rapporteurs, on the plenary vote on the AI Law. Multimedia Centre, European Parliament, June 14, 2023 https://acortar.link/nxRmlK

(2) MEPs are ready to negotiate the first rules for a safe and transparent AI. European Parliament, June 14, 2023 https://acortar.link/a9Os2l

(3) AI Act: one step closer to the first rules on Artificial Intelligence. European Parliament, May 11, 2023 https://acortar.link/y1QvC7

(4) Administrative Measures for Generative Artificial Intelligence Services. State Internet Information Office. Government of China, April 11, 2023 https://acortar.link/4DEcRU

(5) Basic Principles of the Development and Use of AI. Decree of the President of the Russian Federation, October 28, 2019 https://acortar.link/N9Gmzu

(6) Can Europe force its new AI law on the rest of the world?. Jesús A. Mármol. Medium, May 17, 2023 https://acortar.link/bi6cLD

(7) EU cybersecurity standards to ensure more secure hardware and software products. European Commission, September 15, 2022 https://acortar.link/D5gkoD

(8) Cyber Resilience Act. European Commission, September 15, 2022 https://acortar.link/p5VJWS

(9) EU AI Law — Obligations of the provider of a foundation model. Art. 28b, p. 39. European Parliament, May 16, 2023 https://acortar.link/CG9qqS

(10) EU AI Law — Amendments approved by the European Parliament. European Parliament, June 14, 2023 https://acortar.link/IJLESo

(11) The state of enterprise open source. Red Hat, February 22, 2022 https://acortar.link/pwWUse

(12) Collective intelligence creates millions of combinations of best possible realities. Jesús A. Mármol. A Seeker’s Log, June 15, 2015 https://acortar.link/jigrbq
