EU AI Act: victory for fearfulness

Europe celebrates being the first continent to regulate AI. The EU AI Act packs in a great many technology fears. It is hard to see in it the framework conditions under which Europe could develop positively as a technology driver.

Michael Hafner
Dec 10, 2023

It could have been an exciting story — but nobody wanted to make it exciting. The discussion about the EU AI Act dragged on for a long time, new aspects kept popping up and then last year’s AI hype seemed to throw everything out of kilter.

Lobbyists lined up, Google tried to protect the European Union from making critical mistakes (at least that’s what Google lobbyists wrote in memos addressed to internal audiences), and Integrity Watch lists a total of over 210 lobbyist meetings in the EU Parliament regarding the AI Act.
What was at stake, how should things proceed, what was actually still missing for a decision? I asked the rapporteur Dragos Tudorache these questions a few months ago and secretly hoped for an exciting story about conflicting interests, new aspects and political disputes with complex, still-developing technology.

However, Tudorache’s answer could not have been more boring and evasive: the AI Act is going through the EU legislative process, will be discussed further and voted on as soon as possible. Period.
Now the agreement is apparently on the table. And what was so difficult about it?

At first glance, the results are a little confusing. The most relevant aspects, which most media reports now also refer to, concern the protection of citizens from surveillance and manipulation — topics we have been familiar with since the debates on data retention, the GDPR and WhatsApp encryption and decryption. Biometric identification systems, accordingly, are not to be usable without restrictions. Social scoring remains undesirable in the EU. Citizens will be able to complain.
And what else?
I have leafed through the Act; here are a few points that particularly caught my eye. The AI Act highlights many of the problems with technology legislation in general.

Law describes abstractions, technology establishes facts

Legislation is geared towards desired conditions and aims to create the framework that makes those conditions possible. Technology, on the other hand, creates facts, and it can do so intentionally or unintentionally. Sometimes projects are conceived and implemented as planned. Sometimes technology, or a by-product of technical development, creates conditions that set other processes in motion or produces results that can be used in quite different contexts. The best encryption systems, for example, make life easier for security authorities when they use these tools themselves, but make their lives hell when others use them. The same applies to decryption systems. Where should legislation start here?

This ambiguity runs through the AI Act. Developers of high-risk AI applications are supposed to rule out the possibility that these applications can be misused. How is that to be determined? Who is responsible for misuse? Who detects the misuse of a solution that is useful in other contexts? These demarcations make no sense as long as the relationship between technology and society remains unclarified. Does technology set developments in motion that then progress inexorably? Does technology determine society, culture and nature? Or is technology the result of social processes, serving to solve socially relevant problems? One can argue about technological and social determinism for a long time. Recently, the arguments of the techno-determinists have gone somewhat out of fashion; technology has simply remained too benign compared with dystopian science fiction. With AI, however, many friends of cautionary techno-visions have once again found a worthwhile opponent.

AI risk levels as a matter of interpretation

The classification of AI applications into risk levels from minimal to unacceptable is the core of the AI Act. One example of minimal risk explicitly mentioned in the press release is recommender systems, while unacceptable risk includes applications intended to manipulate people and undermine their free will. The latter sounds malicious, but recommender systems are usually friendly shopping assistants that suggest additional products to customers. Behind these recommendations, however, are complex algorithms that sometimes process hundreds of parameters to arrive at a suggestion. And they pursue one goal: to persuade a customer who did not want to buy anything, or who wanted to finish checking out with the current contents of their basket, to do something else. You could call it sophistry to regard recommendations as manipulation of free will. But it is not at all sophistical to note that recommendations are examples of complex technologies that can also form the basis of completely different, less friendly applications.

The same algorithm that determines “if you buy A, you might also buy B” can also determine “because you do A, you are a politically suspicious subject” (the sketch below makes this concrete). I fear it is not a particularly successful demarcation if the line between minimal and unacceptable risk can be crossed so easily.
As abstract definitions, this separation into risk classes, and probably also the developer’s responsibility, are highly problematic and will likely only be pinned down through specific individual cases.
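
To make this concrete, here is a minimal, purely illustrative sketch (the data, labels and framing are invented, and real recommenders and scoring systems are far more elaborate): the same co-occurrence arithmetic, fed once with purchase histories and once with observed behaviour, yields a product suggestion in one case and a “suspicion” score in the other. Only the input and the label change.

```python
# Hypothetical sketch: the same co-occurrence counting routine,
# once framed as a shopping recommendation, once as a "risk" flag.
# All data and labels are invented for illustration.
from collections import Counter
from itertools import combinations

def co_occurrence_scores(histories: list[set[str]]) -> Counter:
    """Count how often pairs of items appear together across histories."""
    pairs = Counter()
    for history in histories:
        for a, b in combinations(sorted(history), 2):
            pairs[(a, b)] += 1
    return pairs

def score(target: str, basket: set[str], pairs: Counter) -> int:
    """Sum the co-occurrence counts between the target and the given items."""
    return sum(pairs[tuple(sorted((target, item)))] for item in basket)

# 1) Friendly use: "If you buy A, you might also buy B."
purchases = [{"coffee", "filters"}, {"coffee", "filters", "mug"}, {"tea", "mug"}]
pairs = co_occurrence_scores(purchases)
print("recommend filters?", score("filters", {"coffee"}, pairs))  # high count -> recommend

# 2) Unfriendly relabeling: identical arithmetic, different input and framing.
observed = [{"attended_rally", "reads_blog_x"}, {"attended_rally", "reads_blog_x", "donated"}]
suspicion = co_occurrence_scores(observed)
print("suspicion score:", score("attended_rally", {"reads_blog_x"}, suspicion))
```

The point of the toy example is not that such a counter is dangerous, but that the regulatory line between the two uses runs through the data and the purpose, not through the algorithm itself.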

Information and labeling obligations: When is something understandable?

Other provisions of the AI Act include numerous information and labeling requirements. Users should know when they are interacting with an AI system, and they should be able to comprehend the extent to which AI contributes to that interaction. This is a challenging and quite enlightening requirement.

Who is responsible for achieving this transparency? Where is the boundary between the developers’ duty to provide information and the users’ duty to seek it out? Who determines what needs to be explained, when something has been explained in sufficient detail, and what form the explanation must take? Will long general terms and conditions now be accompanied not only by equally long data protection declarations, but also by even longer AI disclaimers?

Today, every website operator has a long, automatically generated privacy policy linked in their legal notice without knowing what it actually says and why it is necessary. Users don’t read it. The only beneficiaries of this situation are the webmasters of the law firm websites that provide the data protection generators — and who can now look forward to an unimagined abundance of new backlinks. Will this also be the fate of information about artificial intelligence?

In addition to this general labeling requirement, a separate paragraph calls for a labeling requirement for images, videos and audio generated with artificial intelligence. This again raises similar difficulties of demarcation: How artificial does the material have to be? Do filters or the use of animation programs already count? Does it have to be based on an unadulterated image? Is the intention or the possibility of deception, of confusion with real scenes, required? Or does every fantasy visualization featuring dragons and princesses have to be labeled?

What actually concerns me here: I could not find any reference to a labeling requirement for AI-generated texts. Possible interpretations: Images and videos are considered more effective; images convey more authority. Images are seen as illustrations, and the fact that an external intelligence was involved in every image, even a merely illustrative photograph (an intelligence that determined the moment it was taken, the framing and the publication), is ignored. Images, it seems, happen by themselves, whereas texts are obviously not created by a random arrangement of letters, so whether an artificial or some other intelligence was involved appears less important.

Texts lie even better than pictures

This is a series of misunderstandings. Anyone who has worked with digital image editing tools knows how many processes run uncontrolled even when only simple color corrections are applied; the sketch below illustrates the point. The degree of automation only determines the speed of the changes (and how many images can be processed in what time); the real possibilities are the same. And even experienced users do not know exactly what the software is really doing, whether they are typing commands, clicking buttons or controlling a chatbot in natural language. It is not much different with AI-generated texts. Neither handwritten nor artificially generated texts should be published without carefully checking the text, researching and understanding the arguments and thinking through their consequences. Both happen anyway.
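
A minimal sketch of what even a supposedly simple correction can entail, assuming an sRGB workflow (the pixel values are invented, and real editing tools add color management steps beyond this): the user asks for one multiplication, but decoding, clipping, re-encoding and a lossy re-quantization happen on the side.

```python
# Sketch of a "+10% brightness" correction: the one requested step is the
# multiplication by 1.1; everything else is implicit machinery.
import numpy as np

def srgb_to_linear(c: np.ndarray) -> np.ndarray:
    """Standard sRGB gamma decode."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c: np.ndarray) -> np.ndarray:
    """Standard sRGB gamma encode."""
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def brighten(image_8bit: np.ndarray, factor: float = 1.1) -> np.ndarray:
    normalized = image_8bit.astype(np.float64) / 255.0  # implicit conversion
    linear = srgb_to_linear(normalized)                 # implicit gamma decode
    scaled = np.clip(linear * factor, 0.0, 1.0)         # requested step, plus clipping
    encoded = linear_to_srgb(scaled)                    # implicit re-encode
    return np.round(encoded * 255.0).astype(np.uint8)   # implicit, lossy re-quantization

pixels = np.array([[[10, 128, 250]]], dtype=np.uint8)   # a single invented RGB pixel
print(brighten(pixels))
```

None of the intermediate steps appear in the user’s instruction, which is exactly the sense in which “uncontrolled” processes accompany even trivial edits.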

The different treatment of images and text in the AI Act suggests that either images are credited with more authority over unsophisticated users, or that manipulation in text is considered easier to decipher — or that the problem lies elsewhere entirely and the labeling requirement for AI-generated images misses the mark. AI-generated texts in particular, which settle on the lowest common denominator, repeat what everyone is saying and cannot establish any reference to truth or arguments, can only be checked by people who are particularly well versed in the field the text deals with. It is easier for an AI to learn whether it should draw people with five or six fingers than to decide whether Kant’s idealism can be used to argue for relativism or for realism, or whether a wealth tax will benefit or harm the economy and the social system.

If the labeling was intended as support in the fight against fake news and disinformation, it probably misses the mark. If it was intended as a contribution to raising awareness among broad, less digital sections of the population, it leaves untouched all the gray areas without which AI can hardly be understood as a relevant phenomenon.

US companies and institutions as the strongest co-designers in Europe

From this perspective, I always find it interesting to see who was involved in the development of which regulations and who lobbied how intensively. Publishers and other producers of intellectual property, without which artificial intelligence could not simulate intelligence, were quite reserved. Their core issues seem to revolve primarily around copyrights and licenses. In principle, this can lead to new business models if content is no longer only licensed to distributors but also to tech companies. But it may be a little short-sighted if the results of these licensed productions suddenly become competitors under new market conditions. It remains to be seen what AI will learn from once it has economically destroyed old content-producing intelligence, and whether AI will become the journalist’s best friend, helping him with research, archiving and other tasks.

Integrity Watch has documented 210 lobbyist meetings on the AI Act with the two rapporteurs, with Axel Voss and with other MEPs involved in AI, data and other technology issues. At the top of the list are Google with eight appointments and Microsoft with seven. The American Chamber of Commerce intervened six times, and the Future of Life Institute is also notable with four appointments. By its own definition, the institute works to assess and steer the consequences of technology. Its biggest donors are Skype co-founder Jaan Tallinn and Ethereum co-founder Vitalik Buterin. Tallinn was on the institute’s board for a time, while Buterin — according to the institute’s own transparency page — has no influence on its agenda and priorities. TikTok had two appointments, as did IBM. The Federal Association of German Digital Publishers was content with one appointment.

Some exemptions from the strict regulations are presumably in the interests of all parties involved, even if they have the potential to undermine the Act’s regulations across the board. AI applications in the private and personal sphere and for research and development purposes are exempt from the regulations and risk classes. There are sandbox regulations for them in which experimentation is permitted. Development environments for AI will therefore ultimately become problematic environments similar to laboratories in which dangerous viruses and bacteria are studied. This creates hurdles and, if the rules are actually taken seriously, significantly restricts the circle of those who can and want to afford to research artificial intelligence in the long term. AI development will become a luxury — or will take place outside Europe in future.

As a transitional solution until the Act fully comes into force, tech companies can and should conclude an AI pact with the EU. This pact amounts to a provisional, voluntary recognition of the Act and will hopefully also be used as an opportunity to refine the provisions that are still open to discussion.

At last: an AI authority

What would bureaucracy be without authorities? A separate AI authority is to be created to monitor and further develop the EU AI Act, with its own branches in all EU member states.

A separate authority — this will naturally please Austrians in particular. A few months ago, State Secretary for Digitalization Tursky, who won’t be doing this job for much longer, was still a prominent AI alarmist, calling for urgent regulations and an authority and seriously pretending in interviews that he was driving the EU forward (incidentally, unchallenged by the ORF interviewer).

A few weeks ago, he mutated into a benevolent AI explainer who dispensed with visions of unforeseeable dangers and instead let it be known that AI was already working in many harmless applications. This sudden turnaround, a complete contradiction to his statements a few weeks earlier, also went unnoticed by the interviewer.

Tursky mentioned labeling requirements, wished for a “non-ideological” AI (which is usually the euphemism for an AI that reproduces one’s own ideology) and also confused AI regulations with measures against fake news and disinformation — just as he had apparently read in the draft of the AI Act.

The biggest concern in this constellation of bureaucrats who want to please and a technology that defies bold definitions is that these bureaucrats will sit in the AI authorities to be established, invent new AI regulations and, even worse, work on the operationalization of the existing rules. They will decide whether a labeling requirement has been met, whether the misuse of a risky application has actually been sufficiently prevented and, for an application classified as risky, whether it is actually risky in the first place, whether it serves research or commercial purposes and whether the necessary safety regulations are complied with.

This raises fears of an extensive bureaucratic overhead.
Overall, the AI Act will probably fare much like the General Data Protection Regulation: there are honorable intentions behind it, and there is nothing fundamentally wrong with the individual decisions. Taken together, however, the regulations have the potential to create an army of paper monsters that are of no use to anyone and serve no purpose other than fulfilling the newly created requirements of these very regulations. In practice, they will have little effect beyond creating bureaucratic burdens. And they will not make it any easier to develop AI applications. On the contrary: in future, people will probably think even more carefully about whether development should actually take place in Europe.

One positive effect of the requirements and of the efforts to achieve an ethical European AI could be the development of Europe’s own AI variants, an annoying additional task at the beginning that may pay off over time, much as athletic training at altitude is initially more strenuous but then produces better results. The question is who has the stamina to get through the stressful phases. Even at OpenAI, currently the most successful AI company, the proponents of the non-commercial perspective only recently had the upper hand for a few days in a spectacular leadership dispute, and then had to yield to the advocates of commercial approaches and vacate their own seats.

Seen in this light, the EU AI Act also contains a large dose of fear. The best remedy for this fear is more practical experience with artificial intelligence, more willingness to engage with it in concrete cases. I find both lacking in politics, but also in the media, in art and in many other areas. Bullshitting about the future, for me, does not count as concrete engagement.


Michael Hafner

Journalist, Author, Data- & Audience Manager. In other news: The Junk Room Theory - junkroom.substack.com