The trouble with legislating AI

Johan Loeckx
Artificial Intelligence Lab Brussels
Apr 22, 2021

When I look back at the research performed at our university AI laboratory during the last 40 years, I see a lot of positive impacts.

Unfortunately, the mainstream press too often mentions the same handful of applications, which might make us forget about the massive benefits that AI could bring. The voices of a few receive too much attention in the media, and the discussion centres too much around profit.

The majority of researchers I know, the people who have actually been building the technology rather than just using it, are quite resolute about the need for a clear framework to ensure AI is used for the common good.

The newly established Brussels AI for the Common Good - https://fari.brussels

And therefore I think the EU’s attempt to legislate AI is laudable. Having the courage to face such a challenge shows a commitment to the future and really deserves praise. The EU is not trying to avoid the issue of how to regulate AI but to tackle it head-on. Also, the early involvement of stakeholders from government, academia and industry reflects a down-to-earth approach with room for discussion and adaptation.

As the potential benefits and uses of AI are quasi-endless, legislation may well prove to be the right instrument to let us focus on the right things.

However, though I agree with the spirit of the EU’s efforts, I wish to express some concerns that are intrinsic to regulating technologies like AI.

An ecosystem in full development

To start, the AI ecosystem — the complex web of interacting partners & organizations — is still in full development. Just like at the beginning of the web, new job functions are emerging and new kinds of companies are being created. We are really in the early, foundational years of AI.

For example, the once singular job of “AI expert” is being specialized into “data scientist”, “data analyst”, “data engineer”, “AI engineer” and “MLOps engineer”. Companies, too, are specializing ever more in specific parts of the process of designing, implementing and deploying AI solutions. New applications of and roles for AI are being invented every day, each with its own challenges. It is thus very hard to foretell how things will develop.

Which makes me wonder…

Is the EU going for ethics awareness instead?

Honestly, I don’t think the real intention is to introduce AI regulation just yet. There are too many uncertainties and complexities.

Of course, it is not an option to wait and develop this kind of legislation in isolation, from an ivory tower. It is better developed “in situ”.

The EU has obviously realized that the proof of the pudding is in the eating, and decided to come out early with legislation to give it the confrontation with the real world that it needs to learn from.

To me it feels more like a signal to the world that the EU is not following the course of the US or China, but taking ethics seriously.

The effect should not be underestimated. The experience of publishing the GDPR has taught us that, even without massive enforcement, a significant behavioural shift can be established. Anyone collecting data will now first worry: “What about GDPR?”

AI is a general-purpose technology

It is hard to predict upcoming challenges because AI is a so-called general-purpose technology. Algorithms implement generic tasks that are applied to data or knowledge to increase their value.

A useful metaphor is that of a motor. Motors power many different appliances like cars, trimmers, refrigerators, drills and elevators. In this sense, regulating AI is like trying to regulate a motor, a piece of equipment that can be used in many circumstances.

Just like motors, algorithms do not serve a single purpose, nor do they realize in what context they operate. They are agnostic: they act on data that has no meaning to the machine.

This is at the same time the power and the limitation of AI. Because data can take on any meaning, algorithms are universal. The same algorithm can act on weather data, personality profiles, nuclear missiles, peacekeeping missions or anything you can represent in a computational model. How do you regulate AI in a way that applies to all of these fields simultaneously?
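
To illustrate this agnosticism concretely, here is a minimal sketch, with made-up numbers and assuming scikit-learn’s KMeans (neither of which the article prescribes), in which the very same clustering code is run first on “weather” measurements and then on “personality” scores. To the algorithm, both are just numeric matrices:

```python
# A minimal sketch (hypothetical data) of how one and the same algorithm is
# agnostic to what its inputs mean: k-means clustering applied first to
# weather measurements, then to personality-test scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# "Weather" data: rows are days, columns are (temperature in °C, humidity in %).
weather = rng.normal(loc=[15.0, 70.0], scale=[8.0, 10.0], size=(200, 2))

# "Personality" data: rows are respondents, columns are two trait scores (0-1).
personality = rng.uniform(0.0, 1.0, size=(200, 2))

# The exact same code path handles both: to the algorithm they are simply
# matrices of numbers, not weather or people.
for name, data in [("weather", weather), ("personality", personality)]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    print(name, "cluster sizes:", np.bincount(labels))
```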

The problems occur in domains beyond AI

The problems addressed by the EU legislation of course already exist in other engineering disciplines. They happen when algorithms act autonomously, without a human in the loop. Think of an assisted braking system that assumes specific conditions (established in simplified lab settings) to hold in the real world.

But even systems designed with a human in the loop are not free from these problems. Any system built on knowledge acquired directly or indirectly from data, for example PhD research based on focus groups with a biased sample of participants, can have the same issues. Bias could sneak into the product. A good example is seat belts, which were designed with (Caucasian) males in mind.

Which leads us to the question: where do we draw the line? Will companies still want to claim that they have AI-related technologies embedded in their products?

Economies of scale

There is of course also the issue of enforcing the law, and the imbalance between companies in their capability to comply. Again, in parallel with GDPR, the regulation may well benefit bigger firms that have the scale and capacity to invest in compliance, leading to even further polarization. Or it may become an extra tool in legal battles to gain market share or put pressure on a competitor (just like patent wars) by claiming that another firm violates the legislation.

Although open-source libraries and cloud computing have democratized AI technologies, in reality there is still a high entry barrier as you need access to talent and data.

And there is clearly the economic equation. As is often stated, introducing legal barriers may have a negative impact on the competitiveness of European firms, as their operations will be more expensive or organised in a less optimal way. Of course, this may well be a price we are willing to pay for the protection of our civil rights & liberties.

Solutions should always involve governance

The sandbox idea in the law is promising, but could be extended in my opinion: any technological innovation framework should involve governance. This is even more true for AI and data-driven products.

With governance, I mean that the necessary supporting processes should be in place to manage and control the quality & integrity of the data that serves as the algorithm’s inputs, and of its outputs. I will give a small example to illustrate the challenge of this task.

Take as an example a decision support system that estimates the number of people in a room based on temperature. Because people radiate heat, room temperature could be used as a proxy for the number of people in that room.

This very shallow view of the world introduces many potential problems when combined with algorithms without human oversight or governance, because these techniques assume a closed-world model. This is the assumption that all information needed to draw a conclusion (i.e. the number of people in the room) is present in the world model (i.e. the room temperature).

This is, of course, clearly not the case. When trained on adults, the system will show bias and discriminate against children or women because they radiate less heat: the body size of the room’s occupants is not accounted for because it is not in the world model. Or imagine that the windows are replaced with more energy-efficient versions. This will of course impact the estimates made by the model, and therefore the decisions taken on the basis of those estimates.

At this point, the model should be adapted to this new context. The feedback loop should be closed, through proper governance.
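
To make the example tangible, below is a minimal, hypothetical sketch of such a proxy-based estimator; the numbers and the simple linear relation are invented for illustration. It shows how a change of context, like the window replacement above, silently shifts the estimates, and how a basic governance check against an occasional manual head count can flag the drift and close the feedback loop:

```python
# A hypothetical toy model: each adult raises the room temperature by ~0.3 °C
# above an 18 °C baseline. All figures are made up for illustration.
BASELINE_C = 18.0
DEGREES_PER_PERSON = 0.3

def estimate_occupancy(temperature_c: float) -> int:
    """Closed-world estimate: temperature is the only input the model knows."""
    return max(0, round((temperature_c - BASELINE_C) / DEGREES_PER_PERSON))

# Before the renovation: 10 adults, old windows leak some heat.
print(estimate_occupancy(18.0 + 10 * 0.3))   # -> 10, as calibrated

# After the renovation, better insulation keeps more heat in, so the same
# 10 people now push the temperature higher; the model overestimates.
print(estimate_occupancy(18.0 + 10 * 0.45))  # -> 15

# A simple governance check: periodically compare estimates with a manual
# head count and flag drift, closing the feedback loop.
def needs_recalibration(estimated: int, counted: int, tolerance: int = 2) -> bool:
    return abs(estimated - counted) > tolerance

print(needs_recalibration(estimated=15, counted=10))  # -> True: adapt the model
```

In a real deployment, the recalibration threshold and the cadence of manual counts would themselves be governance decisions rather than technical ones.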

The danger of using proxies, however powerful they are, is that they oversimplify the world. In our example, the whole concept of a human being is reduced to one single number. This makes the resulting system fragile.

AI and ethics cannot be solved at a purely technical level, because semantics are introduced only once the output of an algorithm is used in a real-life process. It requires sensible governance structures that attempt to assess the impact of “each” business decision and change of context (e.g. rising global temperatures) on the models.

This is clearly an open-ended problem that requires creative and critical minds. This leads us to the last challenge.

The skills needed for a world full of algorithms

We are currently not nurturing the skills that are needed to build, use, understand, maintain or “control” algorithm-based products. In my opinion, our current education focuses heavily on hard skills that can often be readily applied.

AI, however, requires a much wider palette of skills. Thorough mathematical skills and computational thinking are of course essential. But assessing the impact of algorithms on their environment, and vice versa, the impact of the environment on the output of algorithms, calls for holistic thinking and an understanding of the philosophical, mathematical and implementation levels.

We need interdisciplinary teams of creative, open-minded but critical thinkers who master the philosophical, mathematical and implementation-related foundations.

Let us start with that!


Professor @ Artificial Intelligence Lab Brussels (VUB), leading the applied R&D team and lifelong learning efforts. Passionate about music, education and AI.