EU Ethics Guidelines

Thought Process, Intentions, Prospects

Matthias Peschke
The Startup
7 min read · Dec 30, 2019


The EU Ethics Guidelines were published in April 2019 and remain the subject of controversy, as some believe that focusing on regulating AI at this early stage will be detrimental to the development of the technology. For this reason, it makes sense to take a closer look at the thought process and intentions behind the guidelines to better understand their purpose and evaluate their future prospects.

Ever since the High-Level Expert Group on Artificial Intelligence (HLEG) of the European Commission published the Ethics Guidelines for Trustworthy AI in April 2019, they have been the subject of frequent debate because they address a fundamental question of our time:

Should AI be regulated according to ethical principles that guarantee the benevolent nature of this technology?

Many arguments in favour of this initiative point to the negative impacts that accompany this technological change, such as algorithmic bias and the exploitation of personal data for financial and electoral gain. Counterarguments usually hold that restrictive regulation would only interfere with the development of AI, slow the pace of innovation and create a competitive disadvantage vis-à-vis countries without such restrictions.

Unsurprisingly, the guidelines have received mixed reviews. While they have been praised as a value-oriented step in the right direction that can accommodate future technological developments, criticism centres on their non-binding character, overall vagueness and lack of external oversight mechanisms.

In this context, it makes sense to shed more light on the thought process behind the guidelines to get a better understanding of their intended purpose and to evaluate the impact future legislation could have in this field.[1]

Why establish an ethical framework?

Like previous general-purpose technologies such as steam power and electricity, AI has the potential to contribute substantially to human welfare and well-being. The general idea of an ethical framework, therefore, was to ensure that AI is used for good while, at the same time, promoting its benefits.

The first problem the HLEG encountered was defining what is “good”. Since ethical concepts vary widely and provide conflicting answers, the definition of “Good AI” must rest on a common denominator that is universally accepted. To get there, the HLEG assembled a diverse group of 52 members from industry, academia and civil society, which regularly conferred with the European AI Alliance, a forum where citizens can participate and contribute ideas of their own. The logic behind this effort was that, if such a diverse set of people can agree on something as complex as ethical AI, the outcome can also be accepted by society as a whole.

Since the goal was to develop a human-centric approach that promotes the development of AI as a means to an end, not an end in itself, its foundation had to rest on three components: First, AI must comply with existing laws and regulations, such as the GDPR and consumer-protection law. Second, it should be compatible with ethical principles. And third, it had to be technically robust, meaning that AI systems must remain functional even when input errors occur (see figure 1).

Figure 1: The Basis of Trustworthy AI
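As a brief aside for the technically inclined, the following minimal Python sketch shows one thing the robustness component can mean in practice. The function, its value ranges and its pricing rule are illustrative assumptions of mine, not something the guidelines prescribe:

```python
# A minimal sketch (my illustration, not something prescribed by the
# guidelines) of what "remaining functional when input errors occur"
# can mean in practice: validate inputs and fail safely instead of
# crashing or silently guessing. The pricing rule is a placeholder.
from typing import Optional

def predict_premium(age: float, annual_mileage: float) -> Optional[float]:
    """Toy premium estimate that degrades gracefully on bad input."""
    if age is None or annual_mileage is None:
        return None                        # missing input: signal "no decision"
    if not 16 <= age <= 110 or annual_mileage < 0:
        return None                        # impossible values: refuse, don't guess
    return 300.0 + 0.02 * annual_mileage   # placeholder pricing rule

print(predict_premium(35, 12_000))   # 540.0
print(predict_premium(-5, 12_000))   # None: invalid input handled safely
```

The point is not the toy formula but the behaviour: an invalid input produces a safe, explicit non-decision instead of a crash or a silent guess.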

To decide which ethical principles to adopt, the HLEG looked at five different families of fundamental rights (figure 2, left). These are the same rights that apply to individuals in liberal democracies, making them an ideal blueprint from which to derive ethical principles for a technology with such far-reaching implications for society (figure 2, middle).

The first ethical principle requires AI systems to respect human freedom, which can be undermined when personal data is used maliciously. We have already seen this unfold in the 2016 US presidential election, where personal data from social media accounts was used illicitly to target crucial voters in decisive constituencies with tailor-made yet misleading ads, nudging them toward choices they might not otherwise have considered.

Next, algorithms can be harmful and discriminatory in many ways, which is why the prevention of harm and fairness each constitute an ethical principle of their own. Just as citizens are equal before the law in liberal democracies, algorithms need to provide the same fairness and must be prevented from making biased decisions that discriminate against certain people, as this would cause substantial harm to society.
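One way to make the fairness principle operational is to measure it. The following sketch, my own illustration rather than anything taken from the guidelines, computes a simple demographic parity gap, i.e. the difference in positive-decision rates between two groups; the decisions and group labels are invented:

```python
# A minimal sketch (my illustration, not part of the guidelines) of one
# way bias can be made measurable: comparing a model's rates of positive
# decisions across two groups ("demographic parity"). Data is made up.
import numpy as np

def demographic_parity_gap(decisions, group):
    """Difference in positive-decision rates between group 1 and group 0."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    rate_1 = decisions[group == 1].mean()  # approval rate in group 1
    rate_0 = decisions[group == 0].mean()  # approval rate in group 0
    return rate_1 - rate_0

# Hypothetical loan decisions (1 = approved) for members of two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
group     = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, group))  # about -0.4: group 1 approved far less often
```

A gap near zero does not prove a system is fair, but a large gap like this one is exactly the kind of warning signal an oversight mechanism would need to investigate.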

Finally, to achieve fairness and prevent harm, the explainability of algorithmic decision-making has to be ensured. At the moment, the most accurate algorithms do not provide explanations for the decisions they produce. If a car insurance algorithm decides that a customer has to pay a high premium, it cannot give a specific reason, which becomes problematic when the affected person has a seemingly solid driving record. Such algorithms comb through large data sets, including the driving data of millions of people, and find patterns that are invisible to the human eye. A driver might then have to pay a higher premium simply because the algorithm has found similarities to other drivers who are more prone to accidents, and, due to the inherent opaqueness of such algorithms, there is no way of knowing the specific characteristics that caused the higher premium. Increased explainability would, therefore, give people the certainty that everyone is treated equally.
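To see both the opaqueness and the limits of current inspection tools, consider the following Python sketch. It trains a synthetic premium model as a black box and then applies permutation importance, a standard model-inspection technique available in scikit-learn, to recover which inputs drive its predictions at all. The feature names, data and pricing formula are invented for the example:

```python
# A minimal sketch of probing a "black box" premium model with
# permutation importance. The features, data and pricing formula are
# synthetic assumptions made for this example only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical driver features: age, annual mileage, past claims,
# share of night-time driving.
X = np.column_stack([
    rng.integers(18, 80, n),        # age
    rng.normal(12_000, 4_000, n),   # annual mileage (km)
    rng.poisson(0.3, n),            # past claims
    rng.uniform(0, 1, n),           # share of night-time driving
])

# Synthetic "true" premium: driven by claims and night driving, plus
# noise standing in for patterns invisible to the human eye.
y = 300 + 150 * X[:, 2] + 200 * X[:, 3] + rng.normal(0, 30, n)

model = GradientBoostingRegressor().fit(X, y)  # the opaque model

# Which inputs actually drive the premium? Shuffle each feature in turn
# and measure how much the model's accuracy degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "mileage", "claims", "night_share"],
                     result.importances_mean):
    print(f"{name:12s} importance: {imp:.3f}")
```

Note what this does and does not deliver: it reveals which features matter overall, yet it still cannot give an individual driver the specific reason for his or her premium, which is precisely the gap the guidelines aim to close.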

These four ethical principles were then translated into seven key requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability) which have to be continuously implemented by any system that operates AI. This way, AI can comply with the EU Charter and human rights law while companies receive clear instructions as to what the AI they deploy needs to satisfy.

Figure 2: The Quest for Ethical Principles

It is true that the result is still broad and non-binding in nature. However, it is important to note that this is just an intermediate stage on the way toward concrete legislation. The goal was not to immediately confront businesses with hard regulation but to stimulate a wider political debate about AI ethics and to gather comments from society and industry to make sure the guidelines promote the safe development of the technology. Once this feedback has been evaluated, concrete legislative proposals will follow. In fact, Commission President von der Leyen has already signalled her support, emphasising that legislative steps toward a regulatory framework for AI will be given priority under the new Commission.

Future Prospects

Figure 3: The Black Box Model. Source: investopedia.com

While it remains to be seen how such a legally binding framework will affect the development of the technology, many societal actors have come to accept the mounting evidence that the negative impacts of AI require special attention. One big question that remains is whether businesses will be technically able to comply with requirements like explainability without losing an algorithm’s accuracy and efficiency. As already mentioned, the AI landscape is dominated by algorithms that are chosen solely for their prediction accuracy. Oftentimes, these algorithms are so complex that humans do not understand why a particular prediction was made. Although there are tools that can explain the output of these so-called “black box models”, they either reduce accuracy or provide only a rough approximation of how the prediction was made. In any case, this still seems a far cry from the EU’s goal of ensuring the goodness of AI while also promoting its efficient use.
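The “rough approximation” point can be made concrete. The sketch below hand-rolls the core idea behind local surrogate methods such as LIME: perturb a single input, query the black box, and fit a simple linear model to its answers in that neighbourhood. It is an illustrative construction of mine that reuses the synthetic model from the earlier sketch, not a reference implementation:

```python
# A hand-rolled sketch of the "rough approximation" idea behind local
# surrogate methods such as LIME: query the black box around one input
# and fit a simple, interpretable model to its answers there. This is
# my own illustrative construction, not a reference implementation.
import numpy as np
from sklearn.linear_model import LinearRegression

def local_surrogate(black_box, x, scale=0.1, n_samples=500, seed=0):
    """Approximate black_box around the single instance x with a linear model."""
    rng = np.random.default_rng(seed)
    # Perturb the instance to probe the model's local behaviour.
    noise_scale = scale * np.abs(x) + 1e-9
    neighbourhood = x + rng.normal(0, noise_scale, size=(n_samples, x.size))
    preds = black_box.predict(neighbourhood)
    surrogate = LinearRegression().fit(neighbourhood, preds)
    return surrogate.coef_  # local "reasons": one weight per feature

# Reusing the synthetic model and data from the earlier sketch:
# coeffs = local_surrogate(model, X[0])
# Large weights flag the features that locally pushed this customer's
# premium up or down.
```

The resulting coefficients are local reasons derived from an approximation of the model rather than from the model itself, which is why such explanations remain rough by construction.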

On the other hand, a regulatory framework could give the EU a competitive edge by incentivising the development of accurate, explainable AI (XAI). There is a growing demand for such a technology because more and more industries, including the military, finance and healthcare, rely on AI whose decisions have far-reaching consequences for people’s lives, making accountability and transparency increasingly important.[2]

Furthermore, as industries are still only scratching the surface of this technology, fearing that the adoption of algorithmic decision-making could violate regulations and put their assets at risk, XAI would certainly boost confidence and expand AI use cases. Especially businesses that either operate in highly regulated areas or take decisions with substantial impact on people, shareholders and the economy would benefit considerably if they could obtain clear information about the algorithmic decision-making process.[3]

Another important aspect in this context concerns scientists who use machine learning in their research to gain insights from large data sets. Being able to understand how an algorithm processes the input data would constitute a huge advantage for researchers and might lead to breakthroughs in data-intensive disciplines such as medical and behavioural science.[3]

For the EU, this means that, in the long run, “Trustworthy AI” could develop into a successful technology strategy that combines fundamental rights with economic efficiency, creating a technological niche in which the EU could take on a global leadership role.[4]

Endnotes

[1] These insights are drawn from speeches made by Nathalie Smuha and Dr Anton Vedder at the 2019 Ius Commune Conference at KU Leuven. Additionally, the EU Ethics Guidelines were used as a source.

[2] Angelov, Boyan (2019): Explainable AI (Part I): Explanations and Opportunities.

[3] Barredo Arrieta, Alejandro et al. (2019): Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. In: arXiv:1910.10045.

[4] Csernatoni, Raluca (2019): An Ambitious Agenda or Big Words? Developing a European Approach to AI. In: Egmont Policy Brief No. 117.
