Artificial Intelligence at a Crossroads

The following article is based on my research at the Berkman Klein Center for Internet & Society at Harvard University this summer on the Special Projects team, with the support of the Hans Böckler Foundation.

Crossroad by Pieter Musterd https://www.flickr.com/photos/piet_musterd/8421297550/ licensed under CC BY-NC-ND 2.0

How much time did you spend behind the wheel this week? Probably more than you should have. One innovation that could totally disrupt the way we get from A to B is the driverless car. The technology has evolved since General Motors’ vision of 1939, and if we ask companies like Ford and Volvo, a fully autonomous vehicle will be ready to hit the road in 2021.

The driverless car is a vivid example for looking at the ethical issues that arise when Artificial Intelligence is employed in our everyday life. Let’s do a thought experiment to analyze the social dilemma, making use of the good old trolley problem:

Picture yourself in a fully autonomous vehicle in the year 2045. While the vehicle is driving down a street, a five-year-old boy chasing his runaway ball appears in your path. The car could either a) avoid the boy and crash into the van in the opposing traffic, potentially hurting you sitting in the car, or b) avoid the threat of a crash with the van and hit the boy. Which would you prefer? And should the age of the pedestrian be incorporated in the decision? How would you decide if it were an 85-year-old man chasing the ball?
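
To make the dilemma concrete, here is a deliberately oversimplified sketch of what an explicit, rule-based decision policy could look like in code. Every field, weight, and tie-breaking rule in it is an invented value judgment for illustration, not how any real vehicle decides:

```python
# A hypothetical, oversimplified decision policy for the dilemma above.
# All fields and rules are invented assumptions, not real vehicle logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    description: str
    occupants_at_risk: int
    pedestrians_at_risk: int
    pedestrian_age: Optional[int] = None  # should age even matter? open question

def choose(a: Outcome, b: Outcome) -> Outcome:
    """Prefer the outcome that endangers fewer people overall;
    break ties in favor of sparing pedestrians."""
    def risk(o: Outcome) -> int:
        return o.occupants_at_risk + o.pedestrians_at_risk
    if risk(a) != risk(b):
        return a if risk(a) < risk(b) else b
    return a if a.pedestrians_at_risk <= b.pedestrians_at_risk else b

swerve = Outcome("swerve into the van", occupants_at_risk=1, pedestrians_at_risk=0)
stay = Outcome("stay on course", occupants_at_risk=0,
               pedestrians_at_risk=1, pedestrian_age=5)
print(choose(swerve, stay).description)  # -> "swerve into the van"
```

Even this toy policy makes the problem visible: someone has to write down the weights, and every choice of weight is an ethical claim.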

A study by the MIT Media Lab has shown that most drivers would prefer other drivers to have self-sacrificing vehicles, but when it comes to the vehicle they are sitting in themselves, they prefer it not to be self-sacrificing. Moreover, the answers differ between cultures.

In this article we will deal with the ethical questions that are posed in the legal context. Who is accountable in the case of an accident? The driver, the software developer, or the manufacturer? How can self-driving cars cross borders without breaking national traffic regulations? What would international traffic regulations look like? And how can we account for different driving behaviors and cultural differences when it comes to deciding whom to sacrifice in a dicey situation?

There are many promises when it comes to autonomous vehicles. Statistically, self-driving vehicles could cause fewer accidents, meaning less damage and, with fewer people injured, lower health costs overall. Furthermore, without drivers, labor costs would sink, making the use of self-driving cars more affordable and accessible. People who are unable to drive a car today could therefore increase their mobility tomorrow. Moreover, traffic flow and urban space could be improved, by responsive smart traffic lights for example, and there could also be a positive effect on greenhouse gas emissions.

But a number of concerns are being raised as well. There is a potential loss of jobs in the service and manufacturing sectors. Furthermore, it is still ambiguous whether the outcomes regarding traffic, for example, will be positive, as perhaps more people will use driverless cars than drive today; just think about children as passengers, for instance. And understandably there is a list of privacy and security concerns, hackability for instance.

While US states have already paved the way for developers to test their self-driving cars on public roads, Germany recently caught up with its own legal response, introducing “the most modern road traffic law in the world” this spring.

German companies like Volkswagen, Daimler and BMW are experimenting intensively with self-driving technologies, as Germany is one of the biggest car exporters worldwide. In response, the Federal Government introduced a law that allows the driver to take their hands off the wheel during a highly automated ride, in order to check e-mails for example. But the driver must still observe the traffic or monitor the functioning of the autopilot, and take over in case of emergency. The core of the law is the legal equality of human driver and computer: if a driver lets the automatic pilot take control, the car’s manufacturer becomes responsible.

The question of who is accountable in an accident is central to the success of autonomous driving, and it becomes further blurred here. For this purpose, the law mandates a comprehensive monitoring system that makes it possible to determine, in an emergency, who was driving when: the human or the computer. A black box is to record when the system was active, when the driver was driving, and when the system asked the driver to take over. As we will see later, black box technologies come with several issues that have to be dealt with in advance.
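
As an illustration, here is a minimal sketch of the kind of event log such a recorder could keep. The event names and fields are invented for this example; the actual regulation does not prescribe a format like this:

```python
# A minimal, hypothetical drive recorder: append-only, timestamped
# entries about who was in control, so that liability could later be
# reconstructed. Event names and fields are illustrative only.
import json
import time
from enum import Enum

class Event(str, Enum):
    AUTOPILOT_ENGAGED = "autopilot_engaged"
    TAKEOVER_REQUESTED = "takeover_requested"
    DRIVER_IN_CONTROL = "driver_in_control"

class DriveRecorder:
    def __init__(self) -> None:
        self.log = []

    def record(self, event: Event) -> None:
        # Each entry answers one question: at time t, who was driving?
        self.log.append({"t": time.time(), "event": event.value})

    def dump(self) -> str:
        return json.dumps(self.log, indent=2)

recorder = DriveRecorder()
recorder.record(Event.AUTOPILOT_ENGAGED)
recorder.record(Event.TAKEOVER_REQUESTED)
recorder.record(Event.DRIVER_IN_CONTROL)
print(recorder.dump())
```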

The German legislation leaves many questions open. If the manufacturer is responsible for the decision-making in autopilot mode, they must ensure, for example, that the system is aware of every speed restriction and every other road traffic rule on every German street. Further, the law creates uncertainty for drivers when it comes to questions of accountability.

Despite these obstacles, Germany pioneered an interim solution to get the cars on the road and let them learn by harvesting precious street data. But the legal questions around self-driving cars are not being addressed in a vacuum; rather, existing regulations are being adapted, as in the example of the German legislators. Before introducing a law like the German one, there has to be a normative societal consensus in the first place on what is right or wrong when it comes to autonomous vehicles and potential deaths in traffic.

Even though autonomous vehicles promise to decrease crashes, not all accidents will be avoided. In some cases there are difficult ethical decisions that have to be made by the algorithm. As argued by the MIT Media Lab, self-driving cars must be programmed to kill. For the vehicle to know how to perform in a dicey setting, a society has to decide who should be sacrificed and who should be protected.

Experimental descriptive ethics can be one answer to these questions: give the public different ethical dilemmas and let them decide, case by case, what they would consider a moral or immoral decision when it comes to killing someone in a situation like ours with the kid and the van. Should there be an overall regulation for cases like this? Or should each car owner decide during the installation process of their autonomous vehicle? And if so, who is to blame in the case of an unethical decision by the car? An ethical framework can be developed over time and implemented into the design of the autonomous system.
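
In code, such an experiment boils down to collecting verdicts per dilemma and tallying them. The sketch below is a toy version with invented responses, loosely inspired by the MIT Media Lab’s approach but not based on their implementation:

```python
# Toy experimental descriptive ethics: tally which outcome a
# population considers more acceptable, per dilemma. All scenario
# names and responses below are invented.
from collections import Counter

responses = [
    ("child_vs_occupant", "spare_child"),
    ("child_vs_occupant", "spare_child"),
    ("child_vs_occupant", "spare_occupant"),
    ("elderly_vs_occupant", "spare_occupant"),
    ("elderly_vs_occupant", "spare_elderly"),
]

verdicts = {}
for scenario, choice in responses:
    verdicts.setdefault(scenario, Counter())[choice] += 1

for scenario, counts in verdicts.items():
    majority, votes = counts.most_common(1)[0]
    total = sum(counts.values())
    print(f"{scenario}: {majority} ({votes}/{total} respondents)")
```

The hard part is not the tallying but what comes after: whether majority verdicts like these should ever be compiled into a vehicle’s decision rules at all.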

Empty Road in the Mountain Side by Unsplash https://www.canva.com/media/MACCM7JG178

Steering at the global level

These issues bring us to the question of ethics, in which law needs to be anchored. One of the biggest challenges now is to find that consensus, but even within a nation this can be intricate. If we already struggle to agree on an ethical framework within one culture, how will we ever find a global intergovernmental consensus?

These challenges also appear in other applications of Artificial Intelligence. In already highly regulated sectors, like finance, health, or the justice system, legislators can build upon existing laws and adapt them to the newly evolving technology. What should our society look like when Artificial Intelligence is implemented in most areas of our everyday life? Different policy-makers respond in various ways, but as we will see, they all address similar issues. Let’s look at Japan, the European Union, and the United States to see the reactions to AI on a policy level.

Japan, the world’s leading robotics nation, introduced its concept Society 5.0: a completely networked society empowered by the advancement of AI. Japan is a rapidly aging society; by 2050, the national census estimates, 40 per cent of its citizens will be over 65 years old. Japan aims to answer demographic change with Artificial Intelligence. Thousands of robots are in use in nursing and retirement homes today: walking and standing assistants and artificial toys that help elderly and disabled people move more. “Japan, with its energy and resource constraints and demographic pressure, is placed among developed countries on the front line in seeking new societal models, ensuring sustainable and inclusive growth, and maximizing the wellbeing of its citizens.” Legal challenges, such as determining accountability for accidents involving AI, are addressed in Japan’s report on AI and human society. With a view to the 2020 Olympics and Paralympics in Tokyo, Japan is promoting the development of autonomous vehicles and drafted rules for their testing on public roads this spring.

How can we design self-driving cars that are accessible for everyone? Just think of an elderly citizen diagnosed with Alzheimer’s who uses autonomous vehicles to get from A to B. How can AI be made more transparent, so that a diverse citizenry is well informed enough to understand the risks of utilizing such AI technologies?

The European Union agreed on the need for EU-wide rules for the fast-evolving field of Artificial Intelligence, to enforce ethical standards and establish a model of accountability for accidents involving, e.g., driverless cars. Furthermore, the EU commissioned a study to evaluate and analyse, from a legal and ethical perspective, a number of future European civil law rules in robotics.

With its 28 member states, the European Union faces a variety of traffic laws. Several member states already allow the testing of driverless cars on public roads. On the use of seat belts alone, as well as on speeding and drink-driving, opinions differ from state to state. How can the EU find legal consensus when it comes to accountability? The European Parliament prefers to handle global governance issues like this by proposing a “framework in the form of a charter consisting of a code of conduct for robotics engineers, a code for research ethics committees when reviewing robotics protocols and of model licences for designers and users.” How can cars designed outside of the EU enter European roads? And to what extent would foreign car makers therefore adapt to a European code of conduct?

A large amount of research and testing on autonomous vehicles is done in the United States. The administration has developed seven strategies to facilitate AI technologies that provide a range of positive benefits to society while minimizing the negative impacts. One of the strategies is to measure and evaluate AI technologies through standards and benchmarks. “Additional research is needed to develop a broad spectrum of evaluative techniques.” The black boxes used for decision-making often still leave many questions open, even to their developers. Last year, for example, a self-driving car ran a red traffic light, and in Florida a driver using the autopilot lost his life. “There are serious intellectual issues about how to represent and “encode” value and belief systems. Scientists must also study to what extent justice and fairness considerations can be designed into the system, and how to accomplish this within the bounds of current engineering techniques.”

As we can see, there are mutual cross-cutting legal issues that raise a number of interesting questions worldwide, addressing inclusion, global governance, and explainability. Although these three countries are separated by their specific socio-economic and historically defined social norms, they still share some common challenges. Let’s take a closer look at the case of explainability.

Black Sheep by kirahoffmann https://www.canva.com/media/MACV3TpZrSM

Black Boxes and Black Sheep

When algorithms make decisions about human beings, the results are often displayed without an explanation, because they are not interpretable. This is quite concerning when we think about consequential decisions about access to credit, health, and employment. But with autonomous vehicles these issues seem even more urgent, since they are a matter of life or death.

There are three different forms of explainability to consider here. First, at the current state of the art, some algorithms are so complex that even their software developers don’t understand their decision-making. These so-called black box algorithms may be applied, for example, when you apply for credit at a bank to buy a house. The bank clerk feeds the algorithm your data, and the machine decides that you are not entitled to the credit. Neither the bank clerk nor the software engineer can explain to you the specific reasons why.

In the case of the self-driving car, this means that black boxes decide whom to kill or not to kill. There are experimental approaches with deep learning self-driving cars, for instance. Deep learning is a branch of machine learning in which software learns by taking in data like images or sounds, classifying it into patterns, drawing conclusions, and applying those results to decisions and actions. Deep learning models are especially complex and hard to interpret. At the same time, deep learning offers dramatic performance improvements for many of the perceptual tasks required for self-driving cars, image recognition for example, so we can’t easily substitute a more transparent modeling approach.
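
To make “classifying data into patterns” tangible, here is a minimal, hypothetical PyTorch sketch of a small image classifier of the kind perception stacks build on. The class labels are invented; production networks are vastly larger, which is precisely why their decisions are hard to trace:

```python
# A tiny convolutional classifier: image in, class scores out.
# Purely illustrative; real perception networks are far larger.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to one vector per image
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# invented labels: pedestrian / vehicle / free road
model = TinyClassifier()
image = torch.randn(1, 3, 64, 64)   # one fake 64x64 RGB frame
print(model(image).softmax(dim=1))  # class probabilities, no reasons given
```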

The company Nvidia developed a technique whereby the car learns to steer without any explicit instructions from its engineers, after being trained on what to optimize for. With its decisions based simply on the activity in its surroundings, the neural-network-based system functions surprisingly well, but it is so complex that the researchers can’t fully follow its decision-making.
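
A toy version of such end-to-end learning might look like the sketch below: camera frames in, a steering angle out, with no hand-written driving rules anywhere. This is loosely modeled on Nvidia’s published idea, not their actual code, and the training data here is fake:

```python
# Toy end-to-end steering: regress a human steering angle directly
# from camera frames. No rule like "stay in lane" is ever written down.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(36 * 14 * 47, 64), nn.ReLU(),  # 14x47 feature map for 66x200 input
    nn.Linear(64, 1),                        # predicted steering angle
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# A fake "recorded drive": random frames paired with random human angles.
frames = torch.randn(8, 3, 66, 200)
human_angles = torch.randn(8, 1)

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(net(frames), human_angles)
    loss.backward()
    optimizer.step()
print(float(loss))  # the mapping the net learned remains opaque
```

Everything the system “knows” about driving ends up in the numeric weights of `net`, which is exactly why its engineers can observe that it works without being able to say why.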

Secondly, some decision-making can be unreasonable. In the health sector, for instance, IBM’s Watson may decide that every cancer patient in the dataset who wears yellow socks should be treated with a certain medication; what kind of logic would that follow if you asked a human? Machine learning algorithms may make decisions based on correlations that are not necessarily causal. Especially as health care is so individual, it is very difficult to generalize data. The outcomes can be far-reaching, from worsening conditions up to reversed health effects.
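
The yellow-socks problem is easy to reproduce. In the invented dataset below, sock color happens to separate the patients perfectly while the medical marker does not, so the model latches onto the socks:

```python
# Correlation without causation, on purpose: all feature names and
# values are invented. Socks perfectly separate the classes here,
# the medical marker does not, so the tree splits on socks.
from sklearn.tree import DecisionTreeClassifier

# features: [wears_yellow_socks, tumor_marker_level]
X = [[1, 0.2], [1, 0.8], [1, 0.5], [0, 0.3], [0, 0.7], [0, 0.6]]
y = [1, 1, 1, 0, 0, 0]   # 1 = responded to treatment

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.feature_importances_)   # socks dominate: ~[1.0, 0.0]

# A new patient in yellow socks gets the treatment recommended,
# regardless of the actual marker level:
print(model.predict([[1, 0.9]]))
```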

When it comes to autonomous vehicles, unreasonable decision-making can have fatal results. In the example of the Tesla driver in Florida who was killed while driving in autopilot mode, the car kept going even after its roof had been torn off. To a human, decisions like that seem unreasonable.

And third, some decisions may simply be unjust, meaning that there is a lack of adequate justification for the algorithm’s design. Models replicate whatever patterns exist in the data used to train them, and will therefore perpetuate, in an obscure way, any structural biases present in the data collection process or in society already.

It then seems impossible to explain how the algorithm is consistent with law or ethics, e.g. in the case of sexually biased AI assisting in hiring. If the data fed into those recruitment systems reflects how male applicants have been more successful over the past decades, because they were promoted more, for instance, the algorithm will most likely propose hiring males, as they are more successful according to the logic shown in the database. Obviously the male applicants are not more “successful” because of their sex; the data is simply biased. A number of incidents connected to algorithmic bias in the US have become widely known and disputed beyond its borders. Consider how problematic a biased autonomous vehicle could be: think about a car’s decision-making in a dicey situation with potentially fatal outcomes if it only properly recognizes cisgender white people as human pedestrians. We have already seen that algorithms can be racially biased.
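
A hedged sketch of that mechanism: in the invented hiring data below, past decisions favor men, so a model trained on them reproduces the bias even for otherwise identical candidates. All columns and values are illustrative:

```python
# Historical bias leaking into a hiring model. The data is invented:
# past decisions were skewed toward men, and the model learns exactly
# that skew rather than anything about merit.
from sklearn.linear_model import LogisticRegression

# features: [is_male, years_experience]
X = [[1, 3], [1, 5], [1, 2], [0, 5], [0, 6], [0, 4]]
y = [1, 1, 1, 0, 0, 1]   # past hiring decisions, biased toward men

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in sex:
male, female = [1, 5], [0, 5]
print(model.predict_proba([male])[0][1])    # higher hire probability
print(model.predict_proba([female])[0][1])  # lower, despite the same CV
```

Dropping the sex column is no cure either, since other features can act as proxies for it; the bias sits in the historical labels, not in any one column.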

A possible answer to the concerns about explainability may be the European Union’s solution. A law coming into effect in May 2018 will give its citizens the right to obtain an explanation of automated decisions and to challenge those decisions. But critics question the feasibility of that legislation, as it “lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless”.

Clear Road in the middle of Mountains by Unsplash https://www.canva.com/media/MABKNGtXtQ8

As we have seen, many ethical and legal issues arise in the context of AI. As a society we need to decide how we want to design our AI and how we can ensure that it is inherently fair and explainable. In the case of autonomous vehicles specifically, we saw how urgent the development of a consensual ethical and legal framework is. When it comes to possible legal responses, policy-makers have to take into account that there are application-specific issues as well as cross-cutting challenges that need to be addressed. We will see what the response to these evolving technologies will look like and what ethical frameworks society will choose.

But before you next step into a car and imagine a dystopian driverless future, take comfort in the fact that there are still serious doubts that Ford can keep its promise of launching fully autonomous vehicles within the next decade anyway. We as a society are capable of deciding how we want to design our driverless future; we are the ones being asked here, not the cars themselves.