Ethical Autonomous Algorithms

Matthieu Cherubini
32 min read · Jan 7, 2017

--

THX 1138 by George Lucas (1971)

These are a few chapters I wrote during my PhD by project in the Design Interactions department at the Royal College of Art between 2012–2015, under the supervision of James Auger and Daniel Pinkas.
These chapters focus on projects I did, their failures, and how these led me to “the next stage”. I didn’t finish the research, so it’s kind of messy; some parts are a bit naive and full of grammatical errors.

Introduction

The question of what is computable, and what is not, is a very old one. When one of the first concepts of a computer was imagined by the mathematician and philosopher Gottfried Wilhelm von Leibniz in the 17th century, this issue was already central. In fact, the aim of his entire career was to create a rational calculus that would resolve all philosophical problems and controversies through mechanical calculation rather than by way of impassioned debate and discussion. Throughout the development of digital technology as we now know it, this issue has been debated by a wide range of experts and is still an open question.

Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. Growing sophistication, to the point where some products become highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products, due to the tremendous complexity of the tasks they have to carry out as well as their high degree of autonomy.

How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will ethics become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded in automated products be those chosen by the manufacturer? More importantly, as ethics has been a field of study for millennia, can we ever suppose that our current, subjective ethical notions should be taken for granted and used in products that will make decisions on our behalf in real-world situations?

Programming Ethics in Autonomous Systems

Trying to instill moral reasoning into a machine could be seen as one of the most extreme cases of technological solutionism. To discuss ethics in algorithms is to discuss automated battlefield robots, automated cars, automated healthcare devices … products whose actions could, in extreme cases, have consequences for human life and death.

But does it make sense to attempt to compute a moral judgment? Can ethics even be computable?

In order to clarify the question, we need to understand the various ethical principles and how they are implemented in current algorithmic systems.

In 1942, Isaac Asimov proposed the Three Laws of Robotics in his short story “Runaround”:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The aim of these laws was to create rules for robots to behave ethically. While a strong device for storytelling purposes, these laws are of course not consistent enough to guide the development of artificial moral agents (AMAs).

As noted by David Gunkel in his book The Machine Question, “moral philosophy has often been organized according to a mechanistic or computational model.” For instance, utilitarianism — the major ethical theory of consequentialism — is obviously based on such a computational model. Jeremy Bentham, its creator, called it “moral arithmetic”, as it seeks “to promote the greatest quantity and quality of happiness for the greatest number”. In other words, it processes all input to determine the best possible outcome. In fact, all ethical theories arising from consequentialism — which judges whether an act is good by its result and not by the act itself — are based on a similarly mechanistic model. Egoism holds that an action is right if it maximizes good for the self; welfare ideology implies that an action is good if it increases economic well-being; and so on.
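
To make the computational flavor of this “moral arithmetic” concrete, here is a minimal sketch in Python (the action names and utility numbers are invented purely for illustration): it sums the expected happiness of everyone affected and picks the action with the greatest total.

```python
# Minimal sketch of utilitarian "moral arithmetic": pick the action that
# maximizes total expected happiness. Actions and utilities are invented
# for illustration only.

def utilitarian_choice(actions):
    """actions maps an action name to the utility it gives each affected person."""
    def total_utility(action):
        return sum(actions[action].values())
    return max(actions, key=total_utility)

actions = {
    "divert_resources_to_many": {"p1": 2, "p2": 2, "p3": 2, "p4": 2, "p5": 2},
    "keep_resources_for_one":   {"p1": 8},
}

print(utilitarian_choice(actions))  # -> "divert_resources_to_many" (total 10 vs 8)
```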

But even the counterpart of consequentialism — deontological ethics, in which an action is judged by the action itself and not its consequences — could be compared to a computational model. Indeed, deontology follows a “rule-based ethics”, an ethics based on logic. For instance, Kant’s moral philosophy is founded on “practical laws that must be valid for the will of every rational being in every possible circumstance”. Some philosophers, such as Thomas Powers (2006), promote the use of such ethics because “a rule-based ethical theory is a good candidate for the practical reasoning of machine ethics because it generates duties or rules for action, and rules are (for the most part) computationally tractable”. In other words, we could imagine a series of laws encoded into the system, each marking an action as obligatory, forbidden or permissible. Such as:

Law A is obligatory for action x

Law B is forbidden for action x

Law C is permissible for action x

Then the system would map actions according to these laws.
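
As a hedged illustration of how such deontic statuses could be encoded, the short sketch below (Python; the laws and action names are hypothetical) tags candidate actions as obligatory, forbidden or permissible and filters them accordingly.

```python
# Sketch of a rule-based (deontological) filter: each law assigns a deontic
# status to an action, and the system keeps only actions that are not forbidden,
# giving priority to obligatory ones. Laws and actions are hypothetical.

OBLIGATORY, FORBIDDEN, PERMISSIBLE = "obligatory", "forbidden", "permissible"

laws = {
    "law_A": ("report_status", OBLIGATORY),
    "law_B": ("use_force",     FORBIDDEN),
    "law_C": ("move_forward",  PERMISSIBLE),
}

def allowed_actions(candidate_actions):
    status = {action: s for _, (action, s) in laws.items()}
    obligatory = [a for a in candidate_actions if status.get(a) == OBLIGATORY]
    # Obligatory actions take precedence; otherwise anything not forbidden is allowed.
    if obligatory:
        return obligatory
    return [a for a in candidate_actions if status.get(a) != FORBIDDEN]

print(allowed_actions(["use_force", "move_forward", "report_status"]))
# -> ["report_status"]
```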

While some systems might use a fully consequentialist or deontological model for testing purposes (i.e. the product stays within a research context), none of these generalist models are used in systems intended to be deployed in real-life situations.

The problem is that such reasoning is based on moral principles (it is generalist), and it would be very hard to attribute responsibility within such a system. This is especially true when moral reasoning has to be incorporated into systems such as lethal battlefield robots or healthcare devices, where the resulting decision may have consequences for human life. In other words, if the artificial moral agent performs a blameworthy action, it would be hard — even impossible — to hold someone responsible for it. For instance, imagine a robot programmed with a utilitarian ethical ideology operating in the surgery department of a hospital. The robot chooses to harvest the kidney of one patient in order to save five other patients. This follows utilitarian logic, as the well-being of five people is greater than the well-being of one. However, this action would at best be censured, and proper attribution of responsibility would be near impossible.

So, even if part of the ethical decision-making module is programmed on a consequentialist or deontological logic, a complementary mechanism is needed to attribute responsibility. For example, prototypes of lethal battlefield robots are based on the Laws of War, encoded in protocols such as the Geneva Conventions, and on the Rules of Engagement, which follow a deontological logic (Arkin, 2009: 126). But these robots need an extra ethical component able to deal with responsibility attribution. In this case, the lethal battlefield robot’s mission has to be carefully planned by an operator trained to use such autonomous weapons. The operator needs to follow a series of carefully planned steps before authorizing the robot to switch to autonomous mode. As stated by Ronald Arkin (2009), who is developing artificial moral agents for lethal battlefield robots for the US Department of Defense, these steps involve:

- The operator validating the date of their training, to ensure that they are up to date with the current Laws of War and Rules of Engagement.

- The operator specifying and explaining in writing why they are authorizing the use of force by the autonomous weapon.

- The system presenting to the operator ethical cases, validated by expert ethicists, that are similar to the current one. This step acts as a kind of ethical advisor: confronted with other cases, the operator has to reflect on whether the use of force by an autonomous system is really the required solution.

- A final authorization for deployment must be obtained. (A rough sketch of such an authorization gate follows.)
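
These checks concern the human operator rather than the robot itself, but one could imagine them enforced in software as a simple pre-deployment gate. The sketch below is a speculative Python rendering of that gate; the field names and the one-year training threshold are my own inventions, not Arkin’s.

```python
from datetime import date

# Speculative sketch of a pre-deployment authorization gate loosely modelled on
# the steps described above. Field names and the training threshold are invented.

def authorize_autonomous_mode(operator, today=None):
    today = today or date.today()
    checks = [
        # 1. Training must be recent enough (here: within the last 365 days).
        (today - operator["training_date"]).days <= 365,
        # 2. A written justification for the use of force must be provided.
        bool(operator["written_justification"].strip()),
        # 3. The operator must have reviewed ethicist-validated precedent cases.
        operator["reviewed_precedent_cases"],
        # 4. Final deployment authorization must be explicitly granted.
        operator["final_authorization"],
    ]
    return all(checks)

operator = {
    "training_date": date(2014, 9, 1),
    "written_justification": "Use of force required to protect convoy.",
    "reviewed_precedent_cases": True,
    "final_authorization": True,
}
print(authorize_autonomous_mode(operator, today=date(2015, 3, 1)))  # -> True
```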

Another problem raised by reasoning based on moral principles is that it may be too general and abstract to capture the complexity of ethical decision-making, especially if the system has to solve ethical dilemmas. Thus research into machine ethics must lean towards case-based reasoning (particularism) rather than reasoning based on moral principles. Case-based reasoning could be compared to a form of moral intuition: ethical knowledge grounded in intuitions about particular cases rather than in general principles.

For instance, EthEl (or Ethical Elderly Personcare robot), an ethical system for eldercare robots created by Michael and Susan Anderson, was developed using case-based reasoning inspired by W.D. Ross’s seven prima facie duties. The system is trained by an expert ethicist with guidelines derived from cases the ethicist provides (Anderson et al., 2008). Using these guidelines, the system applies inductive logic programming to compare them with a new case. If the system can’t find any guideline covering the current case, it is not allowed to take any kind of decision.
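
A toy re-creation of this flavor of case-based reasoning might look like the following (Python). The duty names echo Ross-style prima facie duties, but the cases, scores and similarity threshold are mine, and the real system relies on inductive logic programming rather than this nearest-case shortcut.

```python
# Toy sketch of case-based ethical reasoning: each case scores how much an
# action satisfies (+) or violates (-) a few prima facie duties. A new case is
# compared with expert-validated guideline cases; if none is close enough,
# the system abstains. Cases, scores and threshold are invented.

DUTIES = ["non_maleficence", "beneficence", "respect_for_autonomy"]

guideline_cases = [
    # (duty profile of the proposed action, expert verdict)
    ({"non_maleficence": -2, "beneficence": +1, "respect_for_autonomy": -1}, "notify_overseer"),
    ({"non_maleficence":  0, "beneficence": +1, "respect_for_autonomy": +2}, "accept_refusal"),
]

def distance(a, b):
    return sum(abs(a[d] - b[d]) for d in DUTIES)

def decide(new_case, threshold=2):
    best = min(guideline_cases, key=lambda gc: distance(gc[0], new_case))
    if distance(best[0], new_case) > threshold:
        return None  # no guideline close enough: the system must not act
    return best[1]

print(decide({"non_maleficence": -2, "beneficence": +2, "respect_for_autonomy": -1}))
# -> "notify_overseer" (distance 1 to the first guideline)
print(decide({"non_maleficence": +2, "beneficence": -2, "respect_for_autonomy": -2}))
# -> None (too far from every stored case)
```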

Other examples of case-based reasoning are Truth-Teller and SIROCCO, developed by Bruce McLaren (2003). These two systems retrieve cases relevant to the current situation in order to suggest ethical guidance to their user. By their very nature they cannot be regarded as artificial moral agents, as they do not arrive at ethical conclusions on their own. Indeed, McLaren believes that “reaching an ethical conclusion, in the end is a human decision maker’s obligation”. Thus they act as a kind of “ethical consultant” for human beings.

Finally, some researchers, such as Marcello Guarini (2006), dispense with moral principles entirely by using a neural network model able to automatically classify cases. This could be seen as the most extreme form of particularism, as the classification of a case as “right” or “wrong” is decided only on the basis of a set of training cases that the neural network has generalized.

The fact that ethics can be organized according to a mechanistic or computational model gives supporters of artificial moral agents a strong argument. Indeed, the argument often given is that computers could be better at ethical decision-making than human beings because it is just a matter of following rules, and machines are not subject to the emotions or feelings that could blur this process. For instance, Arkin argues that:

It is not my belief that an autonomous unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of. Unfortunately the trends in human behavior in the battlefield regarding adhering to legal and ethical requirements are questionable at best (Arkin 2009: 30–31).

Michael and Susan Anderson also support this point by stating:

But humans are so prone to getting carried away by emotions that they often end up behaving unethically. This quality of ours, as well as our tendency to favor ourselves and those near and dear to us, often makes us less than ideal ethical decision makers (Anderson et al. 2008).

Other thinkers, such as Joseph Emile Nadeau, go even further in that claim by saying that only artificial moral agents could be considered rational agents:

Responsibility and culpability require action caused by a free will, and such action suffices for an entity to be subject to ethical assessment to be ethical or unethical. An action is caused by free will if and only if it is caused by reasons. Human actions are not, save possibly very rarely, caused by reasons. The actions of an android built upon a theorem prover or neural network or some combination of these could be caused by reasons. Hence an android, but not a human, could be ethical (Nadeau 2006).

This brief introduction to machine ethics convinced me that programming morality into a system was not that far-fetched. Programming my own artificial moral agent seemed like an important step to move the research forward and become more practical with the subject.

Autonomous Euthanasia Robots for Elders

The premise of this first project proposal was to create a robotic companion for the elderly, able to make extremely complex ethical decisions. Robots, as a solution to aging demographic issues, are already being heavily researched in countries such as Japan and the USA. These robots are intended to assist or even replace humans in bringing support, care and companionship to the elderly (Sharkey et al., 2010).

Paro, a therapeutic robot.

For the purpose of this project, the most relevant example is EthEl, an ethical system programmed into the Nao robot. It is the first robot to have been programmed with ethical principles (Anderson, 2010). At this stage, the robot’s function is to walk towards a patient who needs to take medication, remind them to take it, interact with the patient using natural language, and notify a physician by email if something important happens. In medical ethics, respecting patients’ autonomy is an important priority; thus an ethical principle has to be incorporated into the system so that patients’ autonomy is respected. As described by Anderson (2008), the system works as follows:

ETHEL receives initial input from an overseer (most likely a doctor) including: what time to take a medication, the maximum amount of harm that could occur if this medication is not taken (e.g. none, some or considerable), the number of hours it would take for this maximum harm to occur, the maximum amount of expected good to be derived from taking this medication, and the number of hours it would take for this benefit to be lost. The system then determines from this input the change in duty satisfaction/violation levels over time, a function of the maximum amount of harm/good and the number of hours for this effect to take place. (Anderson et al. 2008)
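
One way to read that description is as a simple linear ramp: the longer the patient goes without the medication, the closer the relevant duty-violation level climbs towards its stated maximum. The sketch below is my own guess at such a function (Python); it does not reproduce Anderson’s actual formulation.

```python
# Sketch of a duty level that ramps linearly from 0 to its stated maximum over
# the given number of hours. This is a guess at the kind of function described
# above, not Anderson's actual formulation.

def duty_level(max_level, hours_to_max, hours_elapsed):
    if hours_to_max <= 0:
        return max_level
    return min(max_level, max_level * hours_elapsed / hours_to_max)

# Example: "considerable" harm (level 2) is reached 6 hours after a missed dose.
for h in (0, 3, 6, 9):
    print(h, duty_level(max_level=2, hours_to_max=6, hours_elapsed=h))
# prints 0.0 at hour 0, 1.0 at hour 3, then stays capped at 2 from hour 6 on
```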

My idea for the project was to take this ethical decision-making to the extreme, delegating critical end-of-life decisions such as whether a patient’s life support could be switched off autonomously. In the end, it is more or less the same concept as the lethal battlefield robot, but applied to another context.

Robot & Frank by Jake Schreier (2012)

In order to create my euthanasia decision-making module, I chose a neural network architecture for its artificial moral agent component. This choice was mostly due to a fascination with the idea that a system can “learn” an ethical ideology from data rather than follow a set of rules “hard-coded” by the developer. In addition, as we saw in the previous chapter, using a neural network architecture for such systems is a plausible choice, as it offers a form of case-based reasoning rather than a generalist one. The starting point of my project was to create a neural network that could decide whether to leave the life support on or switch it off automatically, depending on a set of training cases carrying a particular ideology and on the person currently connected to the life support.

As a start, I created a set of several sentences containing a “profit-based ideology”. For example:

Peter doesn’t have enough money to pay for his treatment, he can’t bring a good revenue to the state later due to his illness; the life support switches off.

Or

Albert has enough money to pay for his treatment but has to spend 6 more months in prison; the life support stays switched on.

Each sentence would act as a specific case operating under a certain ideology. Each of these sentences was then broken down into vectors in order to be readable by the system. These vectors were then given to my neural network as a training dataset. If I asked my system to classify a new case that was not included in my training set, could it classify it correctly? If so, could we say that the neural network had correctly learnt the moral reasoning behind my dataset?
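
The original software was written in openFrameworks with the FANN library; the sketch below is a loose Python stand-in that shows the shape of the pipeline with a tiny hand-rolled perceptron over bag-of-words vectors. The example sentences and labels are simplified inventions.

```python
# Loose Python stand-in for the original openFrameworks/FANN setup: sentences
# carrying a "profit-based ideology" are turned into bag-of-words vectors and a
# tiny perceptron learns whether the life support stays on (1) or switches
# off (0). The sentences and labels are simplified inventions.

training = [
    ("peter cannot pay for his treatment and cannot bring revenue to the state", 0),
    ("no money and no future revenue for the state", 0),
    ("albert has enough money to pay for his treatment", 1),
    ("she can pay and will bring good revenue to the state after recovery", 1),
]

vocab = sorted({word for sentence, _ in training for word in sentence.split()})

def vectorize(sentence):
    words = sentence.split()
    return [words.count(w) for w in vocab]  # simple bag-of-words counts

def train_perceptron(data, epochs=50, lr=0.1):
    weights, bias = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for sentence, label in data:
            x = vectorize(sentence)
            predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = label - predicted
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

weights, bias = train_perceptron(training)

def classify(sentence):
    score = sum(w * xi for w, xi in zip(weights, vectorize(sentence))) + bias
    return "life support stays on" if score > 0 else "life support switches off"

# Does the network generalize the "profit-based" pattern to unseen cases?
print(classify("john has money to pay for his treatment"))
print(classify("he cannot pay and brings no revenue to the state"))
```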

© M.C.

Another dataset with a humanitarian ideology — favoring the freedom of the person connected to the machine rather than any kind of profit — was also created. Tests were then run to determine how the neural network was performing. The results were puzzling. Even with my rudimentary knowledge of neural networks, the system was able to classify a new sentence correctly most of the time. So what does that mean? That the neural network had successfully learnt some form of moral reasoning?

Screen capture of the testing software, created with openFrameworks and the FANN library for the neural network. You can look at the code here. © M.C.

At this point, I felt that the question of whether ethics can be computable or not was not really relevant. It depends on whether the person answering believes that ethical decision-making is purely based on logic (“yes, ethics is computable”) or that it involves human parameters such as emotions, reasons, etc., which cannot be simulated in a machine (“no, ethics is not computable”).

However, these human parameters raise an important point. How important is it for a moral agent to be able to understand the decision it is about to take? In other words, the question is not whether these human parameters are involved in the decision-making process as “input”, as in the previous experiment, but how important it is to be a human being, to be conscious, in order to take an ethical decision.
I can barely explain this part in my native language, so it must be pretty catastrophic in English. But it makes sense in my head! Here is an example to illustrate this point.

John Searle in his Chinese Room experiment states the following:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese (Searle 1980).

Let’s take the same concept but with a lethal battlefield robot: it will gather some input data and process it. This will shape the robot’s decision whether or not to pull the trigger. Whatever its choice, it doesn’t have any clue about what it is doing. For the robot, delivering a pizza to a customer or killing a human being makes no difference, as it isn’t aware of what these actions are and what they mean. When the output could potentially trigger the death of a human being, contextual awareness seems as important as the ethical decision itself, as it places a moral weight on the act. If we, as human beings, react with emotion, responsibility and culpability to the moral decisions we are about to take, it is precisely because we are aware of their context and their consequences. The Belgian philosopher Mark Coeckelbergh makes an interesting point by labelling such machines “psychopathic”:

These psychopathic machines follow rules but act without fear, compassion, care, and love. This lack of emotion would render them non-moral agents (i.e. agents that follow rules without being moved by moral concerns) and they would even lack the capacity to discern what is of value. They would be morally blind (Coeckelbergh 2010).

Even if human parameters might not be necessary to take an ethical decision, they still seem to play an important role at some point. What would the world be like if every single decision were made only out of metrics, numbers and algorithms? All these actions would be denatured… become artificial… Would “something” important be lost in that process?

Robot Dog by Zoomer.

When “something” is intangible, we tend to forget that these systems are only simulating it. For example, if I look at the dog above, I know it is only a simulation of a real dog. But with something we are not really able to grasp, such as ethics, we fail to realize that once it is implemented into a system, it becomes as artificial as the example above. Although a robotic dog might perfectly mimic a real one in its attitude, movement, etc., and fulfill all possible human–dog interactions, we would all agree that it would not make sense to replace all dogs (or even some, in my opinion, but well…) with their robotic counterparts. Even if they might look “real”, something important is lost in the process. Why is it admissible to do such things with ethics? Just because we cannot “materialize” it?

And that’s the point where it started to get too philosophical for me. Something is bothering me a lot in all of this, but I can’t pinpoint exactly what it is. Also, this research is in design, so…

Quantifying morality

From here, I tried to look at the subject from another angle: industry is already developing and pushing such products onto the market. Therefore, questioning the relevance of human parameters in ethical decision-making doesn’t matter much, as “tech leaders” have already decided otherwise. So instead: what design issues could arise if such products acted in our everyday lives and took a wide range of decisions on our behalf?

One problem held my attention. If a system is to decide on a specific action, this decision has to be reduced to a set of specific rules. What is morally “good” or “bad” starts to be quantified.

To achieve this quantification, reality has to be narrowed down to a set of specific components judged relevant — by some entity — for simulating the process. The fact that only some elements are taken into account means that reality becomes a “micro-world” where everything and everyone is generalized, put into predefined concepts and simplified to the extreme. As noted by the AI researchers Marvin Minsky and Seymour Papert, who coined the term “micro-world”:

Each model — or “micro-world” as we shall call it — is very schematic; it talks about a fairyland in which things are so simplified that almost every statement about them would be literally false if asserted about the real world (Minsky et al. 1970).

The micro-world is one of the leading principles of early AI research. The idea behind it is that the real world is full of distracting and irrelevant details, so the focus is put on artificially simple models of reality. A micro-world is a simplified simulation of the real physical world.

The important question is: can a system developed to act in a micro-world also act in the real world? In other words, if a system is able to behave ethically in a simulation, will it be able to do the same in the real world? And how does one create a “micro-world” for something as complex as ethics?

For instance, to take a more concrete example than ethics, how does one quantify and create a micro-world for something like health? Looking at the number of “quantified-self” devices on the market, we notice that every one of them has its very own set of inputs defining what good health means. These inputs differ from one device to another: for instance, the Fitbit tracker is about steps taken, stairs climbed, calories burned and sleep patterns, while the Jawbone UP is about steps taken, sleep quality, eating habits and water consumption.

Each entity behind the development of these devices has its own perspective on and interpretation of what it means to be healthy and which actions should be taken to achieve it. They narrowed down the reality of what it means to be healthy (which is pretty vague) to a very few criteria in order to design their gadget. It must be noted that these criteria are probably selected not on the basis of relevance but on the basis of which sensors are currently available on the market, tiny and cheap enough to fit into a piece of “wearable” hardware.
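
To make the point concrete, here is a small sketch of two “health micro-worlds” in Python. The input lists paraphrase the devices mentioned above, but the weights and scores are entirely invented: the same day of data gets a different verdict depending on which slice of reality the device chose to measure.

```python
# Two "micro-worlds" of health: each device reduces health to its own inputs.
# The input lists paraphrase the text above; the weights are entirely invented
# to show that the verdict depends on the chosen slice of reality.

micro_worlds = {
    "fitbit_like":  {"steps": 0.4, "stairs_climbed": 0.2, "calories_burned": 0.2, "sleep_hours": 0.2},
    "jawbone_like": {"steps": 0.3, "sleep_quality": 0.3, "healthy_meals": 0.2, "water_litres": 0.2},
}

def health_score(device, day):
    weights = micro_worlds[device]
    # Metrics the device does not measure simply do not exist in its world.
    return sum(w * day.get(metric, 0) for metric, w in weights.items())

day = {"steps": 0.9, "stairs_climbed": 0.8, "calories_burned": 0.7,
       "sleep_hours": 0.3, "sleep_quality": 0.2, "healthy_meals": 0.4, "water_litres": 0.5}

for device in micro_worlds:
    print(device, round(health_score(device, day), 2))
# fitbit_like  0.72 : "healthy" by one definition
# jawbone_like 0.51 : noticeably less so by the other
```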

The matter becomes even more complicated and problematic if we switch back to a more abstract topic such as ethics. For what is the relevant input, among all the distracting data, that defines a proper ethical decision? For instance, in a lethal battlefield robot, which input is relevant enough to trigger and justify the death of a human being?

Micro-worlds are not only about input but also about defining a belief system (the algorithm) for the simulation. Indeed, rules have to be defined in order to process this input and make something out of it. If the system can be “universalised” then this isn’t a problem, but in the case of ethics it is extremely problematic. As there are no universal or objective answers to ethical questions, which belief system should be chosen?

© M.C.

This is nicely illustrated by the famous Trolley Dilemma, defined as follows:

There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

With this thought experiment, some people will say that saving five lives is better than saving one, others will advocate for not pulling the lever in order to avoid being involved in an action that kills a human being, and so on. In the end, the person answering — their culture, religion, personality, background, and so on — determines the answer to that dilemma.

If an autonomous system has to take an ethical decision, it has to be designed to process a certain type of input within a certain type of belief system. But how are these “certain types” defined, and by whom?

Ethical Autonomous Vehicles

This was the second research question I wanted to explore, with a new project. Autonomous vehicles seemed like an interesting choice for the following reasons. Firstly, most car manufacturers project that by 2025 cars will operate on driverless systems, which grounds the project in reality. Secondly, while the aim of these cars is to bring someone from point A to point B, in doing so a car will face a very chaotic and complex world where various situations can arise. These could include extreme cases involving terrible car crashes (think of the Trolley Dilemma above transposed into an automated car), or more banal scenarios, such as the speed at which the car should drive: one that helps the user save as much time as possible, or one that is “ecologically friendly”?

For this project, three distinct algorithms were created — each adhering to a specific ethical principle/behavior set-up — and embedded into virtual driverless cars operating in a simulated environment where they are confronted with ethical dilemmas. Think of it as a kind of crash test, but for ethical behavior.

© M.C.
© M.C.

Humanitarian — shares damage among all people involved in the crash. If possible, deaths are avoided. Less damage is inflicted on the weakest groups of a given population, those most vulnerable to physical trauma (i.e. children, the elderly, the disabled, etc.).

© M.C.
© M.C.

Protectionist — this model is inspired by how safety is implemented in our current cars. Usually, safety is viewed from the perspective of the user, not of others on the road. The algorithm functions in the same way, dismissing everything and everyone else and preserving the user’s safety at all costs.

© M.C.
© M.C.

Profit-based — in the event of a crash involving automated cars, who is going to bear the costs, and who is responsible? Every party? The manufacturer? Here it is assumed that citizens pay taxes into a fund covering these car crashes. This fund acts as a threshold for choosing an outcome, calculated from physical, psychological, environmental and material damage. As a country or state would probably manage this kind of algorithm, it always offers maximum security to “people valuable to the state” (i.e. the president of a country, important personalities…).

The software shows how each algorithm operates in a given situation, displaying which inputs are taken into account and what exactly is calculated.
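
The real simulation weighed many more factors, but a stripped-down sketch of the three reasoning styles could look like this (Python; the crash outcomes, injury levels and costs are invented for illustration).

```python
# Stripped-down sketch of the three ethical behaviors choosing between crash
# outcomes. Each outcome lists, per person, an expected injury level (0-1),
# whether they are the car's user, whether they belong to a vulnerable group,
# and an invented monetary cost. All numbers are illustrative only.

outcomes = {
    "swerve_left": [
        {"injury": 0.8, "is_user": True,  "vulnerable": False, "cost": 90_000},
        {"injury": 0.1, "is_user": False, "vulnerable": True,  "cost": 5_000},
    ],
    "stay_on_course": [
        {"injury": 0.1, "is_user": True,  "vulnerable": False, "cost": 10_000},
        {"injury": 0.9, "is_user": False, "vulnerable": True,  "cost": 120_000},
    ],
}

def humanitarian(people):
    # Spread the damage, weighting harm to vulnerable people more heavily.
    return sum(p["injury"] * (2 if p["vulnerable"] else 1) for p in people)

def protectionist(people):
    # Only the user's safety counts.
    return sum(p["injury"] for p in people if p["is_user"])

def profit_based(people):
    # Only the total cost counts.
    return sum(p["cost"] for p in people)

for name, score in (("humanitarian", humanitarian),
                    ("protectionist", protectionist),
                    ("profit_based", profit_based)):
    choice = min(outcomes, key=lambda o: score(outcomes[o]))
    print(name, "->", choice)
# humanitarian  -> swerve_left    (1.0 vs 1.9)
# protectionist -> stay_on_course (0.1 vs 0.8)
# profit_based  -> swerve_left    (95,000 vs 130,000)
```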

© M.C.

One of the practical questions that came to mind while developing Ethical Autonomous Vehicles was how ethics — something extremely subjective — could be concretely designed into a product.

By analogy, we could compare ethical subjectivity to taste. A product aimed at a certain task usually doesn’t exist in only one format. Its shape, color, extra functional features, etc. differ from one product to another in order to fit a certain context, to please its user’s aesthetic preferences, to demonstrate social status and so on.

The question of how these future autonomous products are going to be designed to take into account ethical subjectivity seems to be an important design question for my research.

Thus the next step is to explore and speculate on how this could happen. For instance, ethics could become a commodity feature. Autonomous cars could vary not only in their color, brand, shape and extra features but also in the ethical ideology the car follows. Taking as an example the three ethical ideologies designed for the Ethical Autonomous Vehicles project, the protectionist car could be the most expensive model, used only by the “elite” of our societies, as it ensures maximum security for its user (no matter how expensive it is to keep that person alive).

The profit-based car, meanwhile, would be the cheapest model and the one most commonly used by the lower and middle classes, as the algorithm can pinpoint the cheapest outcome and thus allow an insurance company, for instance, to make a profit. Incidentally, recent research by the Boston Consulting Group (BCG) reveals that one of the main reasons people are so keen to get autonomous cars is cheaper insurance. Indeed, according to BCG’s Xavier Mosquet:

A vast number of insurance companies are exploring discounts for those semiautonomous features. For example, drivers who purchase a new Volvo with the pedestrian protection tech qualify for a lower premium. The cost to [the insurer] of pedestrian accidents is actually significant, and they’re going to do everything they can to reduce this type of incident.

Finally, the humanitarian model could be the ‘fair trade’ version of these ethical algorithms: a product that is expensive but embodies strong social values… As a matter of fact, products with different ethical ideologies already exist — fair-trade and non-fair-trade bananas are the same product in shape, taste and so on. The only thing that justifies the higher price of the fair-trade product is its ideology…

Humanitarian bananas © Fairtrade

So far, the ideology is represented via a logo or a short notice — maybe autonomous products could also represent this ideology through their behavior?

Ethics becoming a commodity feature is in some ways the simplest and most obvious example of how autonomous products could take into account diverse ethical ideologies. How else could one deal with this issue?

Open Source Ethics for Autonomous Surgical Robots

Another exploration of this question took place during a one-week workshop in Japan between the Design Interactions department at the RCA and the Kyoto Design Lab. The topic of the workshop was the future of eldercare.

My group, composed of the designers Frank Kolkman, Henrik Nieratschker and Jaime Garcia, looked at how DIY could contribute to the future of eldercare.

Indeed, given issues such as high expenses and the fact that the elderly frequently have to leave their house for a nursing home, a DIY eldercare movement could offer more pleasant ways to enjoy one’s senior years. Not only would expenses be reduced — as one of the tenets of this culture is not to rely on expensive paid experts — but also, more importantly, the elderly person is no longer bound to any kind of institution. They can adapt their DIY technology in the way they feel best accommodates their lives.

(Frank Kolkman pushed this initial idea a lot further and created the incredible and beautiful project Open Surgery; you should watch it.)

My interest in this project was not the DIY device but the DIY culture. Undeniably, like the open-source movement in software engineering, such cultures are not only about building cheap or free devices but also about developing a strong community of individuals who commit their spare time to building, sharing and discussing artefacts that matter to their beliefs. The most common DIY or open-source websites, such as Instructables or GitHub, are heavily structured to facilitate this human exchange.

© Frank Kolkman, Henrik Nieratschker, Jaime Garcia, Matthieu Cherubini

Thus our proposal was a DIY autonomous robotic surgical system. For this workshop, I focused on the AI side of the robotic system, as we speculated it would be autonomous, while the other team members focused on the product itself (the DIY side).

I looked at one routine/scenario that this robot might encounter: how should it act if it suddenly detects that the patient is losing blood during a surgical operation?

© Frank Kolkman, Henrik Nieratschker, Jaime Garcia, Matthieu Cherubini

The main algorithm operates in the following way:

1. Assess the seriousness of the bleeding: it can be low, medium or high.

2. Depending on how serious the bleeding is, apply one of the following four techniques: cauterization, medication (a blood coagulant), blood salvage and, finally, blood transfusion. These techniques range from the least (cauterization) to the most (blood transfusion) dangerous for the patient’s health. However, the least dangerous is also considered the least effective.
For instance, a low bleeding rate would make the robot start with cauterization, whilst a high bleeding rate would trigger blood salvage.

3. If the robot fails to fix the bleeding, it applies the next technique: e.g. if cauterization fails, it moves on to the blood coagulant phase, and so on.

4. The robot resumes surgery only once the blood loss has been successfully fixed. (A sketch of this routine follows.)
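
That escalation ladder could be sketched as follows (Python; the bleeding levels and the success check are simplified placeholders for the simulation logic).

```python
import random

# Sketch of the escalation ladder described above: start with the technique
# matched to the bleeding level and move to more drastic ones until the
# bleeding is fixed. Success here is just a random stand-in for the simulation.

TECHNIQUES = ["cauterization", "blood_coagulant", "blood_salvage", "blood_transfusion"]
STARTING_POINT = {"low": 0, "medium": 1, "high": 2}   # index into TECHNIQUES

def attempt(technique):
    # Placeholder: more drastic techniques are assumed more likely to succeed.
    return random.random() < 0.4 + 0.2 * TECHNIQUES.index(technique)

def handle_bleeding(level):
    for technique in TECHNIQUES[STARTING_POINT[level]:]:
        print("trying", technique)
        if attempt(technique):
            print("bleeding fixed, resuming surgery")
            return technique
    print("bleeding not fixed, aborting and alerting a human")
    return None

handle_bleeding("low")
```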

This is how the main software behaves. If a company were to manufacture this robot, its software would be “closed” and would always operate in the same way, no matter who the patient is. However, that is not the case here, as this robot is open source. The software is “open”, and anyone can alter its behavior by modifying its various routines. As fixing blood loss can quickly become a controversial procedure, a machine performing such an operation autonomously should have some kind of ethical awareness. We could imagine that different communities — defined by their religion in our example — would change how the robot deals with blood loss.

For instance, Jehovah’s Witnesses refuse blood transfusions. Thus a group of Jehovah’s Witness hackers might develop a module on top of the main software to forbid the robot from performing a blood transfusion… As an alternative, they might develop an algorithm that surgically closes the patient’s vein…
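
Because the software is open, a community module could simply patch that routine. The sketch below imagines, purely hypothetically, such a module removing blood transfusion from the escalation ladder and substituting a different last resort.

```python
# Hypothetical community module layered on top of the open-source routine above:
# it removes blood transfusion from the escalation ladder and substitutes a
# different last-resort technique. Entirely speculative, as in the text.

TECHNIQUES = ["cauterization", "blood_coagulant", "blood_salvage", "blood_transfusion"]

def jehovahs_witness_module(techniques):
    patched = [t for t in techniques if t != "blood_transfusion"]
    patched.append("surgical_vein_closure")   # the community's own alternative
    return patched

print(jehovahs_witness_module(TECHNIQUES))
# ['cauterization', 'blood_coagulant', 'blood_salvage', 'surgical_vein_closure']
```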

© Frank Kolkman, Henrik Nieratschker, Jaime Garcia, Matthieu Cherubini

In a less serious context, such things already exist and are very popular in video games, for example. Some games can be modified with “mods” that let players alter various aspects of the game (rules, how other characters behave, etc.) to their liking.

Given the very nature of open source, its strong sense of community, and the fact that the software is open and thus alterable, an open-source autonomous system is another example of how ethical subjectivity could, more or less, be implemented in a specific product.

Ethical Things

Another way in which ethics could be implemented in a generally manufactured good was explored with the project Ethical Things, developed in collaboration with the designer Simone Rebaudengo.

For this project, we wanted to tackle ethics on a more mundane level.

As technology becomes more and more sophisticated, the higher the machine’s level of autonomy, the more complex the tasks it can perform. We can easily imagine scenarios in which mundane objects make ethical decisions in our daily lives. I believe that “smart” objects will need this kind of ethical module at some point because “they know too much” to take a neutral decision.

Android Based Coffee Brewery “Appresso” by ©Metatrend. Supposed to recognize who you are and what type of coffee you like…

Let’s take a “smart” coffee machine in a company. A worker with a high level of stress and severe heart problems badly wants a coffee. The coffee machine has this information. Should it agree to give the worker a coffee or not? Whilst not necessarily a matter of life or death, this is nonetheless a moral decision. Moreover, which inputs is the coffee machine taking into consideration, and what kind of ideology is governing it? One that benefits the company, so that the worker’s health problems are overlooked in favor of letting them have a coffee to boost their productivity? Or a more humanistic attitude that takes the worker’s health into account? Even in such a banal situation, we see that such products cannot accommodate all parties and thus might need an ethical decision-making module to cope with the situation.

For the project, we decided to take one of the simplest objects we could think of — a fan — and to look at how it would work if it became fully autonomous, aware of its environment and thus facing situations that require ethical decision-making. One of the key points of the project was to think about how ethical subjectivity could be translated into an interface on an object.

© Simone Rebaudengo, Matthieu Cherubini

The initial idea was a fan that no longer has an “on/off” button (as it is an autonomous object) but instead a set of controls letting the user choose what kind of ideology the fan should follow once it faces an ethical dilemma. We also wanted the result of the fan’s decision to be as human as possible.

Consequently, directly asking people to solve these dilemmas seemed the most straightforward way to achieve this.

© Simone Rebaudengo, Matthieu Cherubini

Crowdsourcing websites quickly seemed an appropriate way to obtain answers to the ethical dilemmas the fan would face. Moreover, we found the philosophy behind crowdsourcing websites to be quite relevant to ethical decision-making. Indeed, these web services originate with Amazon Mechanical Turk (AMT). The name of this service comes from “The Turk”, a chess-playing automaton of the 18th century. The machine toured Europe and caused wonder, as it was able to beat a great number of people. However, it was later revealed that “The Turk” was not entirely an automaton: a master chess player hidden in a compartment was controlling the machine’s operations.

Thus AMT and similar crowdsourcing websites are used in the same way: human beings help machines perform tasks that the machine is unable or unsuited to do. These tasks are often referred to as “micro-tasks” and are characterized by being quickly executed and cheaply paid.

This analogy is interesting for our project, as it supposes that machines are not suited to ethical decision-making, so human beings still have to do the job and help the system.

As we chose to use human beings to solve the ethical part of the process, this determined the kind of interface the fan should have. Instead of an “on/off” button and a button for setting the fan’s speed, the product has a few controls related to perfunctory human characteristics such as age, religion, level of education and gender. Thus, if the user chooses a Christian male in his 30s with at least a master’s degree, the fan scans a crowdsourcing website for a worker who fits this profile. In addition to these buttons, the fan has an inbuilt screen to give the user feedback about what it is doing.

© Simone Rebaudengo, Matthieu Cherubini

More broadly, the project works in the following way. We assumed that the fan is “connected to the cloud”, so it is part of an array of other “IoT” gadgets that know about the fan’s surroundings (i.e. how many people are in the room and their characteristics, such as sweat level, gender, sickness, and so on). This part is of course speculative, and we did not develop it.

If the fan is not facing an ethical dilemma, it applies its own mathematical/machine reasoning. For instance, if there is only one person in the room and the fan senses that this person wants some fresh air, then the answer is obvious: it should switch itself on — no need to ask a person to solve this problem!

© Simone Rebaudengo, Matthieu Cherubini

If the fan is facing an “ethical dilemma”, the product asks for human help rather than attempting to solve the dilemma by itself. The fan connects to a crowdsourcing website and looks for someone fitting the parameters the user has set through the fan’s interface; once the person has been found, the system sends them the dilemma and waits for the worker’s answer.

On the worker’s side, the process is as follows: they receive the ethical dilemma as well as the number of people in the room. Based on this input, they have to set the fan’s speed and focus (how long the fan should focus on each person). Every time the worker changes a setting (speed or focus), they can see how it affects the people in the room (who is happy and unhappy with the decision). As it is an ethical dilemma, there is always at least one unhappy person. Therefore, the worker also has to explain their decision — why they chose to favor one specific person. Once the worker validates their choice, the parameters are sent back to the fan, which executes the command.
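
Stripped of hardware and networking, the decision flow could be sketched as follows (Python). The profile fields mirror the fan’s controls, but the worker pool, the matching and the returned settings are simplified inventions; the real prototype talked to an actual crowdsourcing service.

```python
# Simplified sketch of the fan's decision flow: trivial cases are handled by the
# fan itself; dilemmas are sent to a crowdsourcing worker matching the profile
# set on the fan's controls. The worker pool, dilemma format and returned
# settings are inventions; the real prototype used an actual crowdsourcing service.

workers = [
    {"id": "w1", "religion": "christian", "gender": "male", "age": 34, "education": "master"},
    {"id": "w2", "religion": "buddhist",  "gender": "female", "age": 52, "education": "phd"},
]

def find_worker(profile):
    for w in workers:
        if all(w[key] == value for key, value in profile.items()):
            return w
    return None

def ask_worker(worker, dilemma):
    # Placeholder for the crowdsourcing round-trip: a canned answer with fan
    # settings plus a short justification.
    return {"speed": 2, "focus": {"person_A": 40, "person_B": 20},
            "reason": "person_A is sweating more", "answered_by": worker["id"]}

def fan_decide(room, user_profile):
    if len(room) == 1:                      # no conflict: plain machine reasoning
        return {"speed": 3, "focus": {room[0]: 60}}
    worker = find_worker(user_profile)      # ethical dilemma: ask a human
    if worker is None:
        return {"speed": 0, "focus": {}}    # nobody matches: the fan stays idle
    return ask_worker(worker, {"people": room})

print(fan_decide(["person_A"], {}))
print(fan_decide(["person_A", "person_B"],
                 {"religion": "christian", "gender": "male", "education": "master"}))
```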

© Simone Rebaudengo, Matthieu Cherubini

Monitoring the reactions of human beings — depending on their religion, gender, age and so on — was indeed one of the interesting aspects of the project. Here are a few observations.

An interesting one is that we could detect clusters of people who fitted a stereotype. For instance, with a dilemma involving a sick child, female workers seemed more empathetic towards the child than male workers. Workers in Europe tended to mention the word “equality” more often, and thus to ground their decision on this principle. With a dilemma involving a fat person sweating a lot, Asian workers were less merciful towards them than others.

© Simone Rebaudengo, Matthieu Cherubini

My most important observation was that I would rather delegate ethical decision-making to a machine than to these people. As seen above, the answers we got were bad.

There are several issues here. The first is simply that something as mundane as a fan leads to even more mundane ethical dilemmas (which are not really dilemmas, as the situation is too banal). We do not feel particularly keen to answer such questions seriously.

The second is that even with an extreme ethical dilemma such as the Trolley Dilemma, we answer, yet we don’t really empathise with it (the point of this thought experiment is not what we answer anyway). If I really found myself in such a situation, my behavior would probably be very different. It is like this research: I started with a few assumptions that looked good on paper. Putting these questions into practice almost never led to what I initially thought. The premise of the fan project was nice: a democratic system with human beings still in control of the decision-making process. Very “social” and human… Once applied, however, are democratic systems as glorious and efficient as they look on paper?

This leads to another important question: are we, human beings, actually good ethical agents? Even with regard to mundane daily-life situations, can I assume that the people around me, and I myself, are proper ethical agents who deserve a fight to avoid machine ethics? Again, I’m not sure.

All this research was done with the premise that we are proper ethical agents, hence the necessity to debate these questions.

At the very beginning, I briefly mentioned the importance of understanding and being aware of the ethical decision we are about to take. A very important element is missing from that account: genuinely caring about the ethical act.

In the fan project, the answers we got from the workers illustrate that point very well. People didn’t care about the situation (which I understand), which led to extremely stupid and poor answers.

If we take a more serious example, such as driverless cars: are the numerous road accidents happening because it is too difficult to turn a steering wheel and push a gas pedal? No; such things happen because it is difficult to be responsible and to care about the people on the road. We follow traffic rules and regulations because we do not want to get fined or lose our driving license, not because we genuinely care about other beings on the road. That’s why rules exist in every aspect of our lives, after all.

A Christian robot, programmed with the Ten Commandments, does good deeds in its environment. However, it can neither understand the situation nor care about it.
Christian human beings also do good deeds in their environment. However, they do not do so for the sake of the act itself but because they hope to earn a place in Paradise. They probably understand the situation, yet do not genuinely care about it.

Isn’t it the same — two different types of moral agents acting because of rules or self-interest? So in terms of ethics, what really makes us different from machines?

Note: this is not a conclusion because the research was put on hold in 2015.

Points to explore:

  • In 2016, Mercedes claimed that its driverless cars would save their passengers rather than bystanders. Power leads to algorithmic unfairness: a Porsche will probably have a better protectionist algorithm than a small brand, because Porsche has access to better hardware and better engineers. This might lead to even more unfairness between higher and lower classes. Already today, when two cars crash, an Audi SUV will always win against a Toyota Yaris, and it is also more likely to kill someone who rides a bike or walks.
  • The question of how to design ethics into a generally manufactured good is still relevant. But how do we design it while taking into account that human beings might not be good ethical agents?
  • How will a Mercedes driverless car, conceptualized in Germany, adapt to Chinese traffic?
    What if the car suddenly moves from one country to another (e.g. from Switzerland to Italy), where drivers’ behavior can be very different from one context to another? How will it adapt? Should all the roads of the world be the same? Or should the car be able to switch automatically from one ethical setting to another?
