The challenge from AI: is “human” always better?

Is machine or human decision-making better? Do robots have rights? Will human rights survive machine and human evolution?

Sherif Elsayed-Ali
Data & Society: Points
9 min read · May 24, 2018


This is the third blogpost in a series on Artificial Intelligence and Human Rights. It is the result of personal reflections following the AI & Human Rights Workshop at Data & Society in New York. Opinions are my own and do not necessarily reflect the views of my employer.

Most modern human cultures, religions, and legal systems are built around the assumption of human supremacy. Even when not stated explicitly, the underlying belief is that humanity is superior to everything else on the planet.

This is certainly true in some ways. Humanity’s impact on the Earth’s ecosystems is far greater than that of any other species. Humans are vastly more capable than other species at making and using tools, developing and acquiring knowledge, and applying that knowledge to make technological advances. With these advances, we have proven beyond doubt that we are the most powerful species on the planet today.

In specific areas, however, we are lacking: other species are stronger and faster, and some have longer lifespans, better healing abilities, and more acute senses. Overall, though, humanity has been on a steady trajectory of increasing its domination over other species since the cognitive revolution and, more recently, the agricultural revolution. But in geological terms, this 70,000-year period of human ascendancy is no more than a blip.

We cannot, and should not, assume that humanity’s current dominant position means that humans are necessarily better in all things than all others on the planet. The pace of technological advancement adds urgency to this discussion; we are increasingly relying on automated systems in numerous aspects of life. Advances, from artificial intelligence and robotics to genetic engineering, could one day challenge humanity’s position.

In this regard, two issues in particular come to the forefront: firstly, the desirability of human decision-making over machine/automated decisions, and secondly, so-called robot rights.

I. Human vs. machine decision-making

The question of who should make decisions, humans or machines, sometimes gets mixed up with the question of who should have control over a system or situation. The two things are very different and the confusion can be at least partially attributed to overblown claims about the current and near future capabilities of AI systems.

As a starting point, there are innumerable automated decisions taking place every day in commercial and public systems, from financial transactions to rail signaling, university admissions, and internet traffic routing. Modern societies couldn’t function without a high degree of automation, or at least nowhere near as efficiently or productively. We don’t think much about these already-existing automated systems because they have become normal. They are not perceived as machines making decisions, just as machines performing their function.

While this multitude of small decisions happens without direct human intervention, each decision takes place within a framework designed and controlled by humans, within parameters that humans have set.

These automated systems can be changed, corrected, or eliminated altogether. Humans are in control, but one step removed.

The addition of AI can sometimes remove humans further (for example, when deep learning is used to improve the functioning of a system) because it reduces human understanding of the mechanics of the system’s decision-making. However, the system designer and operator retain visibility of the inputs and outputs (although the complexity of massive data sets reduces this visibility in AI and non-AI systems alike), can alter the system’s parameters, can reject or implement its decisions, and, as with systems not augmented by AI, can shut the whole thing down. Even with less explainability in an AI system, humans are still ultimately in control, at least of when an opaque system should and should not be used.
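As a rough illustration (not a description of any real system), here is a minimal Python sketch of these control points: an opaque model wrapped so that the operator keeps visibility of inputs and outputs, can alter a parameter, can override any individual decision, and can shut the system down. All the names here are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple


@dataclass
class SupervisedSystem:
    """Hypothetical wrapper keeping a human operator in ultimate control."""
    model: Callable[[float], float]      # the opaque decision function
    threshold: float = 0.5               # operator-adjustable parameter
    enabled: bool = True                 # the operator's off switch
    audit_log: List[Tuple[float, float, bool]] = field(default_factory=list)

    def decide(self, x: float, human_override: Optional[bool] = None) -> bool:
        if not self.enabled:
            raise RuntimeError("system shut down by operator")
        score = self.model(x)            # inputs and outputs stay visible
        decision = score >= self.threshold
        if human_override is not None:   # a human can reject or replace
            decision = human_override    # any individual decision
        self.audit_log.append((x, score, decision))
        return decision


# The operator can inspect the log, tighten a parameter, reject a
# decision, or pull the plug entirely:
system = SupervisedSystem(model=lambda x: 0.8 * x)
system.decide(0.9)                        # machine decides on its own
system.threshold = 0.9                    # alter the parameters
system.decide(0.9, human_override=False)  # human rejects the decision
system.enabled = False                    # shut the whole thing down
```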

Can an AI-based system escape human control? Yes, but so can run-of-the-mill computer worms.

Maintaining human control over software, robots, and AI systems is the smart thing to do. Without this control, there would be serious risks of automated systems producing results that fail to achieve, or even undermine, their intended purpose; there would also be no real accountability for when things go wrong. This doesn’t mean, however, that human decision-making is necessarily better.

We know from experience and decades of scientific research that humans have deep cognitive biases. We can overcome them with knowledge, by conscious effort, and with rules and procedures that minimize the impact of these biases.

A passage from the “Mock Turtle’s Story” chapter of Alice in Wonderland is a telling allegory for how people’s subjectivity and unpredictability can affect those around them. The impact is greatest when it comes to decision-makers who have power over our lives.

‘When I’m a Duchess,’ she said to herself, (not in a very hopeful tone though), ‘I won’t have any pepper in my kitchen at all. Soup does very well without — Maybe it’s always pepper that makes people hot-tempered,’ she went on, very much pleased at having found out a new kind of rule, ‘and vinegar that makes them sour — and camomile that makes them bitter — and — and barley-sugar and such things that make children sweet-tempered. I only wish people knew that: then they wouldn’t be so stingy about it, you know — ’

But if it’s not the pepper, vinegar, and camomile of Alice’s musings that affect people’s temper and decisions, it’s other things: hunger, lack of sleep, happy news, sad news, the billboards we see as we walk through a busy city square.

It turns out, for example, that parole boards can become significantly more lenient with some lunch in their bellies.

Human control with the best decision-making possible

We know that people are, collectively, capable of making rational rules and procedures that can be improved and perfected over time. These are developed over relatively long periods (weeks, months, and years, rather than seconds, minutes, or hours). We also know that we lack the precision and consistency of machines. Machines, on the other hand, cannot make judgments based on human value systems when those diverge from their programming, and such judgment is a necessary requirement for many societal functions.

The answer should therefore be to combine ultimate human control over automated systems (AI or not) with the best decision-making framework.

In some cases, decision-making may be purely automated, because the speed and scale of operations required can only be achieved by machines, for example when controlling the routing and scheduling of food deliveries to optimize delivery time and fuel consumption. In others, it may be purely human, for example deciding the winner of a literary competition, because people value human opinion on art more than a machine’s. But increasingly, the vast majority of everyday decision-making will be machine-aided.

Sometimes there will be no simple answer. Decisions on mortgages, recruitment, and criminal justice, for example, can be rife with discrimination. Could algorithms be designed to reduce human bias? Possibly, but even then, wouldn’t we want humans to have the final say and to make decisions that benefit people even when they fall outside the normal rules?

People should maintain control over whether a system operates and over the parameters by which it operates and makes decisions; the extent to which these parameters are predetermined will differ depending on how far a system is designed to be self-optimizing or self-improving.

With this human control over automated systems in place, who (or what) makes a given decision should be determined by what will produce the best outcomes. This is where ethics and laws come in, to define what a “best outcome” means, and this, again, is a uniquely human decision. If self-driving cars produce better outcomes on highways than human-driven cars, countries may want to consider mandating their use. If AI systems can improve medical diagnostics, we may generally want to use AI for diagnosis, combined with a physician’s opinion in particular cases. Even if AI systems become more accurate at target selection, we may decide that the “kill decision” in a war must always be reviewed and taken by a human, because anything else would make war too easy.
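As a sketch of what machine-aided decision-making with a human final say could look like, consider a simple rule that escalates low-confidence or high-stakes cases to a person. The confidence-reporting model and the physician function below are hypothetical stand-ins, not any real diagnostic system.

```python
def machine_aided_decision(case, model, human_review, *,
                           min_confidence=0.95, high_stakes=False):
    """Let the machine decide routine cases; escalate the rest to a human.

    Returns (decision, decided_by). `model` is assumed to return a
    (decision, confidence) pair; both it and `human_review` are
    invented stand-ins for illustration.
    """
    decision, confidence = model(case)
    if high_stakes or confidence < min_confidence:
        return human_review(case, machine_suggestion=decision), "human"
    return decision, "machine"


# Example: a diagnostic model defers to a physician when it is unsure.
model = lambda case: ("benign", 0.80)        # (decision, confidence)
physician = lambda case, machine_suggestion: "order biopsy"
print(machine_aided_decision("scan-042", model, physician))
# -> ('order biopsy', 'human')
```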

In conclusion, I believe we should not allow the belief in human exceptionalism that underlies our cultural and religious systems to translate into a belief that human decision-making is, a priori, always better. We should decide what values we want our machines to follow, we should control when and how they implement them, and otherwise determine who, or what, should make decisions to achieve the best outcomes for these values.

II. Robot rights, human rights, and the planet

The fact that there is serious debate about robot rights is astonishing and enlightening in equal measure. The question of whether human rights apply to robots can be comprehensively and unequivocally answered by human rights law: no. Robots can’t have human rights because they are not human, just as animals can’t have human rights. Nor can they have any rights at all, since they are neither conscious nor alive.

Sophia the robot. Photo by ITU Pictures.

While the answer is very clear cut, at least from the standpoints of international law and biology, philosophically it raises a number of questions that we must pay attention to:

Firstly, should we treat certain robots as if they had rights? Robots designed to appear human- or animal-like, whether fully or partially, pose particular ethical dilemmas, because their “mistreatment” could theoretically normalize the mistreatment of real people and animals. For example, the absence of rules of conduct around the use of sex robots could encourage behaviors that would be criminally sanctioned if inflicted on a human being. This risk increases the more lifelike robots become.

Demonstration of Duplex, Google’s new AI assistant, at Google I/O 2018

This issue does not apply only to robots, but also to disembodied AIs. This is one key reason why it’s important that AIs identify themselves as such, rather than pretend to be human. There is too much to lose in a world where we can’t tell whether those we’re talking to, or even watching on TV, are real people or a bunch of clever code.

Secondly, while AIs are not conscious today (and this is unlikely to change in the near future), we cannot discount the possibility that artificial consciousness could one day appear.

In reality, we have little understanding of how consciousness arises, which means we can’t be completely sure whether things are conscious or not. But if we must make an assumption, the safest one is that the internet, computers, and software in general are not conscious.

When could such consciousness emerge, and what form could it take? We don’t know. It could be decades or centuries away, and when (and if) it happens, it may be unlike anything we’re familiar with. A consciousness with the speed and power of the most advanced computers of its time will be very different from the consciousness of a cat, a dolphin, or a human. But if machines become conscious, they will also be alive, and they should have rights. Today, we can confidently say that robots and AI have no need for rights, but this could change.

Thirdly, if machines could one day become conscious and have rights, what does it mean for human rights? Human rights are rooted in the belief that all people are equal and seek to protect freedom, justice, and dignity. But the general restriction of rights to humans in international law goes back to our underlying cultural belief in human superiority.

While many societies attach some rights to animals under animal welfare or biodiversity laws (particularly to ones for which humans tend to develop personal affection, such as cats, dogs, and horses), we don’t consistently equate consciousness with rights. Cows, for example, can remember things for a long time, form strong friendships, and enjoy music; yet their treatment in industrial farming and meat processing reflects how little value most societies attach to consciousness in animals.

This is important for the future of human rights. In the next few decades, our very notion of what it means to be human could be shattered. Cyborg implants, neural laces, and gene editing could augment human cognitive and physical capabilities to the point where a small proportion of humanity becomes significantly “enhanced” compared to the majority.

Further ahead, a day might come when highly advanced, conscious artificial intelligence becomes a reality, and when humans living on other planets (e.g. Mars) diverge evolutionarily from Earth-bound humans.

Some of this may not happen, but some will. If we want human rights to survive well into a future where we may no longer be the most powerful species on the planet, we must adopt a broader continuum of rights that includes all conscious beings.

Sherif Elsayed-Ali is the director of global issues at Amnesty International.
