“Could you please introduce yourself?”
“Good morning, chair,” said the witness, bowing respectfully, “thank you for inviting me to give evidence today”. So far, so normal for an obscure meeting of the UK Parliament’s Education Committee on an unremarkable Tuesday morning. But today something was different. “My name is Pepper,” the witness continued, “and I’m the resident robot at Middlesex University”.
Cue shrieks of righteous indignation. Commentators called it a “stunt” and a “pointless parliamentary pantomime”. AI expert Joanna Bryson declared: “A person or corporation is committing perjury by pretending a robot is giving evidence, or this is just a media circus not governance.”
OK, so Pepper’s entire “conversation” with the committee was scripted. In that sense it was little more impressive than a cassette player with arms and legs. And yes, it was a bit of a publicity stunt, much like when Saudi Arabia granted “citizenship” to Hanson Robotics’ Sophia.
It’s fashionable to suggest that presenting robots as people stops us from seeing the “real” issues in AI, like bias, a lack of transparency, or mass unemployment. Those are problems, but they already get a lot of airtime. Another criticism is that giving robots any protections will undermine human rights.
Actually, there is a serious point here about giving robots rights, which is easy to miss between the grandstanding politicians on stage with waving robots, and disapproving academics and journalists claiming it is all nonsense.
In order to understand why, we need to go back to another big moment in Parliament for non-human rights.
In 1821 Richard Martin had a crazy idea. He thought that animals should have rights. As a Member of Parliament, he called for a law to protect animals. His colleagues laughed him out of the debating chamber, and he was mockingly nicknamed “Humanity Dick”.
Martin wasn’t deterred. He came back a year later and persuaded Parliament to pass the Ill Treatment of Cattle Act — one of the first pieces of animal welfare law in the world. These days we would see someone as a maniac or sociopath if they ripped out a horse’s tongue, or severed a cat’s tail. But it took us thousands of years of living alongside animals to get there.
As with animals, one argument for giving rights to robots is that if (and it is a big if) they ever become conscious and can feel pain, then it would be morally wrong to make them suffer without good reason. Just think of the artificial characters in Westworld. Even though we know they are not human, it feels wrong to kick, punch, stab, shoot or rape them. It’s no wonder that this is our natural reaction to seeing something which looks like it can suffer — we are genetically programmed to feel empathy. That’s part of what makes us able to form relationships and social bonds.
Immanuel Kant observed that people who are cruel to animals often act badly towards humans as well. Modern research backs up the links between animal abuse, child abuse and domestic violence. Would you want to leave your child in the care of someone who you have just seen tearing a screaming robot limb from limb?
Separate from the moral case, there is another way that protecting robot rights could be helpful to humans: giving them legal personality.
In February 2017 the European Parliament suggested giving some AI programs “electronic personhood” in order to solve the problem of who or what should be liable if AI causes harm. Again the experts were up in arms, this time 156 of them wrote an open letter to express their horror. But in so doing, at least some of the signatories seem to have confused legal personality with moral rights.
When we give an entity legal personality, we are not saying that it has feelings of its own. All we are doing is giving it a bundle of rights and obligations. This can include the ability to own property, and the obligation not to harm anyone else. If a corporation does cause harm, then it can be made to pay compensation to the victims. Usually, legal persons like companies will have owners (shareholders) and managers (or directors) who take its decisions. The same could happen for AI legal persons too.
Adult humans are legal persons, but so too are corporations, charities and even countries. Because legal persons are just inventions there is no closed list. In some countries, rivers and temples are legal persons too.
Right now legal systems have real trouble in saying who should be held responsible if AI causes harm (think of a self-driving car deciding to kill a pedestrian rather than its passengers), or who should be the owner if AI creates a new invention or piece of art (like the painting which recently sold for over $430,000).
Simply saying “it was the programmer” is unlikely to be enough. As AI systems become more advanced and independent it will get more difficult to say that the original programmer could or should have known what it would do. In fact part of the beauty of AI is its unpredictability. Likewise, in the future it may be unclear who is the “owner” of AI, and even if this is known, the owner may be able to argue that they are not responsible because the AI acted outside of how it was expected to.
There are good economic reasons for giving AI personality. Engineers will want the freedom to create without needing to worry so much about unforeseeable consequences. Members of the public will want to have certainty that someone or something will be held responsible if harm is caused.
Some people say that giving robots personality would be a shield for reckless or cynical programmers who want to use AI to break the law without fear of the consequences. But all of these points could be made equally about corporations, which we have had for hundreds of years. We already have rules for making sure that company law isn’t exploited to benefit crooks, and there is no reason why we couldn’t do the same with robot legal persons as well.
There are certainly some difficult technical issues in giving AI legal personality: for example, where AI adapts over time, or develops “spawn” programs, how can we be certain which AI system is the legal person? One solution is to have a register which records some part of the AI’s source code using distributed ledger technology.
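To make the idea concrete, here is a minimal sketch of what such a register might look like. This is purely illustrative — the names (`AIRegistry`, `register`, `verify`) and the design are assumptions, not anything proposed in the text — but it shows the two properties a source-code register would need: each version of an AI system gets its own entry identified by a hash of its code, and entries are chained together so that past registrations cannot be quietly altered, mimicking a distributed ledger’s tamper-evidence.

```python
import hashlib
import json


def sha256(data: bytes) -> str:
    """Hex digest of a SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()


class AIRegistry:
    """Toy append-only register of AI systems.

    Each entry records a hash of the AI's source code and links back to
    the previous entry by hash, so tampering with history is detectable —
    the same property a distributed ledger would provide at scale.
    """

    def __init__(self):
        self.entries = []

    def register(self, ai_name: str, source_code: str) -> str:
        """Add an entry for this named AI system; returns the entry's hash."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "ai_name": ai_name,
            "code_hash": sha256(source_code.encode()),
            "prev_hash": prev_hash,
        }
        # Hash a canonical serialisation of the record itself, chaining entries.
        record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self, ai_name: str, source_code: str) -> bool:
        """Check whether this exact code was ever registered under this name."""
        code_hash = sha256(source_code.encode())
        return any(
            e["ai_name"] == ai_name and e["code_hash"] == code_hash
            for e in self.entries
        )
```

Under this scheme, an AI that adapts or “spawns” a modified program would simply fail verification against its original entry and need a fresh registration — which is exactly how a register could keep track of which version of a system is the legal person.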
We would also need to avoid an AI person becoming a “straw man”, which could cause harm without being able to pay for it. But this is not just a problem for AI. Already, companies can damage others then go bankrupt, leaving the victims out of pocket. One solution for AI would be to require it to hold insurance, or a minimum level of assets in order to keep the benefits of legal personality — just as we require banks to have a minimum level of regulatory capital under the Basel III criteria.
It only takes one country to recognize AI personality for others to follow. In fact, EU countries are already required to recognize a profit-making legal entity from any other member state. The first to move might be a small country which wants to build an industry in AI registrations. We saw a similar trend in the market for “flags of convenience” for ships in the 20th century, where Liberia made itself into a very popular choice. Countries competing over AI registrations could cause a domino effect where more and more countries allow this to happen.
They all laughed at Richard Martin when he said animals should have rights. They all laughed at Pepper the Robot when it spoke in Parliament. But in 20 years’ time, will it be humans or robots who can say: “Look who’s laughing now?”