AI Ethics: We Need to Walk the Walk, Not Just Talk — Arm Blueprint

Andra Keay
Sep 21, 2020 · 6 min read
Andra Keay, Managing Director of Silicon Valley Robotics and Visiting Scholar at CITRIS People and Robots Lab

The ethics of androids and autonomous systems has fascinated me since childhood. Asimov's Laws of Robotics (the famous Three, plus the later Zeroth) unfolded in a series of classic cautionary tales describing just how badly those simple rules could and would go wrong. I have been searching for effective ethical rules for robots ever since.

Before we talk about the ethics of robotics and AI, we must understand the goals and limitations of any public discussion of ethics. Ethics are like emotions: everyone has some, but they aren't always positive, or, for that matter, equal, informed, or appropriate. We often try to fix this by oversimplifying: searching for solutions that fit all robots and all people at the same time.

One such example is the media's favourite, the trolley problem. But the trolley problem is a philosophical tool for exploring hypothetical situations, not a prescription for real-world action. It is deliberately light on context, and to solve real-world ethical challenges, context is vital.

Who, What, When, How, Why

Any workable approach must begin with 'who', and then follow with 'what, when, how and why'. In my experience, many published whitepapers, reports and policies restrict themselves to the 'what' and, if we're lucky, perhaps the 'why'. That may be informative, even educational. But they stop short of suggesting any actionable outcome, and can even come across as simple virtue signaling or corporate theater.

Growing preoccupation with AI ethics

Attempting to steer CEOs through the adoption of these technologies, PricewaterhouseCoopers (PwC) recently studied 59 AI ethics and principles documents from across the world. Certain topics are raised in almost all of them; others are not raised at all. And even though accountability is cited in more than 75 percent of the documents, it is only the accountability of the robots or the AI that is discussed, not accountability for the ethical principles themselves.

To put it another way, it doesn’t matter what sort of ethics principles, oaths, or guidelines you have if they don’t include calls to action and accountability metrics. Arm’s 2019 AI Trust Manifesto is perhaps better than many similar documents I’ve seen because it includes calls to action. But, by what metrics will we know the results of any actions?

Taking action is what matters right now

If you're still unclear on the difference between the ethics of AI and the ethics of AI robotics, I recommend reading the EPSRC Principles of Robotics. In 2010, the UK's Engineering and Physical Sciences Research Council (EPSRC) began holding workshops with experts across many disciplines in order to minimize the issues and maximize the social benefit of these new technologies. The EPSRC's five simple principles for robotics and AI can be related to existing social and legal frameworks, and are intended for the roboticists, not the robots; for purely software-based AI, they are for the builder, not the 'brain'.

These principles are the closest we get to actual advice on taking action, and taking action is what matters most right now. Actions will almost certainly differ from place to place, across different cultures, consumer laws and commercial regulations. But we can still get started right away. For example, the fifth of the EPSRC's principles, that robots should always be identifiable, could be introduced immediately. Around the world, the vast majority of vehicles must carry some form of registration number, license or identification plate, and this alphanumeric ID is usually required to be publicly visible.

License plates for robots?

How do we identify individual robots in a fleet of identical units? [ Image: Starship Technologies ]

The reasoning is obvious: vehicles are capable of harming those in their vicinity. Fleets of robots are still quite a novelty, but they are already found in some hospitals and supermarkets, and increasingly as 'cobots' in manufacturing environments. A deployment often consists of several robots from different manufacturers, yet these robots have no distinguishing features or markings. How can we expect to report an accident or an issue if we can't identify the robot?

And then, how can we be sure that a given robot, or its software, has not been hacked or hijacked? Before we can even ask who is responsible, we have to be able to identify which robot we are dealing with. Requiring robot registration and a visible ID gives us a good starting point.
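To make the registration idea concrete, here is a purely illustrative sketch of how a public robot registry might map a visible alphanumeric plate to the party legally responsible for the unit, much like a vehicle registry. All names, plate formats and fields here are hypothetical assumptions, not any existing scheme:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RobotRecord:
    """One entry in a hypothetical public robot registry."""
    plate: str          # publicly visible alphanumeric ID, like a license plate
    manufacturer: str   # who built the robot
    operator: str       # who is legally responsible for its deployment
    deployment: str     # where the unit operates


class RobotRegistry:
    """Toy registry: issue unique plates and look up who is responsible."""

    def __init__(self) -> None:
        self._records: dict[str, RobotRecord] = {}

    def register(self, record: RobotRecord) -> None:
        # Each plate may only be issued once, so an ID maps to exactly one unit.
        if record.plate in self._records:
            raise ValueError(f"plate {record.plate} already issued")
        self._records[record.plate] = record

    def lookup(self, plate: str) -> RobotRecord:
        # An incident report needs only the visible plate to find the operator.
        return self._records[plate]


registry = RobotRegistry()
registry.register(RobotRecord(
    plate="SVR-0042",
    manufacturer="Starship Technologies",
    operator="Campus Deliveries Inc.",   # hypothetical operator
    deployment="university campus",
))
print(registry.lookup("SVR-0042").operator)
```

The design point is simply that accountability becomes a constant-time lookup: once every deployed robot carries a unique, publicly visible ID, anyone reporting an accident can identify the responsible operator without knowing anything else about the machine.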

That's just one action out of many we can start taking now. In my experience studying waves of innovation entering our society, each new industry moves forward when companies proactively address issues that enhance the reliability, interoperability, accountability and quality of their products. The vaunted entrepreneur who 'asks for forgiveness, not permission' and 'moves fast and breaks things' does no one any good, and almost certainly doesn't build anything that lasts.

Silicon Valley Robotics is rewarding robotics companies that push the robotics industry forward with our inaugural Industry Innovation and Commercialization Awards. The deadline to submit an entry is September 22, and we will announce the first winners on October 22, 2020. Companies that exemplify good practices and build good robots should be recognized and rewarded.

I am also campaigning for the development of a global 'ethical ombudsperson' network for new technologies like robotics. The network would hear the complaints of ordinary people, collect evidence of the use and misuse of technologies, and then both inform people about best practices and hold people accountable for bad practices under local regulations.

One of the biggest challenges facing an ethical approach to a new technology is uncertain jurisdiction, alongside a lack of evidence of potential issues. Hence the proposal for a global ombudsperson network, which can collect and share information about ethical issues.

New technologies are moving rapidly, and they are very powerful. That means our approach to ethics has to move equally rapidly and be effective. We need to walk the walk, not just talk.

Join Andra at Arm DevSummit 2020

At Arm DevSummit, I'll be chairing a session with the Arm Gen 2Z ambassadors on Thursday, October 8. Join the 4IR: Designing an Ethical Future for the Next Generation session and hear how these optimistic architects of tomorrow hope to influence next-generation technologies.

Originally published at https://www.arm.com on September 21, 2020.
