Should we be treating algorithms the same way we treat hazardous chemicals?

Algorithms have the potential to cause injury, illness, and even death. Yet we’ve hardly scratched the surface of how to assess and manage their risks.

Andrew Maynard
EDGE OF INNOVATION

--

If algorithms have the potential to cause harm, do we need new risk assessment tools to ensure their safe use?

At first blush, algorithms and hazardous chemicals have precious little in common. One is created out of computer code, and exists in cyberspace; the other is made up of atoms and molecules, and forms a very real part of the world we physically inhabit.

Yet despite these differences, both have an uncanny ability to impact our lives in ways that are neither desirable nor always easy to predict. And because of this, there are surprising similarities in how the risks associated with the development and use of algorithms might be assessed.

The responsible development and use of algorithms has, of course, been a focus of discussion for some time. At one extreme there's the well-publicized yet rather speculative risk of superintelligent machines wiping out life on Earth. At the other, there are the closer-to-home risks of algorithms deciding who lives and who dies in a self-driving car crash, or the dangers of them reflecting the racial and gender biases of their creators.


Andrew Maynard is a scientist, author, and Professor of Advanced Technology Transitions at Arizona State University.