Why I’m far more scared of other humans than I am of Skynet

Cecilia Unlimited
3 min read · Apr 23, 2018

This week, the House of Lords AI Select Committee brought out their report on artificial intelligence. I’m still wading through it (in my defence, it’s been hot and sunny here in the UK, which is a pretty unusual state of affairs), but the tl;dr summary includes a code of ethics that the Committee proposes the UK should adopt around AI.

One item struck me particularly: "The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence." It's that word, autonomous, that makes it clear what the Committee was thinking about here: Skynet. They're envisaging a world in which AI enslaves us all, Terminator-style, and in that respect this part of the code feels like the beginnings of the Three Laws of Robotics Asimov came up with all those years ago in I, Robot.

But what the word autonomous implies is that we can use AI to hurt, destroy or deceive human beings, as long as there's another human at the controls, as it were. This feels worse, somehow, and reminds me of that tired old NRA trope: 'guns don't kill people, people do'. Well, yes, but as we see in the USA, having guns around makes it far easier to kill people. Beating someone to death is really quite a lot of effort, compared to pulling a trigger.
