When Artificial Intelligence Goes Bad

Robert Rittmuller · Published in Predict · 4 min read · Aug 29, 2018


Disclaimer: There are statements about potential uses of AI within this post that some people may find disturbing.

In the beginning.

When the formal study of AI was conceived back in the post-WW2 era, its potential was heralded as a boon for mankind and viewed in an almost exclusively positive light. Thinking machines? What could be bad about that? Beyond science fiction’s killer robots and ghost-in-the-machine stories, AI still carries much of that positivity as we enter the era of autonomous vehicles, functional digital assistants, and recommendation engines that actually make good recommendations. But is the rise of functional AI hiding another, more sinister development, one with the potential to do serious harm to those on the wrong side of the algorithm? Bias, over-fitting, and the “black box” nature of AI all come to mind, but that’s not the danger I speak of here. It’s AI that is well trained, but trained explicitly to do a malicious task, that keeps me up at night.

When AI is meant to be bad.

Most media stories, commentary, and even some scientific studies have focused on what might happen when an AI component associated with some piece of technology goes wrong in a way that directly harms a human. The…



A devout technologist, I write about AI, cybersecurity, and my favorite topic, photography. https://www.rittmuller.com