AI Behaving Badly

Matthew Biggins
5 min read · Sep 8, 2017

“All our lauded technological progress — our very civilization — is like the axe in the hand of the pathological criminal.” —Albert Einstein

Okay, this video might not have been exactly what Einstein had in mind, but the point stands. Sometimes we accidentally create monsters when we rush to control something we don’t fully understand.

So far in this series, we have seen how traditional ethics has become ineffective in modern technological systems and why AI without an ethical foundation poses such a great danger. Now, to drive the point home, let's look at real-world examples of the AI of today and the near future.

In late March 2016, Microsoft released a Twitter chatbot named Tay. It was designed to learn to communicate by first observing and then imitating the users who interacted with it on Twitter. Within a day, Tay's rhetoric became so hateful and racist that Microsoft was forced to pull it offline and recalibrate its algorithms. The next time Tay went online, it began using Nazi rhetoric within its first day. Needless to say, Microsoft pulled Tay again and began damage control.
