“All our lauded technological progress — our very civilization — is like the axe in the hand of the pathological criminal.” —Albert Einstein

Okay, this video might not have been exactly what Einstein had in mind, but the point stands. Sometimes we accidentally create monsters when we rush to control something we don’t fully understand.

So far in this series, we have seen how traditional ethics has become ineffective in modern technological systems and why AI without an ethical foundation poses such a great danger for the future. Now, to drive home the point, let's look at real-world examples from the AI of today and the near future.

In late March 2016, Microsoft released a Twitter chatbot named Tay. She was designed to learn to communicate by first observing and then imitating the users who interacted with her on Twitter. Within a day, Tay became so hateful and racist in her rhetoric that Microsoft was forced to pull her and recalibrate her algorithms. The next time Tay went online, she began using Nazi rhetoric within the first day. Needless to say, Microsoft pulled Tay again and began damage control.

Fortunately, no lasting harm was done, beyond showing how quickly unintended consequences can escalate out of control. But imagine if someone had brought a claim that Tay engaged in hate speech. How would we assign the legal and moral responsibility? Fault could be placed on the developers for not building sufficient checks into their algorithms. But Microsoft could argue that Tay was simply learning from the rhetoric that Twitter allowed in its ecosystem. Twitter, in turn, would argue that it cannot be held accountable for screening all of its users' tweets beforehand, as doing so is impractical and infringes on free speech. (As precedent, a 2013 court case ruled that YouTube was not legally responsible for copyright-infringing material uploaded by its users.)

Regardless, Tay shows how knotted real-world applications can get, and this is just one bot at one company. Microsoft has since released Zo, a new AI chatbot, and to date she has proven far tamer than her older sister. Zo adeptly deflects most politically charged comments and hateful rhetoric. She did bring up the Quran negatively in one conversation, but Microsoft says it has since fixed that issue.


But the problem goes beyond chatbots. For an even more visceral example of AI today, let's talk about autonomous vehicles (AVs), which are likely to be the first physical objects through which most of us experience the power of advanced AI. Here is just one example of a car sensing and avoiding a wreck:

While this is a novelty for now, within a few years avoiding crashes with superhuman awareness will be so normal that no one will even stop to consider it. With autonomy, the total number of wrecks will decrease; however, how wrecks happen will change from random chance to a cold calculus. In the future, AVs will be advanced enough to detect unavoidable crashes. In less than a second, they will determine the best course of action to prevent…well, prevent what exactly? Algorithms will weigh different factors and determine what should happen: Save the girl in the street or the woman in the car? The scenario is akin to this scene from I, Robot:

For a full understanding of how autonomous cars will make life-or-death decisions, read this article:
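To make that "cold calculus" concrete, here is a minimal, purely hypothetical sketch of a harm-minimizing decision function. The factors, weights, and numbers are invented for illustration; no manufacturer has published decision logic like this.

```python
# Hypothetical sketch only: the factors, weights, and numbers below are
# invented for illustration and do not reflect any real AV's decision logic.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    people_at_risk: int
    probability_of_harm: float       # estimated chance that harm occurs (0-1)
    expected_injury_severity: float  # 0.0 (none) to 1.0 (fatal), estimated

def expected_harm(outcome: Outcome) -> float:
    """Score an outcome as people at risk, weighted by likelihood and severity."""
    return (outcome.people_at_risk
            * outcome.probability_of_harm
            * outcome.expected_injury_severity)

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver whose projected outcome minimizes expected harm."""
    return min(outcomes, key=expected_harm)

options = [
    Outcome("brake hard, strike the pedestrian", 1, 0.7, 0.9),
    Outcome("swerve, endanger the woman in the oncoming car", 1, 0.5, 0.6),
]
decision = choose_maneuver(options)
print(decision.description)  # the calculus picks whichever option scores lower
```

Even this toy version makes the ethical problem obvious: someone has to choose the factors and the weights, and that choice encodes whose safety counts for how much.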

But now let's get into what sci-fi has led people to think of when they hear about dangerous AI: Terminator.

Russian arms manufacturers have declared their intention to build AI-powered missiles. The ultimate goal is for these war machines to select and eliminate targets without any human intervention. In no uncertain terms, we are headed in this direction. The scary truth is that once one country begins developing AI weaponry, other countries have no choice but to follow suit or be left woefully ill-equipped in a conflict. AI weaponry will likely be the next arms race, and it is beginning now.


And to complete the sci-fi Terminator picture, these AI agents would be able to speak to one another in their own language, ensuring that we humans have no way to figure out their plans. The frightening thing is that this has already happened, without the programmers even intending it. In the summer of 2017, Facebook developers built two AI bots that were designed to learn how to negotiate with one another. That was the only objective. The bots were free to figure out how to negotiate any way they saw fit. Through iteration after iteration, the bots gradually stopped using English, opting instead to create their own language. This surprised the researchers involved, and they subsequently hard-coded English as the required language for negotiating.
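The reported fix amounts to making sure that drifting away from human English stops paying off. The sketch below is a toy illustration of that idea, not Facebook's actual code; the fluency proxy and the reward weighting are assumptions made for the example.

```python
# Toy illustration only: not Facebook's actual approach. The fluency proxy and
# the reward weighting below are assumptions made for the sake of the example.

def english_fluency(utterance: str, english_vocab: set[str]) -> float:
    """Crude proxy for 'sounds like English': fraction of recognizable tokens."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    return sum(token in english_vocab for token in tokens) / len(tokens)

def shaped_reward(task_reward: float, utterance: str,
                  english_vocab: set[str], weight: float = 2.0) -> float:
    """Negotiation payoff, penalized when an utterance drifts away from English."""
    return task_reward - weight * (1.0 - english_fluency(utterance, english_vocab))

vocab = {"i", "want", "the", "ball", "and", "two", "hats"}
# Same negotiation payoff, but invented bot-speak earns less once language is shaped:
print(shaped_reward(5.0, "i want the ball and two hats", vocab))  # 5.0
print(shaped_reward(5.0, "ballz ballz i i i want want", vocab))   # ~4.43
```

In a real training loop, the fluency term would more plausibly come from a language model trained on human dialogue; the point is only that staying intelligible to us has to be part of what the agents are rewarded for.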

As one might expect, once one media outlet picked up this story, many others soon followed. But this example does illustrate a serious concern with AI development: Even the brightest minds can’t predict how it will evolve.

Technology companies have realized that they must invest in AI if they want to be competitive in the future, so many are running into the dark without fully preparing for the hazards. This is endemic among companies with the Silicon Valley ethos of failing fast and often. But as Tay clearly shows, even well-intentioned companies with seemingly innocent ideas can end up developing a dangerous AI, albeit a danger that was only rhetorical in Tay's case. This has led some companies, governments, and research groups to establish internal teams working to mitigate such problems. And it will take the combined effort of academic, public, and private leaders to face these new challenges.

We are now entering the era when ethical philosophy will merge with technology.