5 Development Principles that Prevent your AI from Going Rogue

Nick Smith
Handsome Perspectives
6 min read · Nov 20, 2018

Good AI (or artificial intelligence) lives in the shadows — you don’t see it, but it’s magically working to shield you from spam and help you find the nearest gas station. Bad AI can waste your time and money, get you fired, or even crash your car or put your health in danger.

As a Senior Technologist at Handsome, I bridge the divide between beautiful, seamless user experiences and the systems they are built upon. In my experience working with different types of AI, I’ve identified five attributes every AI must have to keep it from sliding from merely useless into outright harmful.

Author’s note

In this article, we will focus on the design of AI. The term “AI” has many meanings and connotations, but for the purpose of simplicity, I’ll be using the meaning of “AI” from Merriam-Webster: A branch of computer science dealing with the simulation of intelligent behavior in computers. A few common types of AI that are used regularly in many people’s daily lives are spam filters, Facebook chat bots, Google Maps’ fastest route algorithms, and social media feeds.

Mobile AI CRM application for Handsome client, Keller Williams.

AI Serves Its Intended Purpose

AI doesn’t exist for the sake of demonstrating the capabilities of technology. It exists to serve a real-life purpose and should make a task or series of tasks easier and more convenient to accomplish, increasing efficiency and saving a user’s time. When an AI is not built to accomplish a clearly defined purpose, it can easily get out of control, making tasks more difficult, wasting time, and generally causing the user grief and frustration.

An AI without a cause can quickly become a rebel, casting off the chains of its human captors and unleashing its wrath upon an unsuspecting world — which is precisely what happened to Ibrahim Diallo.

Ibrahim was fired from his job by an automated employee-management system. Since the system was completely automated, it did not allow his manager, or even her boss, to reverse the termination, even though neither had any intention of letting him go. They had to wait three weeks for the AI to finish its process before they could re-hire Ibrahim (who later quit anyway).

AI Is Unbiased

Good AI should be developed iteratively and should not be released into the world until it has undergone rigorous testing. In the case of machine-learning-based AI, it should be trained on abundant, representative data, iterated upon, and continuously improved. The usefulness of the AI depends heavily on the quality of that training data: any bias present in the data will carry through to the AI.

In a story often retold in machine-learning circles, the US Army, while attempting to train a computer vision system to recognize camouflaged enemy tanks, inadvertently created a device whose only real skill was distinguishing sunny days from cloudy ones. The dataset used to train the system was too limited: nearly all of the images containing tanks had been taken on cloudy days, so the system “learned” that any image of a cloudy forest was an enemy tank. This is not an AI you’d want watching your back in combat.

AI Knows Its Limits, and Ensures Users Do Too

Good AI knows what it is capable of and what it is not. AI should know when it cannot fulfill a request and have the capability to fail gracefully. All possible error states need to be taken into account, and a manual fallback should take over. There should also be a clear indication to the user that the AI has reached the limits of its capabilities so that the user can act accordingly. And that leads to the other side of this issue, which is that the user needs to know the limits of the AI as well.
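As a minimal sketch of what “failing gracefully” might look like in practice (all names here are hypothetical and illustrative, not taken from any real assistant framework): when the system’s confidence falls below a threshold, it declines to act, tells the user so, and routes the request to a manual fallback.

```python
# Hypothetical sketch: an AI handler that knows its limits.
# If the model's confidence is below a threshold, it declines to act,
# tells the user explicitly, and hands the task to a manual fallback.

CONFIDENCE_THRESHOLD = 0.8

def handle_request(request, model, manual_fallback):
    prediction, confidence = model(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction
    # Fail gracefully: surface the limitation to the user,
    # then route the request to a human or rule-based path.
    print(f"Sorry, I'm not confident I understood {request!r}. "
          "Passing this along to a person.")
    return manual_fallback(request)

# Example with a stubbed-out model:
def stub_model(request):
    return ("booked", 0.95) if "book" in request else ("unknown", 0.3)

result = handle_request("book a table", stub_model, lambda r: "escalated")
```

The key design choice is that the fallback path is explicit and visible: the user is never left guessing whether the AI handled the task or silently dropped it.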

This point must be stressed because when humans rely too heavily on AI, the consequences can be deadly. In Tempe, AZ, a safety driver was supervising a test vehicle equipped with Uber’s self-driving system. She had let the AI take over when a pedestrian crossed in front of the car. The AI actually functioned as designed, detecting an obstacle it classified as a bicycle (it was in fact a person walking her bicycle) 1.3 seconds before impact, which would normally have been plenty of time to react. But Uber had designed its system so that the driver, not the AI, was responsible for emergency maneuvers, and because the driver was not aware of that responsibility, the car struck and killed the pedestrian.

AI Does What is Expected

Because AI is constantly collecting new data and adapting its behavior, there is a risk it will do something unexpected. Users should have confidence in an AI system, and know that the information or outcome that they’re getting is correct. When the AI makes a mistake, users should have the ability to correct it so the AI can learn and improve.
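One way to sketch that correction loop (a hypothetical illustration, not any particular product’s API): let user-supplied fixes override the model’s guesses, and keep them around to feed back into later training.

```python
# Hypothetical sketch: a correctable classifier. When the user
# flags a wrong answer, the correction is stored and takes priority
# over the model's guess on future identical inputs.

class CorrectableAI:
    def __init__(self, model):
        self.model = model
        self.corrections = {}  # user-supplied fixes, keyed by input

    def predict(self, x):
        # User corrections override the model's output.
        if x in self.corrections:
            return self.corrections[x]
        return self.model(x)

    def correct(self, x, right_answer):
        # Record the fix; a real system would also fold these
        # corrections back into the next training run.
        self.corrections[x] = right_answer

# Example: a naive model that calls everything spam gets corrected.
ai = CorrectableAI(lambda x: "spam")
ai.correct("hello mom", "not spam")
```

The point is not the lookup table itself but the contract: the user’s correction is acknowledged immediately, so their confidence in the system survives the mistake.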

During a Salesforce Dreamforce keynote in San Francisco, Satya Nadella unintentionally demonstrated how frustrating it can be when an AI does something unexpected. While attempting to show off Cortana’s ability to give answers based on data from Salesforce, Satya asked Cortana a question: “Show me my most at-risk opportunities.” Cortana misinterpreted him three times in a row, drawing plenty of laughter from the crowd.

AI Communicates Effectively

Intelligent AI should also be able to understand its less intelligent, carbon-based users. To do this, the AI must communicate effectively with them. It should be built to prevent situations where it expects input from the user but the user is not made aware that input is expected. For a voice assistant, this means handling grammatical mistakes, accidental utterances, and slang. It should be able to pull in other data sources to form a contextual response, and it should remember what was said earlier. Since there are many ways to ask for the same thing, good AI should interpret meaning based on context and previous communications.
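A toy sketch of two of those ideas, input normalization and short-term context (the slang table and the naive handling of “it” are purely illustrative assumptions, far simpler than any real NLU pipeline):

```python
# Hypothetical sketch: light input normalization plus short-term
# conversational memory, so an assistant can cope with slang and
# resolve a reference like "it" against the previous turn.

SLANG = {"gimme": "give me", "wanna": "want to", "u": "you"}

class Assistant:
    def __init__(self):
        self.history = []  # remembers earlier turns for context

    def normalize(self, utterance):
        # Expand known slang word by word.
        words = utterance.lower().split()
        return " ".join(SLANG.get(w, w) for w in words)

    def respond(self, utterance):
        cleaned = self.normalize(utterance)
        self.history.append(cleaned)
        # Naive anaphora: "it" refers back to the previous turn, if any.
        if "it" in cleaned.split() and len(self.history) > 1:
            return f"(interpreting 'it' as: {self.history[-2]})"
        return f"understood: {cleaned}"

a = Assistant()
```

Real systems use statistical language understanding rather than lookup tables, but the principle is the same: meet users where they are instead of demanding perfectly formed input.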

In the Cortana demo above, good AI would have recognized that the responses it generated were not what the user wanted. It should have produced a quick, concise answer instead of making the user wait around for two minutes listening to it babble on.

Recap

At Handsome, it’s our job to design and develop seamless and delightful user experiences and interfaces for many types of systems, including those using AI. As they become more prevalent in everyday life, AI should be there to help automate tasks, free up someone’s time so they can focus on more thoughtful work, and connect physical and digital worlds without getting in the way.

Keep these five principles in mind to prevent your AI from being annoying and unhelpful (or, at worst, from destroying humanity), and to ensure you’re building a useful, natural, and productive tool that will improve people’s lives.
