Benefits and Risks of Artificial Intelligence

Thomas G. Dietterich
Jan 23, 2015


Discussions about Artificial Intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking publicly about the threat of AI to the future of humanity.

Over the last several decades, AI — computing methods for automated perception, learning, understanding, and reasoning — has become commonplace in our lives. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of routes to find the best one to take. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. AI algorithms detect faces as we take pictures with our phones and recognize the faces of individual people when we post those pictures to Facebook. Internet search engines, such as Google and Bing, rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we’ll do next. Several companies, such as Google, BMW, and Tesla, are working on cars that can drive themselves — either with partial human oversight or entirely autonomously.
Beyond the influences in our daily lives, AI techniques are playing a major role in science and medicine. AI is at work in hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are helping to find important needles in massive data haystacks. For example, AI methods have been employed recently to discover subtle interactions between medications that put patients at risk for serious side effects.

The growth of the effectiveness and ubiquity of AI methods has also stimulated thinking about the potential risks associated with advances of AI. Some comments raise the possibility of dystopian futures where AI systems become “superintelligent” and threaten the survival of humanity. It’s natural that new technologies may trigger exciting new capabilities and applications — and also generate new anxieties.

The mission of the Association for the Advancement of Artificial Intelligence is two-fold: to advance the science and technology of artificial intelligence and to promote its responsible use. The AAAI considers the potential risks of AI technology to be an important arena for investment, reflection, and activity.

One set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash. Major software projects, such as HealthCare.gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths. Verifying the behavior of software systems is challenging and critical, and much progress has been made. However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.

There is reason for optimism. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot systems and spacecraft systems is carefully tested and validated. Similar practices must be developed and applied to AI systems. One technical challenge is to guarantee that systems built automatically via statistical “machine learning” methods behave properly. Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs.
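One simple way to think about handling "surprising or confusing inputs" is to wrap a learned predictor in a guard that refuses to act on inputs far outside the data it was trained on. The sketch below is illustrative only: `GuardedModel`, the toy braking-distance predictor, and the 3-standard-deviation threshold are all assumptions for the example, not any real system's design.

```python
import statistics

class GuardedModel:
    """Wrap a learned predictor with a simple input-sanity check.

    Inputs far from the training distribution are rejected rather than
    silently producing an unreliable prediction.
    """

    def __init__(self, predict, training_inputs, max_z=3.0):
        self.predict = predict
        self.mean = statistics.fmean(training_inputs)
        self.stdev = statistics.stdev(training_inputs)
        self.max_z = max_z

    def __call__(self, x):
        # How many standard deviations is x from the training data?
        z = abs(x - self.mean) / self.stdev
        if z > self.max_z:
            # Surprising input: refuse, and defer to a human or a safe default.
            raise ValueError(
                f"input {x} is {z:.1f} standard deviations from training data")
        return self.predict(x)

# Toy "learned" model: braking distance (ft) from speed (mph), hypothetical.
model = GuardedModel(lambda speed: 0.05 * speed ** 2,
                     training_inputs=[20, 30, 40, 50, 60, 70])

print(model(55))  # in-distribution speed: prediction is returned (151.25)
```

Real systems need far more sophisticated out-of-distribution detection than a one-dimensional z-score, but the design principle is the same: a learned component should know the limits of its own competence.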

A second set of risks stems from cyberattacks: criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are no different from other software in terms of their vulnerability to cyberattack. But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of successful cyberattacks on AI systems could be much more devastating than attacks in the past. US government funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques in themselves will provide novel methods for detecting and defending against cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large-scale cyberattacks.

A third set of risks echoes the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.

This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner. An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. It should also continuously monitor itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open to feedback.
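The "get us to the airport as quickly as possible" example can be sketched as a command interpreter that checks the literal request against a reasonableness envelope and asks the human before deviating. Everything here is a hypothetical illustration: `plan_speed`, the 65 mph limit, and the confirmation callback are assumptions for the sketch, not a real vehicle interface.

```python
SPEED_LIMIT_MPH = 65  # assumed legal limit for this stretch of road

def plan_speed(requested_mph, confirm):
    """Interpret a driving request by intent, not literally.

    If the literal command falls outside what most people would judge
    reasonable, ask the human for guidance ("stop and ask for
    directions") instead of blindly obeying.
    """
    if requested_mph <= SPEED_LIMIT_MPH:
        return requested_mph
    # Literal command is abnormal: flag it and request feedback.
    if confirm(f"Requested {requested_mph} mph exceeds the "
               f"{SPEED_LIMIT_MPH} mph limit. Drive at the limit instead?"):
        return SPEED_LIMIT_MPH
    return 0  # no confirmation received: stop safely

# "As quickly as possible!" interpreted as intent, not as 300 mph:
print(plan_speed(300, confirm=lambda msg: True))  # -> 65
```

The point of the sketch is the control flow, not the numbers: the system obeys reasonable requests directly, clamps unreasonable ones only with human agreement, and defaults to a safe action when guidance is unavailable.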
Some of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems. AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. People working together with the Foldit online game were able to determine the structure of an enzyme from an AIDS-causing monkey virus in only three weeks, a feat that neither people nor computers working alone could come close to matching. Other studies have shown how the massive space of galaxies can be explored hand-in-hand by people and machines, where the tireless AI astronomer understands when it needs to occasionally reach out and tap the expertise of human astronomers.
In reality, creating real-time control systems where control needs to shift rapidly and fluidly between people and AI algorithms is difficult. Some airline accidents have occurred when pilots took over from autopilots. The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation.

AI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to address them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.

Tom Dietterich
President, AAAI

Eric Horvitz
Former President, AAAI and AAAI Strategic Planning Committee


