Serious AI Challenges that Require Our Attention

Complexity, cyberattacks and social issues are just a few concerns to address sooner, not later

MIT IDE
MIT Initiative on the Digital Economy
5 min read · Jul 14, 2017


By Irving Wladawsky-Berger

Several weeks ago, Vanity Fair published an article by NY Times columnist Maureen Dowd, Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse. “Elon Musk is famous for his futuristic gambles, but Silicon Valley’s latest rush to embrace artificial intelligence scares him,” noted Dowd. “And he thinks you should be frightened too.”

Entrepreneur and inventor Elon Musk is one of a number of world-renowned technologists and scientists who have expressed serious concerns that AI might be an existential threat to humanity, a group that includes Stephen Hawking, Ray Kurzweil and Bill Gates. But the vast majority of AI experts do not share their fears. A few months ago, Stanford University’s One Hundred Year Study on AI project published a report by a panel of experts assessing the current state of AI. Their overriding finding was that:

“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”

Just because experts conclude that — at least for the foreseeable future — AI does not pose an imminent threat to humanity, doesn’t mean that such a powerful technology isn’t accompanied by serious challenges that require our attention.

Earlier this year, the World Economic Forum published Global Risks Report 2017, its 12th annual study of major global risks. As part of the study, the WEF conducted a survey in which it asked respondents to assess both the positive benefits and negative risks of twelve emerging technologies. Artificial intelligence and robotics received the highest risk scores, as well as one of the highest benefit scores.

The WEF noted that unlike biotechnology, which also got high benefits and risks scores, AI is only lightly regulated, despite “the potential risks associated with letting greater decision-making powers move from humans to AI programmers, as well as the debate about whether and how to prepare for the possible development of machines with greater general intelligence than humans.”

A concrete case in point is predictive policing — the use of data and AI algorithms to automatically predict where crimes will take place and/or who will commit them. As explained in a recent article in Nature, “tight policing budgets are increasing demand for law-enforcement technologies. Police agencies hope to do more with less by outsourcing their evaluations of crime data to analytics and technology companies that produce predictive policing systems. These use algorithms to forecast where crimes are likely to occur and who might commit them, and to make recommendations for allocating police resources. Despite wide adoption, predictive policing is still in its infancy, open to bias and hard to evaluate…”

“Criminologists, crime analysts and police leaders are excited about the possibilities for experimentation using predictive analytics. Surveillance technologies and algorithms could test and improve police tactics or reduce officer abuses. But civil-rights and social-justice groups condemn both models. Offender-based predictions exacerbate racial biases in the criminal justice system and undermine the principle of presumed innocence. Equating locations with criminality amplifies problematic policing patterns.”
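To make the mechanics concrete, here is a minimal sketch of the place-based half of such a system: scoring map grid cells by exponentially decayed counts of past incidents. The grid cells, decay rate and data are invented for illustration; commercial systems use more elaborate, and usually proprietary, models.

```python
from collections import defaultdict

# Hypothetical incident records as (day, grid_cell) pairs. In a real system
# these come from police reports, with all the reporting bias that implies.
incidents = [
    (1, "A3"), (1, "A3"), (2, "B1"), (3, "A3"),
    (4, "C2"), (5, "B1"), (6, "A3"), (7, "B1"),
]

def hotspot_scores(incidents, today, half_life_days=7.0):
    """Score each grid cell by exponentially decayed incident counts,
    so recent reports count more than old ones."""
    decay = 0.5 ** (1.0 / half_life_days)
    scores = defaultdict(float)
    for day, cell in incidents:
        scores[cell] += decay ** (today - day)
    return scores

def flag_cells(incidents, today, k=2):
    """Return the k cells a naive system would flag for extra patrols."""
    scores = hotspot_scores(incidents, today)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(flag_cells(incidents, today=8))  # ['A3', 'B1']
```

Note that a model like this only ever sees reported incidents. Crimes that go unreported or unrecorded are invisible to it, which is exactly the data problem raised below.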

Last August, the ACLU and several other organizations issued a Statement of Concern about the increased use of predictive policing tools, “pointing to the technology’s racial biases, lack of transparency, and other deep flaws that lead to injustice, particularly for people of color… Decades of criminology research have shown that crime reports and other statistics gathered by the police primarily document law enforcement’s response to the reports they receive and situations they encounter, rather than providing a consistent or complete record of all the crimes that occur…”

“The natural tendency to rush to adopt new technologies should be resisted until a true understanding is reached as to their short- and long-term effects. Vendors must provide transparency, and the police and other users of these systems must fully and publicly inform public officials, civil society, community stakeholders, and the broader public on each of these points. Vendors must be subject to in-depth, independent, and ongoing scrutiny of their techniques, goals, and performance.”
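The feedback loop described here, in which recorded crime reflects where police look rather than where crime occurs, can be illustrated with a toy simulation. All the numbers are invented: two districts generate identical true crime, but patrols are allocated according to past recorded incidents.

```python
import random

random.seed(42)

# Two districts with IDENTICAL true crime rates.
TRUE_CRIMES_PER_WEEK = 50
recorded = {"District A": 11, "District B": 10}  # a one-incident head start

for week in range(52):
    # Naive "predictive" allocation: send most patrols wherever the data
    # says crime is, i.e. to the district with more recorded incidents.
    leader = max(recorded, key=recorded.get)
    patrol_share = {d: (0.8 if d == leader else 0.2) for d in recorded}
    for d in recorded:
        # More patrol presence means more of the true crimes get recorded.
        p_observed = 0.5 * patrol_share[d]
        recorded[d] += sum(random.random() < p_observed
                           for _ in range(TRUE_CRIMES_PER_WEEK))

print(recorded)  # District A ends with roughly 4x District B's record,
                 # even though true crime in the two districts was identical.
```

The early head start compounds week after week, and the data ends up “confirming” the very allocation that produced it.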

The Nature article also calls for caution in the use of predictive policing. If properly deployed, the use of data and AI software could be of great help in improving policing and public safety. But, “sophisticated predictive systems will not deliver police reform without regulatory and institutional changes. Checks and balances are needed to mitigate police discretionary power. We should be wary of relying on commercial products that can have unanticipated and adverse effects on civil rights and social justice.”

Transparency is a major step in the right direction. One way or another, we should be able to understand the reasons for the recommendations and actions of AI systems.
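As a sketch of what such transparency could look like, a system whose risk score is a simple weighted sum can report exactly how much each input contributed to a recommendation. The feature names and weights below are hypothetical, chosen only to show the pattern.

```python
# A minimal "explain yourself" interface for a linear risk scorer.
WEIGHTS = {
    "recent_incidents_nearby": 0.6,
    "days_since_last_call": -0.2,
    "foot_traffic_index": 0.3,
}

def score_with_explanation(features):
    """Return a risk score plus a per-feature breakdown of how it was reached."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation({
    "recent_incidents_nearby": 4,
    "days_since_last_call": 10,
    "foot_traffic_index": 2,
})
print(f"score = {score:.1f}")                     # score = 1.0
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.1f}")                  # largest drivers first
```

Complex, opaque models make this kind of breakdown much harder to produce, which is one argument for favoring simpler, auditable systems in high-stakes settings.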

Several weeks ago, an article in Quartz described the approach taken by CivicScape, a predictive-policing startup that has open-sourced its code and data by publishing them on GitHub, along with a variety of documents detailing how its algorithms interpret police data to foresee crimes.

One of the best discussions of the challenges of bleeding-edge AI systems is a 2015 article on the Benefits and Risks of Artificial Intelligence by Tom Dietterich and Eric Horvitz — former presidents of the Association for the Advancement of AI. They listed three major risks that we must pay close attention to:

  • Complexity of AI software: “[T]he growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.”
  • Cyberattacks: “AI algorithms are no different from other software in terms of their vulnerability to cyberattack.… Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large scale cyberattacks.”
  • The Sorcerer’s Apprentice: We must ensure that our AI systems do what we want them to do. “In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and ‘ask for directions’ — and always be open for feedback.” A minimal sketch of such a stop-and-ask check appears after this list.
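
One concrete reading of knowing when to “ask for directions” is a decision rule that acts autonomously only above a confidence threshold and escalates to a person otherwise. The threshold and probabilities below are placeholders; the point is the control-flow pattern, not the numbers.

```python
def act_or_defer(action_probs, threshold=0.9):
    """Carry out the model's preferred action only when it is confident;
    otherwise stop and hand the decision to a person, with the model's view."""
    best = max(action_probs, key=action_probs.get)
    if action_probs[best] >= threshold:
        return ("act", best)
    return ("defer_to_human", action_probs)  # stop and ask for directions

# A confident prediction is executed; an uncertain one is escalated.
print(act_or_defer({"brake": 0.97, "swerve": 0.03}))  # ('act', 'brake')
print(act_or_defer({"brake": 0.55, "swerve": 0.45}))  # ('defer_to_human', ...)
```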

A number of initiatives have recently been organized to address these and other long-term AI challenges, including the aforementioned One Hundred Year Study on AI, the Future of Life Institute and the Partnership on AI. Hopefully, as has been the case with other powerful technologies in the past, such efforts will help ensure that these various risks are properly addressed, and that our increasingly capable AI systems will have a major beneficial impact on the economy, society, and our personal lives.

Irving Wladawsky-Berger is Chairman Emeritus of the IBM Academy of Technology, and a Visiting Professor of Engineering Systems at MIT, where he is involved in multi-disciplinary research and teaching activities focused on how information technologies are helping transform business organizations and the institutions of society.

This blog first appeared June 26.
