Perspectives on AI

Astro Teller
Sep 1, 2016


In 1997, when I was a graduate student studying computer science, I published my first novel, a parable about the dangers of approaching technology in general — and artificial intelligence (AI) in particular — with fear and hostility. Having done a Ph.D. in AI in the '90s, I've been working for over twenty years to help people understand AI and to calm the dystopian hysteria that has wormed its way into discussions about the future of AI and robotics. Only by clearing away panicky rhetoric can we have a productive conversation about how those fields can be explored for the maximum benefit of humanity.

Over the ensuing decades, we have seen the rapid evolution of AI (e.g. self-driving cars, Google Brain), but the public rhetoric of fear hasn't changed. To make a more sanguine take on the situation fun to consume, I painted myself blue, and my wife, Danielle Teller, and I gave a talk at the first annual Silicon Valley Comic Con titled "AI vs. SuperBabies". It's a satirical look at the doomsday scenarios for both artificial intelligence and genetic engineering that get in the way of logical conversation.

Debating AI vs. SuperBabies at Silicon Valley Comic Con

I also agreed to work on a project called the AI100, a panel of 17 AI experts from academia and industry tasked with kicking off a 100-year study of the social impacts of artificial intelligence. The goal of the project is neither to whitewash the challenges surrounding AI nor to let those challenges blind society to opportunities for progress, but to dispassionately assess the current state of the technology and predict future developments. After about nine months of work, we just released a report detailing our balanced assessment of the state of the field.

Rather than summarize the document (which is recommended reading for those interested in the trajectory and social impact of AI), I thought I would share some higher-level observations about the process of creating such documents and the thinking behind the conclusions drawn by our committee.

First, let me say that it was a pleasure to work with this distinguished group and, somewhat to my surprise, the document did not turn out to be a middle ground between warring factions. Although opinions differed as to how to best capture the group’s ideas, there was consensus about the current state of the field of AI and likely future directions it will take. That such a wide range of experts agree about the probable impacts of AI on society adds considerable weight to the report.

One of the big takeaways from the report is that in the medium term (the next two decades), our discussions should focus on the social impact of increasingly dynamic technologies, not primarily on self-aware AI, since the emergence of something self-aware, self-replicating, and self-improving is considerably less likely.

Members of the panel agreed that, on balance, we will do more harm than good to humanity by allowing fear-mongering to drive how we build AI, how AI interfaces with society, and how we regulate it. On that last point (regulation), it is imperative that the basics of AI (what it is, how it works, and what it can and can't do) become critical knowledge for the government of any high-functioning developed nation. Experts outside of government saying "trust us" to the governments of the world does not make for a productive dialogue.

Perhaps most importantly, we can’t have a balanced discussion about AI if we focus only on scary hypotheticals and fail to address the ways in which AI is likely to dramatically benefit humanity. The list includes but is by no means limited to the following:

  • Transportation of people and goods will be made more efficient, safer, and less damaging to the environment with the adoption of automation and possibly more sophisticated forms of AI and machine learning. We will be able to manage the movement of goods more efficiently, matching supply with demand to reduce waste and scarcity.
  • AI could mitigate the effects of climate change by opening up new opportunities for clean power generation or by monitoring changing ecosystems and recommending interventions.
  • In educational settings, artificial intelligence could address the individual needs of students, tailoring the style and pace of instruction.
  • In medicine, AI and robotics could help doctors diagnose and treat conditions at lower cost and with greater accuracy.
  • Assistive care could be provided by robots and AI to differently-abled people, the aged, or anyone who may require physical assistance.

This list includes areas of life where the impact of AI can already be felt, or is likely to be felt soon given the work currently being done. There will also be new discoveries that cannot yet be predicted. Turning the power of machine learning on unsolved problems in science will accelerate the pace of scientific discovery, unlocking new technologies or sectors that haven't yet been fully conceived.

Protecting human dignity, including the right to privacy, and providing new opportunities to live fulfilling lives will be important goals to achieve as artificial intelligence becomes more commonplace. As industries and sectors evolve and begin to incorporate artificial intelligence and automation over the coming years, they need to be allowed the space and opportunity to demonstrate that these goals can be met without early, prescriptive rules or policies that risk stifling or predetermining the kinds of technologies and techniques available to innovators. The recent NTIA multi-stakeholder process to define privacy best practices for unmanned aircraft systems is a good example of how governments can create a space for best practices to develop organically without pre-defining a specific outcome. Technology in these sectors will evolve quickly and could itself present novel ways of protecting consumer privacy and dignity.

In sum, artificial intelligence and related technologies like robotics and automation will play an important role in solving some of the world's big challenges. Encouraging more research into the opportunities and implications of adopting AI within specific economic, industrial, or social sectors is a useful way to produce tangible guidance for how governments, innovators, and other stakeholders can support that integration quickly and responsibly.

All new technologies present the possibility for misuse. AI is no different. And even technologies that are a clear net positive for humanity have negative side effects that should be understood and dealt with thoughtfully. AI will be no different in this respect either. But in my lifetime I have seen no technology with greater promise for making people smarter, happier, more productive, and more connected than artificial intelligence.
