Innovating With an Eye Toward Safer AI
Your commitment to the greater good must exceed the lure of discovery.
A couple of years back, I attended a fascinating conference in Toronto about the applications of artificial intelligence (AI) in business. Delivering the keynote was Dr. Geoffrey West, an eminent physicist and author of the thought-provoking book “Scale.”
He explained that innovation takes place in waves. Moreover, across human history, successive waves of innovation have occurred at exponentially faster rates. While all this progress is exciting, the flip side is that newer innovations also replace previous technologies at breakneck speed.
In other words, the pace of evolution and that of the resulting extinction are the same.
So why didn’t the prospect of technologies becoming obsolete faster and faster bother us before? It worries us now because this time, the technology at risk of extinction is none other than our biological intelligence: the very essence that makes us Homo sapiens. This time, the technology that will become obsolete is us.
An existential tension
For the first time, the heart of the technological innovation we are driving is the cloning of our minds: the very minds that proved crucial in our fight to the top of the food chain.
It may be true that AI cannot replace human beings because it cannot imitate our subtle capabilities.
Yet, in its advanced forms, it has the potential to take on a large number and variety of complex tasks over which human brains currently hold a monopoly.
It may very well become a tool for destruction if we underestimate what it can do and if powerful entities exploit our naïveté and misuse it to serve their selfish interests.
This raises the question: are we on a fool’s errand, enthusiastically developing a technology capable of making our species obsolete, or even extinct?
Maybe, maybe not.
Why we have always underestimated what AI can achieve, and how quickly.
In the 1950s and 1960s, neural networks were a shooting star. As reported by the New York Times, for decades after they were first conceptualized, neural networks were written off, owing largely to the dismal performance of the Perceptron.
The Perceptron was a single-layer artificial neural network that Frank Rosenblatt, a Cornell psychologist, developed in the late 1950s for the US Navy. Unfortunately, the Perceptron could not reliably perform anything beyond the most basic tasks. Marvin Minsky, a pioneer of artificial intelligence at MIT, co-authored a book, Perceptrons, that proved the Perceptron, as designed, would never be able to solve even deceptively simple problems, such as the XOR function. The case for neural networks was closed.
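To make the limitation concrete, here is a minimal sketch, purely illustrative and not drawn from the original work, of a single-layer perceptron trained with Rosenblatt’s update rule on the XOR function (output 1 when exactly one input is 1). Because no single straight line can separate XOR’s classes, the weights never converge:

```python
import numpy as np

# XOR truth table: these four points cannot be split by one straight line.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

w = np.zeros(2)  # a single layer: one weight per input, plus a bias
b = 0.0
for epoch in range(100):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Rosenblatt's rule: nudge the boundary toward misclassified points.
        w += (target - pred) * xi
        b += (target - pred)

preds = (X @ w + b > 0).astype(int)
print(preds, "accuracy:", (preds == y).mean())  # cycles forever; never 1.0
```

No amount of extra training helps: the update rule simply cycles through the same mistakes.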
Geoffrey Hinton, widely known as the Godfather of AI, who now works at Google Brain, pioneered multi-layer neural networks, or deep learning, with work stretching back to the 1980s. A single-layer neural network can only receive inputs and process them once to arrive at an output. In simpler terms, a single-layer neural network cannot “build upon” the insights it has created or the patterns it has recognized to generate a more sophisticated output.
Having multiple layers brings the decision-making process of neural networks closer to the way biological brains process information and decide. Simply put, multi-layer neural networks allow mathematical equations to work in stages: the outputs of the equations in one stage (or layer) are combined to form the inputs to the equations in the next stage.
Through Hinton’s work, neural networks could now combine and process the outputs of intermediate layers to arrive at a final output. Insights could be stacked upon one another to produce more nuanced decisions. The creation of AI and its widespread application seemed unstoppable.
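To see how stacking layers changes the picture, here is a companion sketch, again only illustrative, with hyperparameters chosen arbitrarily rather than taken from Hinton’s work. A tiny two-layer network, one hidden layer of four sigmoid units trained by gradient descent, learns the same XOR function the perceptron above could not:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR again

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two stages of equations: layer 1 maps 2 inputs to 4 hidden units,
# layer 2 combines those 4 intermediate outputs into 1 final output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(20_000):
    # Forward pass: the hidden layer's outputs become the next layer's
    # inputs, the "stacking" of insights described above.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of mean squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The specific numbers matter less than the structure: the second stage gets to combine the first stage’s intermediate outputs, which is precisely the capability the single-layer Perceptron lacked.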
Despite all the promise, the lack of sufficient, cost-effective processing power stopped this work in its tracks. As University of Toronto economists Ajay Agrawal, Avi Goldfarb, and Joshua Gans explain in their book Prediction Machines, it was only three decades later that processors became advanced enough to analyze and assimilate massive amounts of data, making large-scale prediction under uncertainty possible with cost-effective models. Cheap, ubiquitous processing power, a necessary prerequisite for neural networks to function, finally became available and resurrected AI.
We still grossly underestimate AI.
When Alan Turing first proposed an early concept of artificial neural networks in his 1948 paper Intelligent Machinery, few would have entertained the notion that, less than a century later, neural networks would become the mainstream mechanism behind most of the technology products we use day to day. No one would have guessed that the world champions of strategy games such as chess and Go would be computers running deep neural networks, not humans.
Even as we develop self-driving vehicles, allow algorithms to trade millions of dollars, and entrust our bodies to robotic surgeons, the layperson still regards these leaps of machine intelligence as exceptions rather than the norm.
Many still believe (or hope) that machines will only do tasks requiring basic intelligence, such as identifying faces and predicting popular music. Many believe we will indefinitely retain the authority to determine what machines will and will not do. Yet, if history is any guide, we may be grossly underestimating what AI can achieve, and how quickly.
Powerful interests can access and misuse AI.
This risk is exacerbated when entities without sufficient accountability gain access to cutting-edge AI. When AI falls into the hands of people and regimes who do not share the values of most well-functioning societies, they could destabilize social order without the world noticing.
Yoshua Bengio, widely known as a father of deep learning, has been very vocal about the potential for misuse of AI. In an interview with MIT Technology Review, he stated that he stands firmly against the use of AI for military purposes. A letter on autonomous weapons, which Bengio signed alongside other eminent robotics and AI scientists, states:
“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”
The primary reason such misuse can go unnoticed is complexity. The complexity of these models makes their decision-making process a black box that few can look inside and nearly no one can understand. As a result, manipulation of such models cannot be detected as easily as manipulation of a simpler decision-making process.
The question is: can we leave it to the scientists alone to ensure that AI is not misused? And who will hold the scientists responsible? Who guards the guards?
Quis custodiet ipsos custodes?
Perhaps AI scientists should think and act like Heisenberg.
During World War II, Werner Heisenberg, a founding father of quantum mechanics, was put in charge of developing a nuclear bomb for Germany before the Allied forces could. Despite having some of the finest resources and scientists, Germany lost this arms race to the United States. Under the now controversial but brilliant scientific leadership of J. Robert Oppenheimer, the Manhattan Project developed the nuclear bomb. How the war turned out is history.
Yet there is a lesser-known war that Heisenberg fought and won. According to accounts in the book Heisenberg’s War, he deliberately thwarted Nazi Germany’s attempts to develop the nuclear bomb. By putting human well-being above his drive for scientific discovery, he may have saved the world from a nuclear-armed Nazi regime, even as the Manhattan Project built the bomb elsewhere.
After all, what is the difference between a brilliant scientist and a wise one, if not the ability to weigh the impact of one’s work on fellow living beings against the seduction of cold intellectual discovery?
Final thoughts — Who will you choose to be?
The question we all then find ourselves asking is: should we be developing such technologies at all? There is no correct answer to this question, especially not a politically correct one.
However, it is these very questions without clear answers that make us look inward. Who will you choose to be — Heisenberg or Oppenheimer?