Artificial Intelligence: Jarvis of today or Ultron of tomorrow?

Artificial Intelligence has become a key behind-the-scenes component of many aspects of our day-to-day lives, from the virtual personal assistants such as Siri, Alexa, and Google Assistant to the suggestions from our favorite music and TV subscription services. The promise of AI has lured many into attempting to harness it for social benefit. But there are also concerns about its potential misuse.

It is already an important consideration when programmers create AI systems with specific functions such as self-driving cars. Today the so-called narrow AI that is designed to do one specific task is not capable of acting independently. It is designed with the sole purpose of enabling humans to complete their task more efficiently.

Dr. Stuart Russell is one of AI’s true pioneers and has been at the forefront of the field for decades. According to Russell, while these applications and expected developments in AI are enormously exciting; others such as the development of autonomous weapons and the replacement of humans and economic roles may be deeply troubling.

The danger with the future of AI is the general AI that can perform many different tasks very well. It is a computer mind that improves, learns, and thinks like a human or even exceed the level of human intelligence. A hypothetical scenario where AI becomes the dominant form of intelligence on Earth, with computer or robots effectively taking the control of the planet away from the human species, has become a significant point of controversy in the public imagination. The worry arises from the possibility that machines may become smarter across the board, that they will develop general purpose capabilities. The possible risks from building systems that are more intelligent than us are not immediate but the need to start thinking about how to keep those systems under control and to make sure that the behaviors they produce, the decisions they make, are beneficial to us. We need to start doing that research now.

In the history of nuclear physics, there was a very famous occasion when the leading nuclear physicist Ernest Rutherford said that extracting energy from atoms was impossible and would always remain impossible. The very next day, Leo Szilard invented the nuclear chain reaction and within a few months designed the first nuclear bomb. So, sometimes it can go from ‘never’ and ‘impossible’ to happening in less than 24 hours.

Unlike humans, many systems with AIs are unable to understand the consequences of their actions. So, the relevant question is, what level of control should be given to them, or whether they at all should be permitted to act autonomously in certain situations. In view of the recent warnings from researchers and entrepreneurs that artificial intelligence may become too smart, major players in the technology field are thinking about preserving human control. One such method is the development of an AI Kill Switch. This would be a technology that prevents AI systems from taking control of their own destiny. The concept of an AI Kill Switch has already been put forward by many prominent experts in the field of artificial intelligence. It was first raised by AI experts in 2013 when DeepMind technology demonstrated the ability of their computer program AlphaGo, to beat one of the world’s best Go players. Deep learning algorithms draw powerful insights from quantities of data typically beyond human comprehension. But what if machines become so superior in intelligence that humans lose control.

We’ve already seen a lot of progress on brain machine interfaces that allow, for example, someone who’s completely paralyzed to control a robot arm to pick up a cup of coffee and have a drink. That’s done by direct connection of electrodes into neural tissue. The amazing thing about that is that we don’t understand the signals that the brain uses to control its effectors: its arms and legs and so on. Basically, we leave it up to the brain to figure out what signals need to be sent to this robot arm to have it do what it does. It’s not a conscious process.

Just from common sense, if you’re a gorilla, are you happy that the human race came along and they’re more intelligent than you? So, there’s a threat that having things smarter than us could potentially be a risk. The particular risk of having systems smarter than us comes from the fact that when we give a very intelligent system an objective, it’s going to carry it out. It’s not going to want to be turned off because if we turn it off it can’t achieve the objective, we gave it. So, we’re essentially setting up a chess match between the human race and machines that are more intelligent than us and we know what happens when we play chess against machines.

Researchers acknowledge that robots may not always behave optimally but they are hopeful that humans should ultimately be in charge. Therefore, some researchers are calling for an international effort to study the feasibility of an AI Kill Switch we discussed earlier. According to them, future intelligence machines should be coded with a kill switch to prevent them from going rogue. An AI Kill Switch is a mechanism for restricting machine intelligence, by which humans who remain in control can intervene to override the decision-making process.

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a mis-programmed super intelligence, which by definition is smarter than humans in solving practical problems that it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the super intelligence, therefore, decides to resist shutdown and modification, it would be smart enough to outwit its human operators and other efforts to shut it down.

Russell postulates that it might be wise to build oracles as precursors to a super intelligent AI. An oracle is a hypothetical AI designed to answer questions, but it’s prevented from gaining any goals or sub-goals that involve modifying the world beyond its limited environment. The oracle could tell humans how to successfully build a super intelligent AI, and perhaps provide answers to difficult moral and philosophical problems. The oracle may also be used to determine how human values translate into an engineering specification for superintelligence. This would make it possible to know in advance whether a proposed super intelligence design would be safe or unsafe to build. Russell has proposed a novel solution, a new human computer relationship to solve the problem of super intelligence. The way he thinks about it is that everything good we have in our lives, everything that civilization consists of, is the result of our intelligence.

So, if AI, as seems to be happening, can amplify our intelligence, can provide tools that make this world in effect much more intelligent than it has been, then we could be talking about a golden age for humanity with possibly the elimination of disease, poverty, solving the climate change problem; all being facilitated using this technology. The upside is very great and that’s the reason why we need to make sure that the downside doesn’t occur.

Mayank Saroha is a Business Consultant for Tata Consultancy Services in the India, Middle East & Africa region and a part of TCS’ Strategic Leadership Program. He graduated from IIM Bangalore (MBA — Cohort of ‘22).

--

--

Mayank Saroha
Communications & Media Industry —A Futuristic Outlook

I'm a Business Consultant at TCS. Inclined towards sports, adventure and occasional travel. If you're interested in movies, then you're a part of my clan.