Ernest Davis: “I can’t imagine an AI having to make a decision to go to war”

--

We had the chance to welcome Ernest Davis, Professor of Computer Science at NYU and co-author, with Gary Marcus, of Rebooting AI: Building Artificial Intelligence We Can Trust (2019), on the occasion of the 40th edition of ASDN. The book examines deep learning, the dominant approach to AI, its many shortcomings, and ways to move beyond it.

AI has had a number of marquee successes, that is to say highly visible successes (and disappointments) that have caused ripples in the public sphere. None of this has dampened the hype around AI that permeates public discourse.

Ernest Davis and Gary Marcus, however, warn of the dangers of placing excessive trust in AI technologies and shed light on their limits: data hunger, opacity, lack of knowledge of the world, and so on. So how can we build a reliable, trustworthy AI? Ernest Davis sketches scenarios in which players in the technology sector show common sense and draw on the resources of the human mind, so that innovation rests on sound engineering principles. We discussed all of this with him.

You are calling for deep understanding rather than deep learning: can you elaborate on this opposition?

First, “deep” in deep learning just refers to having a lot of layers. The original neural networks were limited to two or three layers of neurons. Technical improvements in hardware and software have made deep-learning systems much more powerful; they now have hundreds of layers. They are deep only in that mathematical sense.
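To make that concrete, here is a minimal PyTorch sketch (the framework and the layer sizes are illustrative choices, not something Davis specifies): the only difference between a shallow network and a deep one is how many layers are stacked.

```python
import torch.nn as nn

# An early-style shallow network: just two layers of weights.
shallow = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# A "deep" network is the same idea stacked many times over;
# modern systems routinely use hundreds of such layers.
blocks = []
for _ in range(100):
    blocks += [nn.Linear(128, 128), nn.ReLU()]
deep = nn.Sequential(nn.Linear(784, 128), *blocks, nn.Linear(128, 10))
```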

Deep understanding is a real grasp of how the world works, of how people interact, of how events are structured in time and how objects are structured in space. Deep-learning systems lack that knowledge of the world to a startling degree. There has been a lot of publicity recently about GPT-3 and other language-modelling systems, which generate very plausible-looking text for pages and pages, but they do not really understand a word of what they are producing. So this is the gap we identify in deep learning, and we argue that AI research should focus on closing it.
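That gap is easy to observe first-hand. GPT-3 itself sits behind a commercial API, but the freely available GPT-2, via Hugging Face’s transformers library, shows the same fluent-but-ungrounded behavior in a few lines (the model choice and prompt here are illustrative stand-ins, not from the interview):

```python
from transformers import pipeline

# Load a small, publicly available language model as a stand-in for GPT-3.
generator = pipeline("text-generation", model="gpt2")

out = generator("The trophy would not fit in the suitcase because",
                max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
# The continuation is usually grammatical, plausible-sounding prose,
# but nothing in the model checks it against how trophies and
# suitcases actually behave in the world.
```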

How do you explain the fascination with deep learning?

The reason deep learning has generated so much excitement is that it has been enormously successful in several domains. AI systems that don’t incorporate some degree of deep learning are very much the exception rather than the rule, and many AI systems are almost entirely powered by deep learning. It works much more effectively than any other technique we currently have.

Will the reality of AI someday meet the fantastic expectations we have of it?

The book had two purposes. The first was to propose a realistic view of where AI is now. The second was to deflate expectations of where it is likely to be in the very near future, let’s say in the next ten years.

And let me say that it is very difficult to predict what will be accomplished, and I would not have any great confidence in predictions about the future. Some extreme claims have been made for AI, particularly regarding its ability to solve humanity’s great problems. This seems way off the mark, because AI is not God, and it’s not going to be God. These problems are structural features of our society and of our political system; AI cannot fix them. I would not venture to predict what the capacities of AI will be in 50 years or how far they will take us. It seems to us that there is a great deal to be gained from AI research pursued with realistic expectations and a realistic timeline, and that the way to get beyond the limitations of deep learning lies along those lines.

Your analysis suggests that an AI we can trust should depend on common sense. But wouldn’t that be paradoxical? Can you give us some examples of how we can approach common sense with machines?

Well, that’s a very tough problem. For a very long time, few people were interested in this approach, and it has lately become something of a hot topic. The answer, I think, is going to have to be some combination of learning from all kinds of sources: from interacting with the world, from videos, from text, from knowledge collected from people… I certainly do not know what the solution to that problem is going to be. But I think that until we solve it, we’re not going to have very powerful AI.

Would you say that we will not be able to build an AI we can trust if we keep having such huge expectations?

There certainly exist AI programs that deal shallowly with broad subject matter. Translation programs will translate texts about whatever you want, but they can’t do anything except translate the text. Another example: the robots on Mars are very limited in what they can do, yet they necessarily operate on their own to a large degree and deal with whatever they have to confront, which, on Mars, is an unlimited and unpredictable collection of things. So I don’t see that AI is necessarily limited either in tasks or in subject matter. We don’t yet have any AI programs, that I know of, that can handle and carry out lots of tasks with broad capacity across various domains, but I don’t see an inherent reason why we could not get there. I’m optimistic. Until it’s done, you cannot be sure whether it is doable. And you can never be sure it is not doable.

In the long term, should AI systems be banned from specific roles where they would have to make critical decisions, such as warfare situations or critical hospital facilities?

One has to evaluate each situation on its own. In a hospital, machines might be able to act more rapidly than doctors. Pacemakers are not AI-driven, but they are, in a way, constantly making decisions. However, I can’t imagine an AI having to make a decision to go to war. In general, you have to consider each situation by itself, figure out what the trade-offs are and how reliable the AI system is. That assessment helps decide whether the AI system can make a critical decision or not.

--

Spintank & Renaissance Numérique
Aux sources du numérique

Aux sources du numérique is a monthly conversation with authors who think about the digital society. By Spintank and Renaissance Numérique.