Yes, We Will Live With Artificial Intelligence. But It Will Be Friend, Not Foe.
Boris Sofman

Why AI is not about consciousness

Ever since Lady Lovelace’s objection, people have wondered whether machines will one day be able to think. The answer to this question, however, depends on how we define thinking. As it turns out, consciousness and wisdom may not need to be part of the equation (or algorithm, haha).

Current state-of-the-art artificial intelligence (AI) treats every problem as an optimisation task, in which variables are adjusted to produce the best possible outcome. This outcome may be the solution to a math problem or the answer to the question: “is this a cat?”. These algorithms need vast amounts of data, which have been made available by the information explosion brought about by the internet economy. Combined with cheap computing power of the same origin, this information is used to train algorithms on fairly complex tasks, like playing chess or winning at Jeopardy!.
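To make the idea concrete, here is a minimal sketch of “adjusting variables for the best possible outcome”: a tiny cat-or-not classifier fitted by gradient descent. The data and labels are invented for illustration; any resemblance to real cats is coincidental.

```python
import numpy as np

# Toy "is this a cat?" classifier, treated as a pure optimisation task:
# gradient descent adjusts the variables (w, b) to minimise a loss.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 made-up examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # pretend label: 1.0 means "cat"

w, b, lr = np.zeros(2), 0.0, 0.1            # variables to adjust + step size

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probability of "cat"
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the cross-entropy loss
    b -= lr * np.mean(p - y)                # ... nudges the variables downhill

print("learned weights:", w, "bias:", b)
```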

Spooked by these advances, very smart people have started to warn about the implications for humanity. And these people are not the usual Luddites. In recent months Bill Gates, Elon Musk and Stephen Hawking have urged society to be more cautious. They see super-human intelligence not as likely, but as an existential risk, meaning we don’t get a second try. This is also the argument of Nick Bostrom, a Swedish philosopher at Oxford, in his recent book Superintelligence.

Consciousness and wisdom

This argument, however, sparks serious head scratching in the AI community. How can something as mundane as an optimisation algorithm resemble anything remotely like human intelligence? One might program a machine to win a board game, but human features like moral reasoning or consciousness are a whole other ball game. Even with substantial advances in data breadth and computing muscle, AI will only be good at what we tell it to be good at. And the better it gets at one thing, arguably the worse it gets at others. How could such a system ever become super-human?

One possible objection to this point is that we should not over-glorify the human mind. There is a strong case for consciousness being merely an illusion, or something programmers would call an abstraction: a method to cope with the brain’s vast parallelness (I know, it’s not a word. But neither is yolo…). We shouldn’t make the mistake of simply extrapolating our own form of intelligence. That would be as futile as extrapolating from early amphibians in the hope of arriving at humans. Smarter may be different. Maybe there is no need for consciousness and all the other stuff that makes human life so unpredictably messy (and beautiful).

The second feature that usually gets wrapped up with super-human intelligence is some form of wisdom: the ability to tell good from bad in all circumstances, something we humans aspire to but regularly fail to achieve. This might be as wrong as the idea of artificial consciousness. Morality is an outgrowth of human evolution, a way to cooperate in very complex groups of individuals, called society. Our brains even have dedicated modules for morality. Seeking purpose or truth or justice, therefore, could be a distinctly human trait, one that machines, even very smart ones, need not share.

Projecting wisdom on potential mega minds usually says more about the projector than the projectee. It usually comes with a certain lack of faith in humanity. Some may wish for a gentle Yoda-like chieftain, one who unlocks the secrets of the universe and guides us safely into the future, past the pitfalls of human imperfection. Others might conclude that a wise intelligence must surely reason that, in the grand scheme of things, the best solution would be to get rid of us bad humans entirely. Do we have a guilt issue?

Human-like problem solving

What we might fail to recognise is that current state-of-the-art AI algorithms already do what humans do, just differently. In principle, our whole life is an optimisation task. We optimise for wellbeing and pain avoidance, status and recognition, morality and purpose, and of course for survival. It is indeed a multi-objective optimisation problem, and we are arguably very bad at solving it. That is because our brains run on statistics and heuristics: shortcuts that can be fooled and that in some circumstances simply don’t apply. Interestingly, modern-day AI uses similar methods. It just has access to more data, or experience, if you like. Maybe, one day, we should ask machines how to run our lives. That would be bad news for the self-help industry.
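For the curious, here is what such a multi-objective optimisation might look like in toy form. The objectives, weights and candidate actions are all invented for the example; a real solver would search for Pareto-optimal trade-offs rather than a single weighted score.

```python
# "Life" as a multi-objective optimisation problem, reduced (crudely, much
# as our brains do it) to one weighted score per candidate action. All
# numbers are made up for illustration.

candidates = {
    "work late":  {"wellbeing": -2, "status": 3,  "survival": 0},
    "go running": {"wellbeing": 2,  "status": 0,  "survival": 1},
    "eat cake":   {"wellbeing": 3,  "status": -1, "survival": -1},
}
weights = {"wellbeing": 1.0, "status": 0.5, "survival": 2.0}  # heuristic prior

def score(outcome):
    # Scalarisation: collapse several objectives into one number. This is
    # exactly the kind of shortcut that can be fooled.
    return sum(weights[k] * v for k, v in outcome.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> "go running" under these made-up weights
```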

This, however, is still a long way off. Until then, AI researchers struggle with core concepts of human intelligence, including image recognition, concept formation and natural language processing. These problems are surprisingly difficult because we don’t even fully understand how the human mind does the trick. A solution could be self-directed learning: instead of training the algorithm, one trains the algorithm to train itself. And this is exactly the root of the prominent warnings.
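A heavily simplified sketch of that idea, assuming nothing more than a toy inner problem: an outer loop tunes the learner’s own training procedure based on how well the inner loop performed. Real self-improving systems (meta-learning, AutoML and beyond) are vastly more elaborate.

```python
# Toy "train the algorithm to train itself": the outer loop picks the
# learning rate that makes the inner gradient-descent loop perform best.

def inner_train(lr, steps=100):
    """Minimise f(x) = (x - 3)**2 by gradient descent; return the final loss."""
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * (x - 3)          # gradient of (x - 3)**2 is 2(x - 3)
    return (x - 3) ** 2

# Outer loop: the system experiments with its own training procedure
best_lr = min([0.001, 0.01, 0.1, 0.5], key=inner_train)
print("self-selected learning rate:", best_lr)
```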

A problem of control

Once machines learn to improve themselves, they would not stop at the human level. They would surpass us, and probably at break-neck speed. If this happens so fast that we can’t react, and if the AI manages to take control of vital systems of human civilisation, we won’t get another try. It would be very unfortunate if a super-human AI came to the conclusion that it needs to wipe us off the planet because we suck up all the valuable resources it needs to solve the task we set it.

To prevent such a tragedy (admittedly a highly unlikely one), we might want to think about how to program AI so that it serves our interests, even if it is smarter than us and even if it would be smart not to. Luckily, we have plenty of experience with this. We have been creating self-governing systems for centuries, if not millennia: parliamentary democracy, market competition, the scientific method, the concept of money. All of these are devised to deal with concentrations of power, whether power over freedom, over resources or over truth. The difference, however, is that if politicians screw us, we re-elect; if AI misbehaves, we’re toast. So a kill-switch might be appropriate, or other methods like hard-coded rules.
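What might “a kill-switch plus hard-coded rules” look like? A toy sketch, with invented action names and a made-up rule list; actual AI containment is an open research problem, not a dozen lines of Python.

```python
# A guard layer that vets every action an agent proposes, blocking anything
# on a hard-coded forbidden list and halting the agent when tripped.

FORBIDDEN = {"seize_power_grid", "disable_own_killswitch"}  # hard-coded rules

class KillSwitch(Exception):
    """Raised to halt the agent immediately."""

def guarded_execute(action, execute):
    if action in FORBIDDEN:
        raise KillSwitch(f"blocked and halted: {action}")
    return execute(action)

# Usage: benign proposals pass through, forbidden ones trip the switch
for proposed in ["fetch_data", "disable_own_killswitch", "never_reached"]:
    try:
        guarded_execute(proposed, execute=lambda a: print("executing", a))
    except KillSwitch as stop:
        print(stop)
        break
```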

The important thing, however, is not to over-react. It is one thing to determine that “this is a cat”. It is another to conclude “This is a bank, I want to break into the bank and steal money, which I will use to buy nuclear warheads on the black market and bribe terrorists to help me force humanity into submission.” That day is far off. The computing power required for these calculations (and the learning behind them) would exceed everything we have right now, even if everything we have were accessible, which it is not.

We don’t need to act now, and we surely don’t need regulation. But that doesn’t mean the debate is futile. On the contrary, we might want to think hard about the threats to our marvellous future. That may, however, require a conscious being.