Why the Deep Learning for AI Idea Might be Right, But for the Wrong Reasons

Joachim De Beule
Published in Code Biology
Dec 19, 2014 · 3 min read

It is not often that one is obliged to proclaim a much-loved international genius wrong, let alone two at the same time. Nevertheless, when it comes to the alarming predictions regarding artificial intelligence and the future of humankind, I believe both Professor Stephen Hawking and Professor Mark Bishop are.

More humbly, I do agree with both of them in some respects. In a previous post, I already argued why the deep learning for AI idea is flawed: it doesn't solve the symbol grounding problem or, as Bishop puts it, there is a “Humanity gap”. Machine learning, however deep, always remains “mechanical”. It is not “creative”: you never get anything out of it that wasn't put in. And like Hawking and Bishop, I am concerned about the (short-term) consequences of the ongoing revolution in AI, in terms of how it is controlled by a few big companies and governments and how this affects people’s lives.

The common assumption in these lines of thought is that the singular super-AI will be independent: something self-sustaining, separate from us. It might be running on a world-wide cluster of computers, controlling an army of robots, but in any case it will be something that “lives” or exists and learns on and by itself. This, I agree with Bishop, simply isn't possible with today’s state-of-the-art AI, and probably not with tomorrow’s either.

But what if this assumption is wrong to begin with? From what we see of real deep learning today, things indeed look quite different. It is not an image of an independent super-AI, but of a distributed and integrated AI that is constantly being trained and created by people. Each time someone uploads and describes a picture on Facebook or Google+ or Instagram, the AI is trained: it is fed information about how we see and describe reality. This is precisely the sort of feedback needed to overcome the symbol grounding problem and fill the humanity gap.

Moreover, what people teach the AI directly influences what they get back from it, and what we get back from it now covers a large part of our daily lives. It determines, for instance, what we get to see in our Facebook feed, and what we are “spared”. This in turn determines who we are.

There is thus a closed feedback loop, perhaps unintended and potentially harmful, which very much resembles something called active learning. In active learning, learning is achieved by performing actions and by comparing the result of those actions to what is expected. This is precisely what is happening now that deep learning is applied at mass scale and fueled by social media.
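To make the idea concrete, here is a minimal sketch of such a loop in Python. It is not code from any real system: the toy linear learner, the two-dimensional feature vectors, and the `true_label` function standing in for a human describing a photo are all illustrative assumptions. The learner “acts” on the item it is least certain about, compares its expectation to the human’s answer, and corrects itself when it was wrong.

```python
import random

random.seed(0)

# Toy world: items are 2-D feature vectors; the "human" answer is the
# sign of x0 + x1. This stands in for a person describing a photo.
def true_label(x):
    return 1 if x[0] + x[1] > 0 else 0

def predict(w, x):
    """The learner's expectation: a label plus a confidence score."""
    s = w[0] * x[0] + w[1] * x[1]
    return (1 if s > 0 else 0), abs(s)

def update(w, x, y, lr=0.1):
    """Nudge the weights toward the observed label (perceptron-style)."""
    sign = 1 if y == 1 else -1
    w[0] += lr * sign * x[0]
    w[1] += lr * sign * x[1]

w = [0.01, -0.01]  # an initially almost ignorant model
pool = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
unlabeled = list(pool)

for step in range(50):
    # Active step: act on the item the model is least sure about ...
    x = min(unlabeled, key=lambda item: predict(w, item)[1])
    unlabeled.remove(x)
    expected, _ = predict(w, x)  # what the model thinks the human will say
    observed = true_label(x)     # what the human actually says
    if expected != observed:     # ... and learn only from the surprises.
        update(w, x, observed)

errors = sum(predict(w, x)[0] != true_label(x) for x in pool)
print(f"misclassified items in the pool after training: {errors}/{len(pool)}")
```

The same shape of loop, scaled up, is what I am describing here: millions of people playing the role of `true_label`, and the platforms’ models playing the roles of `predict` and `update`.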

The brain is also believed to apply active or deep learning. Some even see active learning as the very secret of cognition and of life itself. The point is that the deep learning for AI idea, because of the social context and mass scale at which it is applied, is a form of active learning in which what is being learned is our human, distributed intelligence. In a sense, deep learning AI is what makes active learning possible on a world-wide scale. It provides the infrastructure for humankind to coordinate its collective intelligence into a single whole.

From this perspective, the deep learning for AI idea and Bishop’s concerns, as well as my own, are actually very much compatible. The humanity gap is bridged because it is still humans who ultimately bring in the intelligence, the actual experience and creation of reality, including the internet in which the AI “lives”. This does not lead to a new living intelligence, however. The AI cannot sustain or train itself, let alone compete with us. But it does lead to a new AI nevertheless, namely our collective, distributed human intelligence, which transcends each of us as individuals.

This point of view also softens Hawking’s concerns, because it suggests that AI, instead of being a threat, is our very own extension into a larger, distributed and perhaps “higher” intelligence. That said, the question remains: what then, if not extinction, will the emerging collective intelligence bring us? Will it be a brave new world, or one in which we have freed ourselves from some of today’s big problems? This will, I think, largely depend on how and for what purposes the emerging AI is applied, that is, on who controls the application and the infrastructure on which the AI runs. In this sense, Stephen Hawking’s and Mark Bishop’s fears for the future might be justified after all, although for the wrong reasons.
