The symbiotic relationship between humans and artificial intelligence

AGI, or ‘human-level AI’, is often considered the holy grail of AI research. It’s even in the name: human-level AI. Why are we so obsessed with reaching a human level? What is the relationship of AI to humans? And what is our role in a symbiotic relationship between human and computer?

Robot Sophia and Einstein on stage at Web Summit

How do we build a machine that can improve itself, not on one task, but on many? Brain researchers and AI researchers alike note that the only model we currently have of anything close to AGI (artificial general intelligence, a.k.a. human-level AI) is the human brain. The way our brain is built, with each neuron having thousands of synapses, is a great source of inspiration as long as we lack better alternatives. Our brain, for instance, filters very effectively, allowing us to take in a lot of input from our surroundings (sensory input, for example) yet process it at the necessary speed with limited capacity. We are able to learn without enormous amounts of data available to us. Plus, the brain is very flexible, especially compared to current AI systems, which are still very narrow. It’s not for nothing that we measure artificial intelligence against our own intelligence. The Turing Test is the most literal form. Goertzel et al. introduced two new tests in “The Architecture of Human-Like General Intelligence”: the coffee test and the robot college student test. Perhaps the most interesting variation on the Turing Test comes from Nilsson: the employment test.

“To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.” The real challenge is to make a self-improving, general-purpose intelligence, not to build a non-biological human.

A popular argument used to downplay advances in AI can be summarized by this quote from Peter Thiel:

“In 2012, one of their supercomputers made headlines when, after scanning 10 million thumbnails of YouTube videos, it learned to identify a cat with 75% accuracy. That seems impressive — until you remember that an average four-year-old can do it flawlessly. When a cheap laptop beats the smartest mathematicians at some tasks but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other — they’re categorically different.” Peter Thiel — Zero to One.

Or in other words: ‘yes, it’s impressive that computers can do that, but we will always be able to do some things better’. I think this is anthropomorphising intelligence. It’s also a matter of moving the goalposts: it’s not that long ago that we thought computers would never beat us at chess, and certainly not at Go.

Whether we will ever achieve artificial general intelligence is a debate in itself. The vast majority of researchers seem to agree that it will be possible at some point in time; whether that is in 2030, 2050, 2070 or 2090 is very hard to predict. According to this survey by Nick Bostrom from 2013 (later repeated among the participants of the AI Safety conference in Puerto Rico in 2015), the most pessimistic estimate (with 90% certainty) puts the arrival of AGI at 2075 at the latest; the median estimate for 50% certainty was 2040. At the same time, AI researchers have often been wrong in the past and seem to have a hard time making accurate predictions. It’s worth noting that a very small minority (2% at the Puerto Rico conference) thinks AGI will never happen. Besides that, there is a group of researchers that warns against hyping AI technology; Gary Marcus wrote a great paper on the limitations of AI, now and in the near future. In 2015 Wait But Why wrote two excellent summarising blogs (part 1, part 2) about this topic, and I don’t think there is much to add. (Read also this piece on aeon.co.)

In this post I would like to focus on the relationship between humans and computers when AGI occurs, and on the road towards it. In my previous post I already argued that the road towards AI is just as interesting. Whether we achieve superintelligence in the long run will make a great difference to our lives; but even if we don’t, AI will impact our lives greatly in the coming decades. Most analyses, like the ones above, seem to treat our relationship with computers as static. But history shows that this relationship has changed constantly, and it is very unlikely to become static from now on. When AI’s capabilities change, its relationship to humans will too. Looking at the current differences, I think there are four reasons why humans still outperform computers on most tasks, especially simple ones. It seems logical that in the short term we develop and enhance our symbiotic relationship, in which computers and humans strengthen each other. So let’s dive into the four biggest differences.

First, we humans have a very good understanding of our contextual environment. Empathy and contextual awareness are essential to human interaction, and we are born with a great intuition for both; throughout our lives, we develop these skills much further. They are what converts our intelligence into wisdom (at least occasionally). Current AI systems are notoriously bad at understanding the question behind the answer, let alone the question behind the question. They would do even worse at the five whys. Still, in essence, ‘human traits’ like empathy, contextual awareness and wisdom are ‘just’ a set of rules. According to many studies (1 & 2), like “The Future of Employment”, your best shot at long-term employment is to train yourself for jobs that put a high premium on creativity and empathy. But what is creativity in the end? According to Dictionary.com, creativity is the ability to transcend traditional ideas, rules, patterns, relationships, or the like, and to create meaningful new ideas, forms, methods, interpretations, etc. At first sight, this might sound like the exact opposite of what algorithms do: they follow patterns. However, it doesn’t take much imagination to let an algorithm explore the edges of the norm. With that, creativity becomes a matter of degree: how far do you stray from the normal? In the end, I don’t see a compelling reason why AI can’t learn creativity. I think the same can be said for other ‘human traits’: empathy, social cohesion, debating, comedy, the things that, according to many reports (1, 2, 3), we humans are good at. These traits are very complex, and the set of rules is lengthy, with many exceptions. But in the long run, I don’t see a really compelling reason why AI systems cannot translate our social norms, our preferences, and our debating skills into algorithms that perform them for us. We already see the first examples of empathy, comedy, and creativity.
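To make that “matter of degree” idea a little more concrete, here is a minimal, hypothetical sketch. A generative model’s “norm” is its probability distribution over possible next choices, and a single temperature parameter controls how far sampling strays from that norm. The numbers and the framing are illustrative assumptions, not a description of any particular system.

```python
# Minimal sketch: creativity as "degrees of straying from the norm".
# The logits below are made-up preferences of a hypothetical generative model.
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature):
    """Sample one option; higher temperature = more deviation from the most likely choice."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [3.0, 1.0, 0.5, 0.1]                   # the model's "norm": option 0 is the conventional choice
for t in (0.2, 1.0, 2.0):
    samples = [sample_with_temperature(logits, t) for _ in range(20)]
    print(f"temperature {t}: {samples}")
# At t=0.2 the output is almost always option 0; at t=2.0 the "edges of the norm" show up far more often.
```

Whether you call the high-temperature samples “creative” is of course a philosophical question, but the degree of deviation itself is just a dial.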

Second, we have more relevant information and can process it in a shorter amount of time. The abundance of data that we are born with, and our ability to obtain much more data throughout our lives, is something unique to humans today. We, Homo sapiens, have a 200,000-year head start on AI in processing data in the most effective way. Our sensory data is filtered very effectively by our brain; we store only a tiny percentage of the data we process. It’s not just raw computational power, but also the input that matters, much like a bat locating itself with echolocation: the bat doesn’t have more brainpower in general, but by having different sensory data it trained itself to understand echolocation. The human brain is very complex, and when we look at the universal range of applications we use it for, no computer comes near our skills. It’s estimated that a human brain comes pre-programmed with about 1.6 GB of data as part of our DNA; electrically we store about 10 GB of data, and chemically/biologically about 100 TB (numbers derived from Life 3.0). AI systems, by contrast, are usually fed only a couple of datasets and are taught to process them with a very specific, narrow goal in mind. AI lacks this intuition, and as computer scientist Knuth puts it:

AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’ — Donald Knuth

However, many of the breakthroughs in deep learning stem from trying to teach the AI to develop an intuition. By stacking and combining multiple algorithms, the AI learns to do things in a smarter way instead of brute-forcing its way through the dataset. This also allows us to achieve more with less data.
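As a loose illustration of that last point (a deliberately simplified sketch with scikit-learn, not one of the deep-learning systems this paragraph refers to), the snippet below combines two components: one learns a compact representation of the raw data, the other classifies on top of that representation, so that only a small amount of labelled data is needed.

```python
# Loose sketch: combining components (a learned representation + a simple classifier)
# so that a task can be solved with relatively little labelled data.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
# Pretend labels are scarce: keep only 100 labelled examples for training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=100, stratify=y, random_state=0
)

# Component 1 compresses the raw pixels into a compact representation;
# component 2 classifies on top of that representation.
model = make_pipeline(PCA(n_components=32), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"accuracy with 100 labelled examples: {model.score(X_test, y_test):.2f}")
```

The point is not the specific algorithms but the composition: each component does part of the work, so the whole needs far less brute force than a single model trained from scratch.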

Third, our world is shaped towards human cognition. We have externalised a big part of our intelligence to our society: we are smart because we are modules in a bigger whole, whether financial or political. It’s comparing apples to oranges to compare us as a whole, the hive mind, to one AI system alone, but this is a big factor in the differences in how we perform; AI systems usually stand alone. There also seems to be a limit to how much intelligence we as a society can absorb. Even the smartest people, with IQs well above 130, don’t seem to be much more successful throughout their lives than people with ‘just’ an IQ of 130. If others cannot understand and consume that wisdom, it is partly pointless. This limits the true potential of AI for now: AI can only advance as fast as humans can keep up with it. Finding the balance between human and AI will definitely be an important part of developing AI towards AGI. The human factor might be the one that limits AI development the most, and therefore it’s important that people with different backgrounds help shape AI systems and help AI researchers understand precisely how these systems work. It’s likely that we will have to rebuild some fundamentals of society to fully embed AI’s potential. The technical side is a lot easier to imagine than the societal changes that are probably required.

Fourth, and perhaps the most important difference, is consciousness. Can a computer become (or is it already) conscious? I want to sidestep the moral implications of consciousness for now. (It’s a fascinating topic, and I would encourage you to watch the Westworld series and listen to this podcast between Sam Harris and Max Tegmark for starters.) According to Steven Pinker, a psychologist at Harvard, consciousness can in the end be boiled down to a mathematical structure and always has a physical correlate.

Westworld, a series about a fictional theme park for psychopaths that explores the boundaries of ethical behaviour towards robots.
“The philosophical problem of sentience or qualia or (sometimes called) the hard problem of consciousness I think might ultimately be a quirk of our own way of analyzing the world — that is, the mind reflecting on itself is naturally going to be puzzled by some aspects of itself. We know from neuroscience that there is no aspect of consciousness that does not have some physical correlate. There’s no ESP. There’s no life after death. There’s no mysterious action at a distance. It’s all information-processing and neurons. Why it should feel like something to me to be that network of neurons, I don’t think we have a satisfying answer to.” — Steven Pinker
Steven Pinker on intelligence

Consciousness is not binary; it’s a matter of degree. There are differences in the level of consciousness between humans and other animals, between adults and children, and even between adults. Consciousness is a structure of thoughts, or at a deeper level just neurons. All of this combined makes me think that it’s not impossible to make a machine conscious. It will be a matter of degree, and a computer doesn’t necessarily have to stop at the human level. We can train our brain to become more conscious, but our physical limit will be different from that of a computer. I do believe this is the hardest of the four differences for computers to close, not least because it is also the area where we understand our own capabilities the least. Therefore I think this will also be the area where humans contribute the most to the long-term symbiotic relationship between humans and computers.

In the long term, it’s unclear to me what the human role will be. Will we just be the monkey in the loop that slows progress down? Will evolution take care of that problem? And if so, how? Clearly, we don’t have answers to those questions yet. Our biggest strength over AI is our conscious experience: we can give meaning and purpose to what AI will invent for and with us. Computers are very task-driven; if they don’t have a purpose, they don’t do anything. We are the ones that give them purpose. AI might be able to simulate empathy (and potentially consciousness), but it has no clear path to actually experiencing them. As Brynjolfsson noted, the saddest outcome would be if we invent AI systems as a way to kill each other and/or make each other miserable. If we don’t derive pleasure and meaning from our world, then wouldn’t it be just a colossal waste of space?

Are we able to design our own future? We already have big systems in place, like the financial markets and democracy, that we barely seem able to control. We have a hard time aligning our values with these systems, partly because we humans differ in what we want from them, but also because we don’t fully comprehend the finer details of their inner workings. With AI this challenge will be even bigger. By design, AI amplifies human behaviour: at least today, it is mostly trained on data created by humans (manually or mechanically). It trains itself by looking at our current world and trying to mimic it. While the technology might be neutral, the designers and users never are. The intuition that sets humans apart from computers relies on biases and subjectivity to work. If we don’t deliberately design AI to correct for human bias and fix our worst characteristics, it will amplify the worst that humans have brought to this world.

It seems to me that once we are further down the road, we will deviate further from this path of mimicking human society. As the AlphaGo Zero experiment showed, the human brain is not the most efficient form of cognition. As an inspiration it’s fine, but I don’t think it should or will be the final goal. And yes, humans have terrible traits as well; we don’t want AIs to manipulate, murder, and be self-centered. Therefore, it will be increasingly important to think about the purpose of our innovations. If we make improving the quality of human life the baseline and the overall goal of AI innovation, we can measure innovation against that goal: ‘Yes, your automated solution sounds cool, but does it really improve human life?’ It seems to me that we have currently adopted a more technologically deterministic approach, where technical capabilities drive innovation instead of purpose. We need to focus on shared positive visions for the future as a basis for collaboration.

Over the long term, the future is decided by optimists. — Kevin Kelly

In my next blog, I will go deeper into the symbiotic relationship between humans and computers: what is the current relationship, and what does the foreseeable future look like?