Co-evolving with the future of AI
Artificial Intelligence is here to stay. We no longer see it only in movies, TV series, or books. Right now, cars drive themselves, and there are systems that can automatically compose a realistic video of one person’s head speaking naturally atop another person’s body. Last October, the first painting ever created by an AI was auctioned at Christie’s in New York, and the final price reached almost half a million dollars. And a couple of months ago, an orchestra in London performed Schubert’s unfinished symphony, completed in this case by an AI system created by Huawei.
“AI is just a tool and, like any other tool, it is neither good nor bad; it depends only on who uses it and what for”.
There is no doubt that we are surrounded by AI, and this amazes and scares people in equal measure. But why? Why are people so worried about it? In the end, AI is just a tool and, like any other tool, it is neither good nor bad; it depends only on who uses it and what for. We have this never-ending debate about AI trustworthiness, but the truth is we don’t even trust each other. Why are we so eager to decide whether to trust or distrust AI when we don’t trust humans in the first place?
Much of the fear comes from the unknown and from the fantasies we build around it. Artificial Intelligence is a relatively young discipline, so its legal and ethical foundations are only beginning to be developed, and we still need to learn how to manage the relationship between science and engineering within the business arena. At the current pace of development, sensationalist headlines aside, we still have time to decide whether or not we need machines to be self-aware. Can you imagine what would happen if we created AI systems exactly equal to the human mind, with all its doubts, fears, anxieties, and depressions? We would surely spend most of our money on psychologists for our robots!
In some sense, from a third-person point of view, human brains and computers are very alike: both have inputs and outputs, and the main work happens in between, like a black box. Many of the processes running in these black boxes are profoundly subsymbolic, composed of myriads of electrical signals dynamically firing through a vast and complex network of neurons. Trying to explain them in human language is almost impossible.
“The most important part of an AI is the data you use to feed the system”.
There is a fundamental misconception about AI. If you ask someone where the intelligence is located in the “input -> black box -> output” scheme, most people place it in the middle of the process, but the truth is that the most important part of an AI system is the data you use to feed it. Even the best algorithm ever devised, if fed with low-quality data, will produce poor and mediocre results (Garbage In, Garbage Out).
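The Garbage In, Garbage Out effect is easy to demonstrate. In the minimal sketch below (hypothetical toy data, not from any real system), the same fitting procedure is applied twice to data generated from the rule y = 2x: once with clean labels and once with corrupted ones. The algorithm never changes; only the data quality does, and so does the result.

```python
import random

def fit_slope(xs, ys):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

random.seed(0)
xs = [x / 10 for x in range(1, 101)]

# Ground truth: y = 2x (high-quality labels).
clean_ys = [2.0 * x for x in xs]

# Same inputs, but the labels are corrupted with heavy noise
# (simulating a low-quality dataset).
noisy_ys = [y + random.uniform(-5.0, 5.0) for y in clean_ys]

clean_slope = fit_slope(xs, clean_ys)   # recovers ~2.0
noisy_slope = fit_slope(xs, noisy_ys)   # drifts away from the true value
```

The "algorithm" is identical in both runs; only the quality of the training data determines whether the learned model matches reality.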
So the first step toward great AI systems is to feed them with good-quality data. Next, we have to ask the following: out of the wide range of problems we want machines to solve automatically, is there any subset that requires self-consciousness to be solved? Do we really need to cope with the full complexity of the human psyche in order to achieve human-like intelligence in machines?
Elon Musk claims that if we want to prevent the tyranny of Artificial Intelligence, humans and machines must develop a symbiotic relationship. We need a brain-computer interface that would enable humans and super-intelligent machines to coexist peacefully. And that could be achieved “by improving the neural link between your cortex and your digital extension of yourself”.
This is the road AI is taking. Little by little, we have been adding layers to human capacities: taming, new technologies… We can also extend sensory abilities and physical abilities (exoskeletons), but can we extend mental abilities? To extend a mind, we need appropriate interfaces. The scheme is as follows:
Both artificial systems and humans have input/output subsystems. In traditional human-machine interaction, data is exchanged via these subsystems. But we could eliminate the “unnecessary” proxies, removing the boundaries between the human body and its extended capabilities. We would connect mind and machine directly: instead of seeing information on a screen, we would feel it directly, as an extended mental capacity.
Therefore, the main issue is not determining whether machines will be more intelligent than humans; the real problem is how we will co-evolve with machines. The decisions we have to make concerning an ethical code for AI should be focused more on this area.