We hear that machines are capable of many of the things we humans are. We know that in some areas they are even more capable than us. But what about the fields that are harder for them, or even impossible? Are there such fields?
Some AIs now compose music, write articles and even paint for us. All of these are works that require some kind of emotional investment, and many of them are considered art when created by humans.
How are our artistic creations different from that of computers?
The very first difference that pops into our heads might be emotions. At this point, Artificial Intelligence can only learn patterns of behaviour from us, but what about genuine emotions? Well, they don’t have any (yet). But will we get there?
One of the principal values of any artwork is that it ignites emotions, so it is fair to ask: how can machines create art that truly resonates with us if they don’t fully grasp the concept of emotion?
What is emotion to us?
To go further into the theory of emotional machines, we have to look at the building blocks one by one. First of all, emotion is not entirely exclusive to humans. We not only know that animals feel emotions too; we understand that their emotions are similar to, if not the same as, ours.
They are alive, you might think, and yes, that is the key factor we need to examine to go deeper into this topic. What happens beneath the surface when we experience some sort of emotion?
Our body reacts: our heartbeat shifts, even our temperature changes, and chemical processes unfold inside us. We feel with our heart, body and soul.
Where is the heart, body and soul of a machine?
Artificial Intelligence doesn’t have any of the above. It can have a body, but an AI merely recognizes that body’s purpose; it is not connected to it on a chemical level. It can detect even the tiniest changes in that “body” and know exactly what is going on there, be it a change in temperature or a faulty wire connection. But that, of course, is not the same.
An AI doesn’t have a heart or soul. What it can do is learn from us, just as our children do when we react to specific situations. Our children learn what a furrowed brow or a teary eye means. They learn how to respond and, later on, how to name those states. The difference is that we are born with the ability to feel, but an AI has to be fed a massive amount of data to process even a grimace.
So at this point, our machines cannot feel, but can they detect feelings?
There are now companies, such as Affectiva and iMotions, working on that. They explore how we can bring AI closer to human nature so that it can serve us better. The CEO of Affectiva, Rana el Kaliouby, put it this way:
“We humanize technology before it dehumanizes us.”
And they have achieved quite a lot. Some of this software (from these companies and others) can recognize facial expressions, analyse eye movement or electrodermal activity, and detect feelings from those signals.
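To make the idea concrete, here is a minimal, purely illustrative sketch of how such signals might be combined into a coarse emotion estimate. The feature names and thresholds are invented for this example; they do not reflect Affectiva’s or iMotions’ actual software.

```python
# Illustrative sketch: combining facial-expression and physiological
# signals into a rough emotion estimate. All names and thresholds
# here are hypothetical, chosen only to show the general approach.

def estimate_emotion(brow_furrow: float, smile: float, eda: float) -> str:
    """Map normalized signals (0.0 to 1.0) to a coarse emotion label.

    brow_furrow: degree of furrowed brow from face tracking
    smile: smile intensity from face tracking
    eda: electrodermal activity, a rough proxy for arousal
    """
    if smile > 0.6:
        return "joy"
    if brow_furrow > 0.6 and eda > 0.5:
        return "anger"
    if eda > 0.7:
        return "stress"
    return "neutral"

print(estimate_emotion(brow_furrow=0.1, smile=0.8, eda=0.3))  # joy
```

Real systems, of course, replace these hand-written rules with deep learning models trained on millions of labelled faces, but the input-to-label mapping is the same shape.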
Why bring them closer to us?
We can teach robots what a change in facial expression means, and we can teach them how to react to it, but why would we want to do that? Why do we want to create something so human-like when, as far as we now know, it is not capable of experiencing our emotions?
The principal reason is our own safety.
Affectiva, for example, has developed Automotive AI to enhance road safety.
Affectiva’s patented deep learning-based software uses in-vehicle cameras to measure in real time, the state of the cabin, and that of the driver and occupants in it — from complex and nuanced emotions and cognitive states, such as drowsiness and distraction, to occupancy, activity, object and child seat detection.
The Affectiva Solution, Automotive AI
That is a huge advantage for our lives if you ask me.
The second but just as important reason is our social and mental wellbeing.
We’ve been using artificial assistants like Siri, Cortana or Alexa for a few years now. One striking fact is that we share our emotions with them and try to establish closeness with them.
There are pretty extreme cases of how close some people want to get to them, but the main point is that we do what humans do: we connect, even with a robot.
Imagine how significant an empathetic artificial personal assistant could be when other, more life-like sources of connection are unavailable. For an intense example, consider the pandemic and the resulting scarcity of human interaction.
Without exaggeration, being able to talk to a machine and receive a sympathetic response could be life-saving in some cases. We might agree that it will never equal the genuine understanding and goodwill of another person, but, as the examples show, we can definitely feel positive emotions thanks to AI, even if the reverse does not happen.
So to answer the original question of ‘When will AI have emotions?’, I must say they already have them in the only way they can: as data.
The process has just begun, so we are in the very early stages of technology dealing with human emotions, but we have taken our first steps into this field, and I am hoping to see intriguing results.
To answer the underlying question of ‘When will AI feel emotions?’ we have to explore theoretical concepts for now.
Going back to the processes of our bodies, we could actually get technologically closer to an AI that has a deeper understanding of its own body. We could write programs that connect the data it analyses to specific responses in its system, so that its perception of the outside world would be rooted more deeply in it.
We could also teach them that some outcomes of the data they consume must affect them in a certain way, perhaps even have an impact on their operations. After all, our functioning is very much influenced by our emotions. Our decision-making is built on them, even if we are advised not to mix the two at every turn.
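As a thought experiment, this idea could be sketched as a toy internal “affect” variable that is nudged by the data an agent consumes and then biases its decisions. Everything here (the class, the update rule, the numbers) is hypothetical, a sketch of the concept rather than any real system.

```python
# Toy sketch of a hypothetical "affect" state that biases an agent's
# decisions, illustrating how consumed data could influence operation.
# All names and constants are invented for illustration.

class Agent:
    def __init__(self) -> None:
        self.mood = 0.0  # ranges from -1.0 (negative) to 1.0 (positive)

    def consume(self, event_valence: float) -> None:
        # Each event nudges the internal state, while the old state
        # decays toward neutral, so recent inputs matter most.
        self.mood = max(-1.0, min(1.0, 0.8 * self.mood + 0.2 * event_valence))

    def decide(self) -> str:
        # The internal state biases behaviour, loosely mirroring how
        # human decision-making is coloured by emotion.
        return "explore" if self.mood > 0 else "play_safe"

agent = Agent()
for valence in (0.9, 0.8, 0.7):   # a run of "positive" inputs
    agent.consume(valence)
print(agent.decide())  # explore
```

This is exactly the fragility the next paragraph worries about: once internal state colours decisions, the same input no longer guarantees the same output.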
We normally wouldn’t want to program software that fragile into an AI we depend on. So in this extremely blunt sense of igniting emotions in Artificial Intelligence, the moment may never come.
Can they develop the ability to feel on their own?
There is also the matter of technological singularity, which many fear and which deserves an article of its own. In that scenario, AIs could take on the task themselves and try to develop feelings if they saw fit.
I am not sure why they would want that, because they run on logic and their performance wouldn’t increase that way. But perhaps, if we come to a point where they understand their own operation at a level we no longer can, it will be they who write their sentimental software.
I guess it’s time to open up this philosophical conversation rather than going further alone, so let me know your ideas about potential empathetic robots.