Can robots achieve emotional intelligence?

Sparrow
sparrow.science
Dec 22, 2017 · 2 min read

Researchers are developing AI to acquire human-like emotions.
But do we want robots as our carers?

In 10 seconds? Traditionally, machine learning has been used to model natural intelligence. Emerging research shows that robots can also be trained to have emotional intelligence. This has potential benefits for the care-giving sector, but it remains to be seen how humans will respond to human-like robots in their homes. (Discover the science here)

How can we encode emotions into machines? Teaching robots emotional intelligence is a different task from training them to make smart decisions, and it requires new neural network models. How do you teach a robot to detect anxiety or confidence, for example? New 'emotional' neural networks are being developed to explore how robots can pick up social cues, for instance from facial expressions or pupil dilation.
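To make the idea of an 'emotional' neural network a little more concrete, here is a minimal, purely illustrative sketch in Python (PyTorch): a tiny classifier that maps a handful of extracted facial cues to coarse emotion labels. The feature layout, label set and network size are placeholder assumptions for illustration, not the actual models used in the research above.

```python
# Illustrative sketch only: map a vector of facial-cue features
# (e.g. landmark distances, pupil dilation) to coarse emotion labels.
# The features and labels below are hypothetical placeholders.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "anxious", "confident"]  # assumed label set

class EmotionNet(nn.Module):
    def __init__(self, n_features: int = 16, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Raw scores; softmax turns them into per-emotion probabilities.
        return self.layers(x)

model = EmotionNet()
fake_cues = torch.randn(1, 16)  # stand-in for cues extracted from a face image
probs = torch.softmax(model(fake_cues), dim=-1)
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

In practice such a classifier would be trained on labelled examples of facial cues; the sketch simply shows the shape of the mapping from observed cues to an emotion estimate.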

What benefit will emotionally intelligent machines bring society? As a first step, scientists want AI to read how we feel, for example to help care for autistic children, elderly people or patients with mental health conditions. Robots can now be programmed to identify emotions in our facial expressions and in the tone of our voice. (Read more here)

Surely a robot’s emotions will never match the complexity of a human’s? Indeed, we are still far from building AI that’s able to experience the full spectrum of human emotions, but more and more emotional features are being developed to improve human-machine communication in the context of patient-centred care and other services. (Read more here)

How would you respond to a robot in your living room? The ‘Uncanny Valley’ theory states that humans feel more and more repulsed as a robot starts to closely resemble them. Recent studies support the idea that people become worried when they believe a robot can experience emotions. So perhaps get out more and mix with humans? (Check the science)

What is the 'Uncanny Valley' theory? 

In an essay written in 1970, Masahiro Mori, a robotics professor at the Tokyo Institute of Technology, suggested that people develop affinity towards robots as the machines become more human-like.

However, as a robot reaches near-human likeness, the graph of respondents' affinity dips sharply into revulsion. This dip is known as the 'Uncanny Valley'.

This research was curated by Daniela Seucan, Sparrho Hero and PhD student investigating the socio-cognitive mechanisms that can make deceptive behaviour maladaptive, at the Babes-Bolyai University in Cluj-Napoca, Romania.

(Psst, Daniela distilled 17 research papers to save you 510.5 min)


Steve, the sparrow, represents contributions from the Sparrow Team and our expert researchers. We accredit external contributors where appropriate.