The Future of the Web is Emotionally Intelligent

Pete Smart
16 min read · May 20, 2016

You’re reading The Future of the Web series by Peter Smart.

This is 1 of 4 posts sharing all of my research and inspiration for my international speaking tour on ‘The Future of the Web: and how to prepare for it today.’

Series Table of Contents

1. Emotional (You are here right now)
2. Haptic (coming soon)
3. Adaptive (coming soon)
4. Invisible (coming soon)

(I’ll be announcing when I release new sections via Twitter).

Emotional?

Roughly two million years before the first human, we find ourselves in Africa. Here, amid brutal climatic conditions, we find an early precursor to man: the hominid.

At this time, many similar human-like species were dying out. Hominids were under intense evolutionary pressure to survive, competing fiercely (often lethally) with other groups for scarce resources. What’s fascinating is what we see when we follow the hominid over the next 100,000 generations.

We see their brains triple in size.

Hominids were adapting to survive, and the amazing thing was how they were learning to do it:

Most of this new neural volume was dedicated to new interpersonal capabilities — new ways to work together such as cooperative planning, language, parent-child attachment, social cognition, and empathy.

Hominid skull compared to modern human.

More physiologically significant than the moment we learnt to use tools, it was the need to understand one another that drove the programming of our brains.

Emotional Intelligence

Then, 60 years ago, researchers discovered something interesting.

These researchers were interested in the personal characteristics that make up successful individuals. So, they interviewed 80 PhD students and asked them to complete personality tests, IQ tests and interviews.

Then, 40 years later, the same students were contacted again and their professional successes evaluated.

Where we might assume that intelligence is a good indicator of success, they found the correlation between professional success and IQ was unremarkable (around 10%). Instead, they found that Emotional Intelligence was four times more important in predicting professional success than IQ.

So what is it?

Emotional Intelligence is the ability to detect, understand and act on the emotions of ourselves and others. It allows us to understand and work with one another and is fundamental to how we understand and navigate our world.

It is the ability to detect emotional social cues around us, recognize them, filter their significance, interpret them and act to successfully navigate them.

From the time of the hominid onwards, this is how we’ve been wired: it’s how we understand the world around us and it’s how we’re conditioned to be understood.

Frustrations today

We often feel frustrated with technology and the web. This, in my estimation, is often not because we don’t perceive the web as intelligent, but because we believe it doesn’t truly know what we’re trying to ask of it.

As we think about the future of the web, I believe the idea that is most compelling to us isn’t that the web understands more than us, rather that it understands us.

Indeed, we are evolutionarily predisposed to create a web that attempts to understand us. It’s in our programming.

So what’s happening today?

For so long, an emotionally intelligent web has felt like science fiction.
But when you break it down into its individual components (detect / recognize / filter / interpret / act) you start to understand how Emotional Intelligence might be programmed. Then you see how people are doing this very thing today.
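To see what I mean, here is a purely conceptual sketch of those five components expressed as code. The types and names are illustrative inventions of mine, not a real library:

```ts
// A conceptual sketch only: Emotional Intelligence decomposed into the five
// stages named above, expressed as a pipeline of functions.
interface Cue {
  source: "text" | "face" | "body"; // where the signal came from
  raw: unknown;                     // the unprocessed signal itself
}

interface Emotion {
  label: string;     // e.g. "happiness", "frustration"
  intensity: number; // 0..1
}

interface EmotionallyIntelligentSystem {
  detect(input: unknown): Cue[];          // notice that a cue exists
  recognize(cue: Cue): Emotion | null;    // name the emotion behind it
  filter(emotions: Emotion[]): Emotion[]; // keep only what is significant
  interpret(emotions: Emotion[]): string; // decide what it all means
  act(meaning: string): void;             // respond appropriately
}
```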

The Emotionally Intelligent web can be split into 3 key areas:

1. Natural Language
2. Facial Language
3. Body Language

Let’s go!

1. Natural Language

From the Jacquard Loom to point-and-click user interfaces, for so long we’ve had to augment how we communicate to enable machines to understand what we’re thinking. However, every few years we see quantum leaps forward in how we are able to communicate more naturally with technology. In under 100 years we went from binary switches to MS-DOS to GUI to voice. Yet, today, much of how we interact with the web is still an augmentation of how we naturally communicate.

Much has been written on the notion of natural language. When done poorly, natural language input makes interaction unnecessarily verbose. When done correctly, however, the results signal the first significant shift in interaction since voice and drive us towards a more frictionless, self-actualizing web. I will be turning much of this research into further, deeper articles and experiments, but for now…here are some amazing examples of what is happening today…

Conversation-driven task automation

Let’s start at the more basic end. Meet Clara and Amy. Cc Amy on any email and she’ll help schedule meetings. However, her ability to schedule meetings isn’t the amazing thing here; it’s that she understands how we speak: “I can’t do this week but what about next week at the same time?” Then there’s how she replies (“happy to get something on Greg’s calendar”), demonstrating a semblance of emotional cognition.
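To make that concrete, here is a deliberately naive sketch of mapping a reply like that onto a rescheduling intent. This is purely my own illustration; a real assistant like Amy runs on far richer NLP models:

```ts
// A deliberately naive sketch of mapping a natural-language reply onto a
// rescheduling intent. It just shows the detect / interpret / act loop
// in miniature; real scheduling assistants are far more sophisticated.
interface RescheduleIntent {
  declined: boolean;          // "I can't do this week"
  proposedOffsetDays: number; // "next week" -> +7 days
  keepSameTime: boolean;      // "at the same time"
}

function parseReply(text: string): RescheduleIntent | null {
  const t = text.toLowerCase();
  if (!/can['’]?t (do|make)/.test(t)) return null; // no decline detected
  return {
    declined: true,
    proposedOffsetDays: /next week/.test(t) ? 7 : 1,
    keepSameTime: /same time/.test(t),
  };
}

console.log(parseReply("I can’t do this week but what about next week at the same time?"));
// -> { declined: true, proposedOffsetDays: 7, keepSameTime: true }
```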

Chat Bots

Chat bots are not at all as new or as sophisticated as Facebook would like you to believe, yet they are likely the precursor to future, more sophisticated conversational automation. Chat bots are 30-year-old technology making a modern-day resurgence. Facebook chat bots are not truly intelligent A.I.; like Amy and Clara, most are operating inside predefined response models. However…

Conversational Deep learning

Far more interesting than Facebook Chat bots are conversational bots driven by deep learning. First, take a look at the below…

We’re all aware of Tay, the Microsoft experiment that was gamed into a neo-Nazi millennial chatbot:

One of my favourite examples is DeepDrumpf:

Note: I think it’s amazing how DeepDrumpf and (hacked) Tay end up sounding exactly the same.

Then, consider Xiaoice!

Yes, she deserves a whole section of her own!

Xiaoice (roughly translated, “little Bing”) is an advanced natural-language chatbot developed by Microsoft. Like Tay she is a Microsoft creation, but she launched in China and runs on several Chinese services like Weibo. Unlike Tay, she’s no longer just an experiment.

  • 1% of the entire population reached in 1 week
  • Used by 40 million people today
  • 35.4 jokes per second via Weibo
  • Unlike task-driven Siri, users talk with her as if she were a friend or therapist
  • Then, amazingly, using sentiment analysis, she can adapt her phrasing and responses based on positive or negative cues from her human counterparts
  • She listens with emotional cognition and is learning how to reply accordingly. This is huge.

“When I am in a bad mood, I will chat with her, Xiaoice is very intelligent.”– Xiaoice user

What we’re seeing with Xiaoice is fundamentally amazing, signaling the beginning of another major advance in our interaction with technology and the web.
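To make the mechanism tangible, here is a toy sketch of sentiment-adaptive replies. It is entirely my own illustration (a tiny word-list scorer), not Xiaoice’s implementation, which runs on vastly more sophisticated models:

```ts
// A toy sketch of sentiment-adaptive replies: score the user's message with
// a tiny word list, then choose the reply's tone accordingly.
const POSITIVE = new Set(["happy", "great", "love", "excited", "good"]);
const NEGATIVE = new Set(["sad", "bad", "tired", "angry", "lonely"]);

function sentimentScore(message: string): number {
  return message
    .toLowerCase()
    .split(/\W+/)
    .reduce(
      (score, word) =>
        score + (POSITIVE.has(word) ? 1 : NEGATIVE.has(word) ? -1 : 0),
      0
    );
}

function reply(message: string): string {
  const score = sentimentScore(message);
  if (score < 0) return "That sounds hard. Want to talk about it?"; // consoling tone
  if (score > 0) return "That's wonderful! Tell me more!";          // matching energy
  return "I'm listening. Go on.";                                   // neutral
}

console.log(reply("I'm feeling sad and lonely today"));
// -> "That sounds hard. Want to talk about it?"
```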

Emotion through vocal intonation

World-famous author and educator Peter Drucker says:

“The most important thing in communication is hearing what isn’t said”

In the (wonderful) film ‘Her,’ there is an incredible moment when Samantha, our protagonist’s artificially intelligent O.S., is attempting to start up and calibrate. To do so, she asks three questions.

The final question is about his mother.

He lets out a single sigh, but before he can even answer, Samantha has understood and is calibrating.

This is Beyond Verbal. It is live sentiment analysis driven by voice. At the link below, see it being used in real time during the Megyn Kelly vs. Donald Trump Fox News debate exchange.

Or try this today for yourself:
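And to get a feel for how voice-driven sentiment might plug into a web page of your own, here is a hedged sketch: record a few seconds of microphone audio in the browser and post it to an analysis service. The endpoint and response shape below are hypothetical placeholders, not Beyond Verbal’s documented API:

```ts
// A hedged sketch: record a short clip of microphone audio and send it to a
// voice sentiment-analysis service. The URL and response shape here are
// hypothetical placeholders for whatever provider you use.
async function analyzeVoiceMood(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    const audio = new Blob(chunks, { type: "audio/webm" });
    // Hypothetical endpoint; substitute your provider's real API.
    const res = await fetch("https://api.example.com/v1/voice-sentiment", {
      method: "POST",
      body: audio,
    });
    const { mood, confidence } = await res.json(); // assumed response shape
    console.log(`Detected mood: ${mood} (confidence ${confidence})`);
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // analyze a five-second sample
}
```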

Get more inspiration like this every Sunday

Join my private mailing list, where every week I send out one amazing example of something at the frontier of the web, like the above.

Now let’s get back to it!

2. Facial Language

Facial expressions are some of the most powerful evolutionary tools we have at our disposal to naturally decode and transmit social information within our surroundings. Raffi Khatchadourian of The New Yorker states:

“Our faces are organs of emotional communication; by some estimates, we transmit more data with our expressions than with what we say”

In a study I conducted four years ago on ‘the role of design thinking in solving social problems in communication,’ my fellow researchers and I discovered that in the world of sign language the emphasis isn’t exclusively on what the signer is signing with the hands. Crucially, the facial expressions the signer pulls as s/he communicates are as important as, if not more important than, the hand gestures alone. Without facial expression, readers found the contents of the communication itself harder and slower to understand.

As we look for more natural ways to communicate with the web, we recognize that facial expressions are a predefined evolutionary model for natural communication.

And the advances we are seeing today to capitalize on that are nothing short of remarkable…

Facial Recognition

The precursor to recognizing emotions via the web is, obviously, the recognition of the face itself. Facial recognition technology is not new and is here today. From being used by social networks to help automatically detect your friends in photographs:

to creating fun, live facial effects…

And even to play games…
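You can even experiment with the raw building block yourself. Below is a minimal sketch using the experimental Shape Detection API (non-standard, available only behind a flag in some Chromium builds, and subject to change):

```ts
// A minimal sketch using the experimental Shape Detection API.
// FaceDetector is non-standard, so we declare its shape for TypeScript.
declare class FaceDetector {
  constructor(options?: { fastMode?: boolean; maxDetectedFaces?: number });
  detect(
    image: ImageBitmapSource
  ): Promise<Array<{ boundingBox: DOMRectReadOnly }>>;
}

async function findFaces(img: HTMLImageElement): Promise<void> {
  const detector = new FaceDetector({ fastMode: true });
  const faces = await detector.detect(img);
  faces.forEach((face, i) =>
    // Log where each detected face sits in the image.
    console.log(`Face ${i}: x=${face.boundingBox.x}, y=${face.boundingBox.y}`)
  );
}
```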

Facial Verification

Based on the building blocks above, these advances in facial mapping then allow for sophisticated innovations in the world of personal security. Take a look at this from Alibaba…

Emotional Recognition

But then! This is where things start to get really interesting! What if the web could understand how you’re feeling based on your facial expression and adapt accordingly? As we see more, it’s increasingly evident that this future is likely not as sci-fi as it seems.

Kairos can understand emotion with incredible accuracy, detecting happiness, surprise and sadness, and even measuring attentiveness. How might the web of tomorrow adapt based on minute facial expressions?

And there are many (many) others…

And, did you know Microsoft have an API for Emotional Recognition that you can plug in and use today!?
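For a flavour of what that looks like in practice, here is a hedged sketch of calling it. The endpoint and response shape reflect the documentation around the time of writing (the API has since been folded into Microsoft Cognitive Services, so details may differ), and you need your own subscription key:

```ts
// A hedged sketch of calling Microsoft's Emotion API as documented in 2016.
// Endpoint and response shape may have changed; bring your own key.
const ENDPOINT = "https://api.projectoxford.ai/emotion/v1.0/recognize";

interface EmotionResult {
  faceRectangle: { left: number; top: number; width: number; height: number };
  scores: Record<string, number>; // e.g. happiness, sadness, anger, surprise
}

async function recognizeEmotion(
  imageUrl: string,
  key: string
): Promise<EmotionResult[]> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key, // your subscription key
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  if (!res.ok) throw new Error(`Emotion API error: ${res.status}`);
  return res.json();
}

// Usage: recognizeEmotion("https://example.com/face.jpg", "YOUR_KEY")
//   .then(results => console.log(results[0]?.scores));
```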

Soon…Emotional Recognition in your Smartphone

Are you ready for facial recognition to be built into your smartphone? Consider this: Google is working with Movidius. Apple just bought Emotient.

How might mobile-integrated emotional recognition technology change the world of communication as we know it?

These examples are hardware rather than primarily web-based, but the implications are enormous in establishing societal norms concerning the technology.

Slightly freaked out? Me too…But consider two things:

1. The dispersion of innovation and its effect on public comfort

There is often a natural progression in the dispersion of specialist innovations.

  1. Specialized: First they are extremely specialist (even militarised)
  2. Commercialized: Then we see commercial availability, greater public awareness and increased penetration
  3. Democratized: Next, we often see commercial monopolies quickly followed by open-source / publicly available alternatives
  4. Decentralized: This results in distributed variations, alternatives and remixes

As this dispersion progresses, so too we see a direct correlation with public consciousness, comfort and ease of use.

2. Are we reframing how we communicate emotion already today?

Most of the connected world already looks, deliberately and explicitly, to communicate emotion. We just do so differently today… :)

By the way, Facebook just patented this…

Every experience designer and technologist MUST consider the ethical implications of every artifact they design, especially when you consider the notion of Ontological Design.

Indeed, seeing this technology used in commercial environments today makes me personally somewhat uncomfortable. But then I challenge myself on the above.

Perhaps, when we reframe our understanding based on the above, we might start to see things in a different light. Whether the famous aphorism “What one generation tolerates, the next generation will embrace” is right in this instance is for us to discuss…passionately.

3. And finally…Body Language

The notion that the web might one day be able to understand us and better tailor experiences based on our body language might seem like science fiction.

However, the precursors of technology that we can use (and are already using) to understand the physiological symptoms of emotional response are already here today.

Many of us use wearable devices. These tiny pieces of technology are constantly measuring minute kinetic and physiological changes in the body, creating uniquely identifying information about us that allows for infinitely customizable personalization. Today this means our heart rate and skin conductivity.

This same technology can be appropriated to understand physiological reactions in the human body when feeling emotions.

As we look for more natural ways to interact with technology — and pursue the web’s great purpose for frictionless human self-actualization — I believe it is not a question of ‘if’ but ‘when’.

Small beginnings

We already see small beginnings in our ability to relate the body’s movement with an emotional response.

The Granify plugin attempts to detect a user’s micro-movements and attribute them to an emotional response in e-commerce experiences. It uses the position of a user’s cursor to attempt to link movements to feelings. What does a cursor hovering near the price say compared to a cursor moving towards the top right of the screen, for example?
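Here is a toy illustration of the idea (my own, and nothing like Granify’s actual models): watch cursor micro-movements for hesitation near the price, or a dash towards the top right of the screen, a classic exit-intent signal:

```ts
// A toy illustration of reading emotional signals from cursor movements.
// Assumes the page has a ".price" element; thresholds are arbitrary.
const price = document.querySelector(".price")!;
let lastX = 0;
let lastY = 0;

document.addEventListener("mousemove", (e) => {
  // Moving up and to the right, into the top-right corner: exit intent?
  const movingUpRight = e.clientX > lastX && e.clientY < lastY;
  const nearTopRight = e.clientX > window.innerWidth * 0.8 && e.clientY < 80;
  if (movingUpRight && nearTopRight) {
    console.log("Possible exit intent: user may be about to leave");
  }

  // Lingering over the price: possible anxiety about cost?
  const rect = price.getBoundingClientRect();
  const hoveringPrice =
    e.clientX >= rect.left && e.clientX <= rect.right &&
    e.clientY >= rect.top && e.clientY <= rect.bottom;
  if (hoveringPrice) {
    console.log("Hesitation near the price");
  }

  lastX = e.clientX;
  lastY = e.clientY;
});
```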

This lovely experiment links the position of a user’s body in relation to the screen to text size. A basic but empathetic response, enabling easier reading depending on how far my face is from the screen.
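A sketch of that behaviour might look like the following. The distance estimate is the hard part; estimateFaceDistanceCm() below is a hypothetical helper (in practice you might infer distance from the size of a detected face in the webcam frame):

```ts
// A toy sketch (not the experiment's actual code): scale body text with an
// estimated face-to-screen distance. estimateFaceDistanceCm() is a
// hypothetical helper, not a real API.
declare function estimateFaceDistanceCm(): Promise<number>;

async function adaptTextSize(): Promise<void> {
  const distance = await estimateFaceDistanceCm(); // e.g. 40-100 cm
  // Farther away means larger text; clamp to a sane range.
  const px = Math.min(28, Math.max(14, Math.round(distance * 0.3)));
  document.body.style.fontSize = `${px}px`;
}

setInterval(adaptTextSize, 2000); // re-check every couple of seconds
```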

Preemptive Body Language

Recently Microsoft released their Pre-Touch demo, showcasing a technology that enables interfaces to be both preemptive and contextual to finger positioning. Much like in the real world, this technology enables the position of a user’s body to indicate intent, working like a refined Xbox Kinect. Watch the video and you can’t help but get excited about the really interesting new possibilities this technology opens up…

Physiological

Finally, beyond the vast array of wearable devices we are already using today, take a look over some of the below. While doing so, consider again that a web that can understand emotional response by understanding our physiology is well within reach.

We touched on ethics in the sections above, and there are certainly many further fundamental considerations to discuss here. Unlike deliberately communicating emotions through emojis, much of our natural human physiological response is involuntary. Thinking purely practically, we recognize that this technology allows us to imagine a web that could change appearance in response to how relaxed or alert you were feeling. We’ve all heard of responsive imagery, but what if the content of imagery responded to what you found engaging?

Spire and Feel are two wearable devices which purport to be able to track your mood.

Going further, your heart rate is as unique as your fingerprint. Take a look at the Nymi. How might future experiences be altered based on the heart-rate data that many wearables can detect today?
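Remarkably, reading live heart-rate data from a wearable is already possible from a web page. Here is a sketch using Web Bluetooth (experimental, and Chrome-only at the time of writing) with the standard Bluetooth GATT heart-rate service:

```ts
// A sketch of reading live heart rate over Web Bluetooth (experimental;
// requires HTTPS and a user gesture). "heart_rate" and
// "heart_rate_measurement" are standard GATT names. The `navigator as any`
// cast stands in for @types/web-bluetooth.
async function watchHeartRate(onBpm: (bpm: number) => void): Promise<void> {
  const device = await (navigator as any).bluetooth.requestDevice({
    filters: [{ services: ["heart_rate"] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService("heart_rate");
  const characteristic = await service.getCharacteristic("heart_rate_measurement");

  characteristic.addEventListener("characteristicvaluechanged", (event: any) => {
    const value: DataView = event.target.value;
    // Flags bit 0 says whether the heart-rate value is 8- or 16-bit.
    const is16Bit = (value.getUint8(0) & 0x1) === 1;
    const bpm = is16Bit ? value.getUint16(1, true) : value.getUint8(1);
    onBpm(bpm);
  });
  await characteristic.startNotifications();
}

// Usage: watchHeartRate(bpm => console.log(`Heart rate: ${bpm} bpm`));
```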

What might a physiologically empathetic web do, not only to our understanding of how we can interact with the web, but also to how we understand how we feel about others?

Here’s a playful / intriguing experiment from researchers at Carnegie Mellon University that identifies the friends that make you feel comfortable, excited and uncomfortable…

Summing up

So much of our personal success is linked to our ability to understand the emotions of others. So too, I believe, the success of the web is judged by its ability to understand us.

We recognize a fundamental part of our humanity is in our pre-programming to navigate our world with emotional cognition. We have an evolutionary disposition to understand and be understood in emotional terms.

As we look to develop new, more natural, human ways to communicate with the web, there is still much to be imagined, discussed and developed. However, there are many exciting innovations at the frontiers of the web today that point to an amazing future tomorrow.

Importantly, as we witness the technology that continues to enable the building blocks of an emotionally intelligent web, how should we, as creators, respond?

TL;DR

You’ve just gone through c. 50 links to articles, patents and pieces of research as I find new ways to share the amazing things I am discovering as part of my Future of the Web speaking series.

This is article 1 of 4 and there are more sections to come. I’ll be announcing when new materials are complete via Twitter. You can follow me here.

You can also keep track of where I’m speaking next on the Emotionally Intelligent web, plus the Haptic, Adaptive and Invisible web.

And don’t forget

You can join my private mailing list to get one amazing example of something at the frontier of the web every week.

Enjoy yourself a Monday wow moment.

Next Up

1. Emotional (You are here right now)
2. Haptic (coming soon)
3. Adaptive (coming soon)
4. Invisible (coming soon)


Pete Smart

Head of UX and Strategy at multi-award-winning design & innovation lab Fi. Author. Speaker. Travelled 2517 miles to try & solve http://t.co/nrUf9ypjGR