Artificial Empathy Systems and Design Research at Scale

Justin Tauber
Published in UX for Bots · 7 min read · Oct 2, 2017
Design Research at Scale. Art by Nadeem Haidary.

Melbourne was the final stop on this year’s Salesforce Basecamp tour, and the final keynote of the event was given by Shaun Paga from Soul Machines, a New Zealand company specialising in building what they call “Artificial Humans”. These are digital avatars with very detailed faces that respond to the emotional cues of the person they’re talking to.

Soul Machines was founded by Dr. Mark Sagar, who started out modelling body parts for medicine, then went into CGI, and has earned himself two Academy Awards for his work on Avatar and King Kong. After leaving the movie industry, Sagar set himself the task of building Artificial Humans, with the aim of radically advancing the interface between humans and computers.

We’ve known for years that babies’ faces and behaviours have evolved to maximise valuable interactions with adults. Babies are literally designed to elicit care and education. So, if you are intending to advance HCI, it makes sense to start with children, which is what Soul Machines did. And two years ago, they hit a major milestone when they released BabyX.

Here’s a video from Bloomberg of BabyX in action.

Babies have evolved to elicit rich learning interactions with adults, which is just what AI needs

Since then, Soul Machines have worked on adult faces, as well as increasing their fidelity and expressive range by building muscular and skeletal structure into their models. This gives each face a much larger repertoire of subtle gestures, so it can express a wider range of emotions in a more convincing way.

They are now also asking celebrity actors to donate their voices. Cate Blanchett was the voice of the avatar Soul Machines created to support the National Disability Insurance Scheme (NDIS) here in Australia.

What’s interesting about these avatars is their ability to express and elicit empathy.

Now, I find the term "Artificial Humans" a little over-baked, and likely to create negative blowback. What I think is interesting about these avatars, and what they have over other Artificial Intelligence systems like chatbots, is their ability to express and elicit empathy in their interactions with people. Hence the title of this post: Artificial Empathy.

There are some big, natural use cases for this technology: customer service and brand experiences in lots of industries, but especially in retail, healthcare and education.

There’s also a big, natural fear that this will lead to massive job losses, as Artificial Humans™ replace Real Humans™. What surprised me was that I felt that fear myself for the first time.

I did not expect that.

Here is an AI that could soon be a more efficient and reliable researcher than a human can be.

As an innovation director on the Ignite team at Salesforce, my job is to lead businesses through a human-centred design process. Because my job is about generating actionable insights from user research, I’ve long believed it was safe from the onslaught of artificial intelligence. That was because a lot of what I do involves generating empathy and transferring that empathy from a research setting into a board room.

I was wrong. Here is an AI that could soon be a more efficient and reliable researcher than a human can be.

An Artificial Empathy system could conduct literally thousands of interviews without getting tired or going off script. And let me be clear: while user research is a wonderful activity — I suggest everyone with any connection to innovation should do it semi-regularly — it is definitely exhausting. I become a zombie after interviewing more than four people in a single day.

It’s not just the listening; it’s the suppression of your own personality that is tiring. You need to be focused on someone else, and curious about everything they say. You need to spot cues of a deeper motivation, and carefully coax the participant into articulating it in their own words. Research is not just having a conversation — it’s a structured and purposeful interaction.

So, first of all, I can see that an artificial empathy system will be able to execute more research interviews than I ever could. But at the same time, the artificial intelligence behind it will be able to identify patterns in people’s responses more rapidly, and with more statistical rigour, than I can.

Shushila — one of two new artificial humans revealed by Soul Machines in July 2017

As regular humans, we struggle to process large volumes of information — imagine searching for patterns of response across 60 or 80 hours of research videos. But an artificial empathy system that captures time-coded verbal responses and emotional states, backed by something like IBM Watson, could process thousands of such videos in a matter of hours, and provide a quantitative justification for its findings.
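To make that concrete, here is a rough sketch of what the aggregation step might look like. Everything in it (the Moment schema, the emotion labels, the function name) is my own invention for illustration, not how Soul Machines or Watson actually represent things.

```python
# A purely hypothetical sketch: aggregating time-coded emotional states
# across many interview recordings to back a finding with numbers.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Moment:
    """One time-coded annotation from a single interview (invented schema)."""
    timestamp: float   # seconds into the recording
    question_id: str   # which discussion-guide question was being answered
    transcript: str    # what the participant said
    emotion: str       # e.g. "frustration" or "delight", read from face and voice

def emotion_profile(interviews, question_id):
    """Count the emotions expressed in answers to one question, across all interviews."""
    counts = Counter()
    for moments in interviews:
        for m in moments:
            if m.question_id == question_id:
                counts[m.emotion] += 1
    return counts

# If emotion_profile(corpus, "q7_onboarding") comes back dominated by
# "frustration", the system has a quantitative basis for a qualitative claim.
```

A real system would obviously be fed by video analysis rather than hand-coded labels, but the aggregation is the part that scales well past what a human can hold in their head.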

So, as well as beating me on research execution, it seems that an artificial empathy system could beat a human at research analysis too.

Artificial empathy systems might empower design researchers rather than replace them

Now the standard response to the fear of job losses from artificial intelligence is to talk about the augmented worker, rather than the replaced worker. Peter Schwartz (our SVP of Strategic Planning, who’s famous for his work on two other great movies: Minority Report and War Games) was in town just last week, talking about the Future of Work in just these terms. The idea is that, instead of replacing jobs, AI will replace tasks, and free up human labour to do more valuable work.

So, swallowing our fears, let’s take a look at how these artificial empathy systems might empower design researchers rather than replace them. And as I suggested above, it all comes down to scale.

Here’s what I think Human-Centred Design Research might look like using an Artificial Empathy system dedicated to that purpose:

  1. Jane drafts a discussion guide — maybe general discussion topics and a few specific questions
  2. Her AE suggests edits, based on questions that have elicited richer responses in the past
  3. Jane drafts a screener for the kind of participants she is looking to talk to
  4. Her AE scours the customer database, starting with those who have their social network connected, to identify specific behaviours or interests, and then extrapolates from there to produce a large sample set (say 1000 customers)
  5. Her AE then runs an initial segmentation algorithm, to identify other key differences within this set that may be significant
  6. Her AE then uses a marketing automation tool to recruit and schedule a first round of 10–12 interviews, ensuring good coverage of the segments it found
  7. Jane conducts the first 6 interviews, with her AE watching. Jane is effectively training the AE on how she’d like the interview conducted.
  8. After each interview, her AE reports its analysis of that interview. They remove ineffective questions from the discussion guide and add others to probe interesting responses.
  9. The last 6 interviews of the first round are conducted by the AE, with Jane watching. Jane is able to mark and annotate moments in the interview that she finds interesting (or uninteresting) to help train the AE further in what sort of insights Jane is looking for.
  10. They do a final review of the discussion guide together.
  11. The AE then takes over, and conducts the remaining 988 interviews across different timezones, in different languages. It might even conduct multiple interviews at the same time.
  12. Each day, Jane reviews the findings as they develop. Given the scale, she is able to shift or even split the focus of the research as new patterns come to light, adding in variations of sections for particular segments within the sample. For example, people who respond in a particular way to one question are asked more questions on that topic (see the sketch after this list).
  13. Finally, the AE provides Jane with a draft overview of her presentation back to the board. It also provides a list of suggested snippets from the videos to illustrate each finding.
  14. Jane edits and adds to this presentation by querying the AE, based on what she knows will motivate the different board members to act.
  15. Jane’s presentation to the board is compelling, but the board members also retain access to the AE and to the raw video data, e.g. to see the snippets in their context, or explore other insights.
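None of this tooling exists yet, so take the following as a sketch rather than a design. It illustrates the mechanism in step 12: a participant’s answer to one question unlocks extra questions on that topic. The question IDs, the trigger rule and the wording are all invented for the example.

```python
# Hypothetical sketch of step 12: adaptive follow-ups at scale.
# A crude keyword trigger stands in for the AE's real analysis of the answer.

FOLLOW_UPS = {
    "q3_payments": {
        "trigger": lambda answer: "confusing" in answer.lower(),
        "extra_questions": [
            "Can you walk me through the last payment you made?",
            "What would have made that step clearer?",
        ],
    },
}

def plan_follow_ups(question_id, answer):
    """Return any extra questions this answer should unlock for this participant."""
    rule = FOLLOW_UPS.get(question_id)
    if rule and rule["trigger"](answer):
        return rule["extra_questions"]
    return []

# A participant who calls the payment flow "really confusing" gets two
# extra payment questions added to the remainder of their interview.
print(plan_follow_ups("q3_payments", "Honestly, it was really confusing"))
```

Run across a thousand interviews, even a simple rule like this lets the study adapt itself mid-flight in a way a lone interviewer never could.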

There is still a lot of skill involved in Jane’s role here. Jane can build herself a reputation for being smart about identifying what to study, and deciding what audiences to focus on. Also, being a better researcher herself means she’ll train her AE to ask better questions in a better way. Jane also plays a critical role in guiding her AE about what sorts of responses are interesting, providing critical data points to constrain analysis. Her understanding of humans and their behaviour is still critical, but she can now research on a whole new scale.

This also frees Jane to focus her time on deeper research methods, like shadowing customers in their own environment, and she can take the behavioural insights she gains from those ethnographic methods back to her AE to confirm or enrich them at scale.

Will the era of Big Data give way to an era of Big Empathy?

Scale is important. As Design Researchers, we’ve struggled for years to remain relevant with the rise of Big Data. We’ve argued that data will tell you what people are doing, but rarely tell you why. And it’s in the why that you find opportunities to do human-centred innovation, rather than just optimisation.

Perhaps when the Artificial Empathy systems being built by Soul Machines become commonplace, the era of Big Data will give way to the era of Big Empathy. I think that’s something we might look forward to.

If you enjoyed this piece, please clap!

If you’d like to continue the conversation, please get in touch via Twitter or LinkedIn.
