The Coming Age of Empathic Computing

Mark Billinghurst
Super Ventures Blog
9 min read · May 4, 2017


Over 70 years ago, psychologist Alfred Adler famously described empathy as “seeing with the eyes of another, listening with the ears of another, and feeling with the heart of another” [1]. Technology has now advanced to the point where it can help people have greater empathy for one another. In this article we describe the emerging field of Empathic Computing.

Technology Trends

Since the 19th century, there have been a number of important trends in computing, communications, and content capture. Over the last 60 years, as computers have become more powerful, interacting with them has changed from punch-card and keyboard input to natural speech and gesture, with ongoing research into thought-controlled computing. This represents a broad trend from Explicit Interaction (you tell the computer what to do) to Implicit Understanding (the computer observes your behaviour and implicitly understands what you’re doing).

Interaction Trends Over Time, from Explicit Input to Implicit Understanding

In communications, 40 years ago a person would have been lucky to have a 100 bit/s long-distance network connection, while today 100 Mb/s is common and some networks provide Gigabit connections. This dramatic increase in network speed has changed the type of communication possible, from text only, to audio, video, and now 3D avatars. Faster networks mean a trend towards rich Natural Communication, and Google, Facebook, and others are researching ways to extend connectivity globally.

Networking Trends Over Time and the Communication it Enables

In terms of content capture, over the last 150 years technology has evolved from capturing single pictures, to recording moving images, live video, and now 360 immersive content and 3D space scanning. Companies such as Ricoh make inexpensive 360 video cameras, Samsung’s Project Beyond allows people to capture immersive stereo scenes, and Google’s Project Tango can be used for 3D space capture. Overall, there is a trend towards complete Experience Capture.

Evolution of Content Capture Technology Over Time

Taken together, ongoing advances in computing, communications, and content capture will dramatically change how people connect with each other. For example, Microsoft’s Holoportation records 3D copies of people in real time and uses high-bandwidth networking and Augmented Reality to bring a virtual copy of them into a person’s real world [3]. Technology like this means that talking to a remote person will one day be almost as natural as having a face-to-face conversation, or even better.

Microsoft’s Holoportation

At the overlap of these trends is the emerging field of Empathic Computing, a new direction for computer interfaces that could enable people to communicate in new ways. Based on Adler’s original description, we can define Empathic Computing as computer systems that create deeper understanding or empathy between people. This requires a combination of rich natural collaboration, capture of the user’s experience and surroundings, and the ability to implicitly understand the user’s emotion and context.

Empathic Computing: Combining Natural Collaboration, Rich Experience Capture and Implicit Understanding

Empathic Computing

Talking about films, critic Roger Ebert said “I believe empathy is the most essential quality of civilisation” [2] and described how movies invite people to empathise with others. Clearly great movies can create a strong sense of empathy, but new types of interactive technology can generate even more powerful empathic experiences.

In general, Empathic Computing systems can be broken down into three types:

  1. Understanding: Systems that can understand feelings and emotions.
  2. Experiencing: Systems that put people into the recorded world of others.
  3. Sharing: Systems that share the real time experience of others.

Important research has been conducted in each of these areas by groups from around the world.

Understanding

In the area of Understanding, research has been conducted for many years on how to develop systems that can recognise a person’s emotional state. This is often called Affective Computing [5], a term coined by Professor Rosalind Picard and defined as “computing that relates to, arises from, or deliberately influences emotion or other affective phenomena” [4]. Affective Computing systems typically use a range of sensors to capture a person’s physiological state, software to process the captured data, algorithms to map this onto an affective state, and finally an application that responds to the recognised emotion.
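
To make that pipeline concrete, here is a minimal sketch of the four stages in Python. Everything in it is hypothetical: the simulated sensor readings, the threshold rule standing in for a trained emotion classifier, and the response. The structure, though, mirrors the capture, processing, affect-mapping, and application stages described above.

```python
import random
import time

# A minimal sketch of the four-stage Affective Computing pipeline:
# sensor capture -> signal processing -> affect mapping -> application.
# All values and rules here are hypothetical; a real system would read
# from physiological sensors and use a trained emotion classifier.

def read_sensors():
    """Stage 1: capture raw physiological signals (simulated here)."""
    return {"heart_rate": random.gauss(72, 8),
            "skin_conductance": random.gauss(2.0, 0.5)}

def process(raw, baseline_hr=70.0):
    """Stage 2: clean and normalise the raw signals into features."""
    return {"hr_delta": raw["heart_rate"] - baseline_hr,
            "sc_level": raw["skin_conductance"]}

def map_to_affect(features):
    """Stage 3: map features onto a coarse affective state.
    A toy threshold rule stands in for a trained classifier."""
    if features["hr_delta"] > 10 and features["sc_level"] > 2.5:
        return "stressed"
    return "calm"

def respond(state):
    """Stage 4: the application reacts to the recognised state."""
    if state == "stressed":
        print("Detected stress: suggesting a short break.")

if __name__ == "__main__":
    for _ in range(5):
        respond(map_to_affect(process(read_sensors())))
        time.sleep(1)
```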

There are many examples of Affective Interfaces. For example, Professor Jun Rekimoto developed a system where a person had to smile to use common household appliances, such as a fridge [6]. This Happiness Counter [7] was based on the idea that if people act happy then they will feel happier, and it used a simple camera-based smile detector to measure when people were acting happy. Picard’s own Affective Computing research group at the MIT Media Lab has developed many Affective Computing projects, such as systems that automatically recognise stress in real-life situations.
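
The Happiness Counter’s actual implementation is not described here, but a camera-based smile detector in the same spirit can be sketched with the stock Haar cascades bundled with the opencv-python package; the detection parameters below are illustrative and would need tuning on a real appliance.

```python
import cv2

# A rough sketch of a camera-based smile detector in the spirit of the
# Happiness Counter, using the Haar cascades bundled with OpenCV.
# Detection parameters are illustrative, not tuned values.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                      minNeighbors=5):
        face = gray[y:y + h, x:x + w]
        # A high minNeighbors value keeps only confident smile detections.
        smiles = smile_cascade.detectMultiScale(face, scaleFactor=1.7,
                                                minNeighbors=20)
        if len(smiles) > 0:
            print("Smile detected: unlock the appliance!")
    cv2.imshow("preview", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```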

Rekimoto’s Happiness Counter

These are typical of Affective Computing interfaces in that they are designed to recognise and respond to the emotion of a person interacting with the system. Research in this area has led to the development of physiological sensors and software algorithms for emotion and affect recognition.

Experiencing

A second type of Empathic Computing system allows people to view the recorded or simulated experiences of others. Artist Chris Milk describes Virtual Reality as the “ultimate empathy machine” because it immerses users in another person’s view, and so helps them gain empathy for that person and their situation. For example, he created the immersive movie Clouds Over Sidra, which lets a person visit the Zaatari Refugee Camp in Jordan and see it through the eyes of a 12-year-old girl who lives there.

In a similar way, VR pioneer Nonny de la Peña creates interactive 3D immersive virtual worlds that recreate current events. She worked with USC to create Project Syria, in which people can enter a virtual copy of a Syrian market and experience a devastating terrorist attack, modelled on real news footage of an actual bomb blast. In this way VR enables people to empathise with those caught in the Syrian conflict in a way that is impossible when viewing the events on TV or reading about them on the web.

A Virtual Recreation of a Terror Attack in Syria

Virtual Reality is ideally suited for Empathic Computing because the whole goal of VR technology is to fully immerse people in simulated experiences. There are hundreds of VR applications that allow people to experience things they would never see or do otherwise, such as visiting Ancient Rome, becoming an astronaut, mountain climbing, or solving a murder mystery.

Sharing

The final type of Empathic Computing system allows a person to share the live experience of others. The aim is to let a person see what someone else is seeing and understand what they are feeling in real time. Augmented Reality, wearable computing, and body-worn sensors are key technologies for making this happen.

There are many examples of wearable systems that can live stream video to a remote collaborator. For example, the Google Glass wearable computer included a forward-facing camera that enabled a user to share video of what they were doing from a first-person perspective. A more advanced version is Jun Rekimoto’s JackIn project, which shares immersive 360 video from one user to another [8]. The sender (the Body User) wears several cameras on their head and streams omnidirectional video to the receiver (the Ghost User), who views the live video in a VR display and so feels like they are seeing the world from the sender’s perspective. In 2016 the JackIn system was used to allow people to experience running the Tokyo Marathon while sitting on their couches [9].
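
JackIn itself stitches video from several head-mounted cameras into an omnidirectional stream, which is well beyond a blog sketch. But the core idea of live first-person view sharing can be illustrated with a webcam, JPEG compression, and a plain TCP socket; the hosts, ports, and quality setting below are placeholders, not anything from the JackIn system.

```python
import socket
import struct

import cv2
import numpy as np

# Illustrative one-way first-person view streaming: the "Body User" sends
# webcam frames, the "Ghost User" views them. Not JackIn's implementation.

def _recv_exact(sock, n, buf):
    """Read exactly n bytes from the socket, returning (payload, leftover)."""
    while len(buf) < n:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf[:n], buf[n:]

def run_body(port=9999):
    """Capture webcam frames, JPEG-encode them, send them length-prefixed."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", port))
    server.listen(1)
    conn, _ = server.accept()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        data = jpg.tobytes()
        conn.sendall(struct.pack(">I", len(data)) + data)

def run_ghost(host="127.0.0.1", port=9999):
    """Receive the stream and display each frame from the Body User's view."""
    sock = socket.create_connection((host, port))
    buf = b""
    while True:
        header, buf = _recv_exact(sock, 4, buf)
        (size,) = struct.unpack(">I", header)
        payload, buf = _recv_exact(sock, size, buf)
        frame = cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow("Body User's view", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
```

Running run_body() on the sender’s machine and run_ghost(host=...) on the viewer’s gives a crude Glass-style share; a real system would use proper video codecs and handle latency and reconnection.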

The JackIn System, Streaming Immersive Video from One User to Another

JackIn provides one-way video sharing, but The Machine to Be Another art project uses two-way video sharing to enable two people to feel like they are inside each other’s bodies. Each person wears a head-mounted display with cameras on it, and the video feeds from the two sets of cameras are swapped and shown in the other person’s HMD. This allows the artists to explore interesting questions about what it means to swap gender, body type, or ability.

Examples of the use of The Machine to Be Another

These examples show how systems can be developed that enable a person to see from another person’s perspective. At the Empathic Computing Laboratory (ECL) we are seeking to go beyond this by enabling people to also share their feelings and non-verbal communication cues. For example, the Empathy Glasses project [10] combines a head-mounted display with eye-tracking hardware and sensors for detecting facial expression to enhance remote collaboration. Using this hardware, a remote user can see where a local user is looking and how they are feeling, and so help them better complete a real-world task. Other ECL projects explore how sharing gaze and physiological cues can create mutual emotional experiences [11], and how hearing the heartbeat of another person in a shared VR environment can create a deeper connection with them [12].
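
As a rough illustration of what sharing these cues could look like at the protocol level, the sketch below broadcasts a small JSON message containing a gaze point, an expression label, and a heart rate over UDP. The message format, field names, and addresses are invented for this example and are not the Empathy Glasses’ actual protocol.

```python
import json
import socket
import time

# Hypothetical cue-sharing message: a normalised gaze point, a facial
# expression label, and a heart rate, sent over UDP. Field names and
# addresses are invented for illustration.

def send_cues(addr=("127.0.0.1", 9000)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = {
        "t": time.time(),
        "gaze": [0.42, 0.37],   # normalised (x, y) in the scene camera image
        "expression": "smile",  # e.g. from a facial-expression sensor
        "hr_bpm": 74,           # e.g. from a chest-strap heart-rate monitor
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), addr)

def receive_cues(port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(4096)
        cues = json.loads(data)
        # A remote collaborator's UI could overlay the gaze point on the
        # shared video and show the expression and heart rate beside it.
        print("gaze=%s expression=%s hr=%s bpm"
              % (cues["gaze"], cues["expression"], cues["hr_bpm"]))
```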

Using the Empathy Glasses to Enhance Remote Collaboration

These examples show how Empathic Computing systems for Sharing experiences have the potential to enhance empathy, because they enable a person to see through the eyes of another and understand their feelings. There has been a lot of earlier work on the first two types of Empathic Computing (recognising emotion and creating emotive experiences), but much less on how to create shared experiences, so this is a topic where a lot of work remains to be done.

Conclusion

Previous research in collaboration has often focused on enabling remote people to communicate as easily as if they were face to face. Empathic Computing goes beyond this by helping people better empathise with one another and share what they are seeing, hearing, and feeling. Advances in Affective Computing, Virtual Reality, and Augmented Reality make it possible to create different Empathic Computing experiences and explore the Empathic Computing design space. However, significant research still needs to be done in a number of areas, such as how to capture and convey what someone is seeing, how to measure emotional state, and how best to share this with another person. Conducting this research will enable people to connect in ways never before possible.

References

[1] Clark, A. J. (2016). Empathy and Alfred Adler: An Integral Perspective. The Journal of Individual Psychology, 72(4), 237–253.

[2] http://www.rogerebert.com/rogers-journal/cannes-7-a-campaign-for-real-movies

[3] https://www.microsoft.com/en-us/research/project/holoportation-3/

[4] Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press.

[5] https://en.wikipedia.org/wiki/Affective_computing

[6] Tsujita, H., & Rekimoto, J. (2011, September). Smiling makes us happier: enhancing positive mood and communication with smile-encouraging digital appliances. In Proceedings of the 13th international conference on Ubiquitous computing (pp. 1–10). ACM.

[7] https://lab.rekimoto.org/projects/happinesscounter/

[8] Kasahara, S., & Rekimoto, J. (2014, March). JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation. In Proceedings of the 5th Augmented Human International Conference (p. 46). ACM.

[9] https://josephta.me/en/tokyo-marathon-2016/

[10] Masai, K., Kunze, K., & Billinghurst, M. (2016, May). Empathy Glasses. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1257–1263). ACM.

[11] Ayyagari, S. S., Gupta, K., Tait, M., & Billinghurst, M. (2015, April). CoSense: Creating shared emotional experiences. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 2007–2012). ACM.

[12] Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M. (2017, May). Effects of sharing physiological states of players in a collaborative virtual reality gameplay. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4045–4056). ACM.
