Endangered Empathy

This is a thought piece on some ideas I have gathered since I started designing things for people.

James Z. B. Vanié
4 min read · Apr 6, 2016

At the risk of being categorized as a modern Luddite, I’m drawing a line in the sand between emerging technologies and humanity. We’ve made great leaps in designing systems that mimic the brain’s neural networks, but are they intelligent in the ways that matter? We have yet to understand each other as co-workers, as neighbors, as nations. Can we expect algorithms to show empathy? When Google’s flawed April Fools’ prank and Microsoft’s chat bot both missed the mark on automating emotion last week, it put into perspective the direction we are headed as an industry. To be clear, I am not talking about all forms of technology and AI, only the ones that attempt to mimic the right side of the brain: you know, the side that makes us creative, empathetic, and imaginative.

The design industry is not truly empathetic

The “empathy” that designers claim to practice is really branded sympathy. Let me illustrate this with stick figures:

User research can be awkward. Credits to Cyanide and Happiness for the comic inspiration.

To give this scene its due, the designer has good intentions. Once the interview is complete, he sympathizes with his interviewee (who is no longer part of the conversation) by asking, “what can we make for him that makes sense for us?” The point I am getting at is that there is an inherent disconnect between the conversations designers have with people and the conversations those people want to have with each other.

The things we (designers) design are based partly on customer insight, but also on business goals, competition, available resources, and the ever-growing demand for incremental innovation (a blog post of its own). Our web habits, moods, interests, likes, and retweets are being commodified to inform and perpetuate our experience. How much control do we really have over what we see and desire? The more I examine the design industry, the more parallels I see with advertising: creating a lot of things (products and services) that foster dependency and planned obsolescence (digital and physical).

Designers act like psychiatrists and psychologists, using active listening when talking to customers. The difference between us and them is how the shared interest between entity and customer is split. On the mental health side, the interest lies solely in the well-being of the patient (the customer). In the design industry, a handful of shareholders and external factors influence the rationale behind the experiences created for customers.

Now let’s look at my example of empathy:

Which scene is closer to your definition of empathy?

While the customer has free will to pick and choose what they interact with, there is a lack of transparency from the customer’s point of view. They’re not asking and we certainly aren’t telling.

The internet of “things”

To distill everything above: designers introduce things into the world with the intention of creating coherence. Most of these things fall significantly short of expectations or, worse, further complicate the lives of their target customers, causing incoherence. It seems that designing from this sympathy creates a psychological dissonance between humanity and the essence of a moment. If empathy can come in the form of a physical or digital product, please link it to me in the comments section. I feel that only a small handful of products and services are not exploiting customers’ emotions or data in some way.

If you don’t like the card, we’ll iterate on another. Hope we didn’t make too much of a mess. #empathy

By sympathizing, we undermine vulnerability

The American physicist David Bohm explains how we are unaware of our thoughts in his book On Dialogue:

We could say that practically all the problems of the human race are due to the fact that thought is not proprioceptive. Thought is constantly creating problems and then trying to solve them. But as it tries to solve them it makes them worse because it doesn’t notice that it’s creating them, and the more it thinks, the more problems it causes, because it’s not proprioceptive of what it’s doing.

If human thought is not proprioceptive (self-aware), stumbles through the bliss of ignorance, and rationalizes harm, what is the opportunity cost of designing a system that does the same? Is there an alternative direction to consider?

Designing our fate?

Already, our generation is beginning to choose technology over human conversation. What was once awkward silence has turned into “let me check my notifications.” When we shy away from the unpredictable (conversation) and retreat to the predictable (the iPhone), we sacrifice possibility for comfort. Make no mistake: artificial intelligence will one day be self-aware. And if we continue to automate our world based on sympathy and ulterior motives, I fear that our ability to empathize will be severely impaired.

Please share any thoughts below. Thank you for reading.
