Photo by Ben Husmann on Flickr

How I learned to love robots — and you can too.

Peter Sigrist
Aug 24, 2015

When I was at university, studying free will (part of my degree in what the University of Edinburgh used to call Mental Philosophy), I was completely convinced that there was something categorically different about the human condition from anything else in the universe. Now, I believe my own perspective is only categorically different for me; that this is about as far as our uniqueness goes. From a third-party perspective, I don’t think there’s anything categorically human about humans, except in the eyes of other humans.

If your journey is anything like mine, here is how it will go…


You start out thinking that there’s something different about people from all the other entities in the universe. You believe that we alone have the capacity for empathy and compassion that generates empathy and compassion in others.

But over time, you begin to find many counterexamples that run against that intuition — look at the way people bond with pets, for example; look at Dian Fossey and her interactions with gorillas; look at the way dolphins are used in therapy. Then you look at the way some humans are incapable of empathy and compassion, and you begin to prise apart these two things: human beings and human empathy and compassion. As these concepts are separated, you begin to think — well, if empathy and compassion are not objective states of affairs of the universe, then what are they?

When you think about why we so readily connect human beings and human compassion, you realise we take a great deal of comfort in certain beliefs, even though they are implausible. This is not to say they are provably false (indeed, in Karl Popper's terms, they are not even falsifiable), just that it takes peculiarly opaque thinking to conclude that they are true. One classic example is the existence of a transcendental deity or the worldly effects of such a deity. While such a "fact" certainly makes people feel better, it is either tremendously unlikely, as was argued by David Hume in his once-censored Of Miracles, or is without evidence, as was argued by Richard Dawkins in his book The Blind Watchmaker. The objectivity of our human condition in the universe is similarly unfalsifiable yet, again, makes us feel better.

Along this journey, there are loose ends that seem to stand in opposition to the direction of travel. One of the biggest is free will itself. It seems more plausible to believe in free will than not, because you can intuit it every single time you think, can you not? You think, "I want to lift my arm," and, hey presto, you lift your arm. However, at some point you discover theories of how free will is possible even if the universe — including what you call your mind or soul — is unfolding according to a set of determinable rules. The entire universe, including the workings of our brain, can be both predetermined and yet unpredictable. Quantum mechanics teaches us this, and Roger Penrose has done amazing work exploring how it relates to human intelligence and free will.

Now you begin to look at this from the other point of view. You think, so what if it is hard to pin down what this human dimension is, in acts of empathy and compassion? Are computers or robots ever going to be able to exhibit such humanity? But now you begin to see how computers and robots are becoming able to exhibit very human-like behaviours. Whether it’s financial journalism, which is now being routinely delivered by robot writers; or the emergence of automated customer service bots that seem to understand our questions with ease; or even the many videos online of clever robots that can mimic human faces; whenever you experience a little bit of artificial intelligence, don’t you sense something “human” about it? If you allow for this, over time, you may suspect that our emotional response to what we call humanity may be little more than a complex compound of stimulus and response. Here is an example of what I mean — how does seeing this robot face make you feel — once you overcome the creepiness of it?


Eventually you begin to accept the possibility that the only thing that gives us cause to believe in human empathy and compassion (over and above the behavioural) is that we, as independent agents, each believe we see such humanity in the person exhibiting those behaviours. Specifically, we identify with these behaviours implicitly because we are hard-wired to do that, even though it does not follow that there is such a thing as compassion over and above a set of behaviours. This is anthropomorphism. For an example of how we all too easily ascribe human-like emotions or thinking to non-humans, listen to this BBC documentary on B. F. Skinner and his superstitious pigeons.

On your journey, at this point, you conclude it’s not empathy or compassion that we believe in, but same-as-me-ness. And what is most remarkable about this journey is that this is not a depressing realisation; it’s enlightening and inspiring! We are capable of showing both empathy and compassion to others. And it is even better that these can be decoded because it means that, in future, these soft interactive and behavioural techniques will make it into all of our human interfaces with technology. All those services that are currently provided by technology — from internet searches and computer games to medical or social care and tax returns — will become profoundly more rewarding, pleasant and humane than anything we have been used to in history.

And if you want to see how social care can be delivered by a robot, I highly recommend the movie Robot & Frank.



Looking out for medium-sized views on PR, communications, digital culture and education. Managing director of the technology and innovation practice at BCW.