The sci-fi geek urging us to question our relationships with intelligent machines

University of Cambridge
This Cambridge Life
5 min read · Feb 22, 2018


Digital anthropologist Beth Singler looks at how ideas of science and religion are woven into our lives through the stories we tell. She’s particularly interested in our relationship with increasingly human-like machines. Will they replace us or enhance our lives?

Credit: Nick Saffell

Early one morning last November I stood in the foyer of BBC Broadcasting House. I was there as one of the guests on BBC Radio 4’s Start the Week, a discussion programme presented by Andrew Marr.

The conversation was about pain. I was invited because I’d won the AHRC Research in Film award for our short film Pain in the Machine, made at the Faraday Institute for Science and Religion, with Dr Ewan St John Smith of the Department of Pharmacology, and produced by Little Dragon Films.

Pain in the Machine

There’s huge public interest in robots and how they might change our world. From the 1970s onwards, robotics revolutionised manufacturing, replacing humans in simple, repetitive tasks. Advances in technology are bringing us closer to a new generation of robots that do tasks in an intelligent way — they learn.

My current research looks at human identity in an age of nearly-human machines. It explores the social and religious implications of technological advances in artificial intelligence (AI) and robotics. The film that won the award is one of four films that explore the questions raised by these developments. The next one looks at questions around companionship and friendship.

Friend in the Machine

A turning point in public awareness of just what AI can do occurred in March 2016. AlphaGo, an AI made by Google DeepMind, defeated South Korean grandmaster Lee Sedol at the ancient game of Go. Almost 20 years after IBM’s Deep Blue computer beat Garry Kasparov at chess, we were seeing a machine that could outplay a human in a game that requires a complex level of strategy.

No-one doubts that AI will play an increasing role in our lives. It’s more a question of how it meshes with our belief systems and worldviews. When people think of robots they typically think of the ‘terminator narrative’ from sci-fi movies or cute robots that might do tasks in the home. But AI is far more pervasive than that, and potentially disruptive to society.

I’m interested in the big ethical debates that these developments raise. In the past, autonomous robots were the stuff of sci-fi. We’re now at a point in history when AI is part of our lives through social media and advertising.

We’re already having heated discussions about driverless cars — and how they might make decisions that conflict with those that humans might take. The ‘trolley problem’ has become a popular trope for thinking about this kind of thing but doesn’t get us much closer to answers.

The nature of work is changing. AI or robots will certainly replace humans in an increasing number of jobs. Already AI is quicker and more accurate than people at doing tasks such as pattern recognition — for example in cancer diagnosis. Will this free us up to do more interesting things with our lives — or will it leave people living purposeless, unfulfilled lives?

We need to think deeply about our relationship with machines. That includes our feelings about them. I do a lot of public outreach — talking at science festivals and so on. Giving talks lets me engage directly with audiences; I believe in sharing my research as widely as possible and canvassing responses.

One of the best questions I’ve been asked came from a ten-year-old boy. He was in the audience at a talk I gave at the Hay Festival. I’d been talking about the Uncanny Valley — a term used to describe the sometimes eerie feeling we can get when looking at humanoid robots that are like us, but not enough like us. He asked whether in the future robots might experience the same feeling when looking at us.

My answer was that it might happen one day. I congratulated him on working from the opposite point of view and thinking about how robots might experience us. Such questions also raise the issue of robot rights — if we are aiming for human-like technology, should they also have our rights? We assume they should work for us, but with high levels of intelligence and autonomy, perhaps they should be free not to?

I’m a huge sci-fi geek. From my early teens onwards I devoured sci-fi books and movies. I read Isaac Asimov, Robert Heinlein and Charles Stross. I watched every Star Trek series and Star Wars film. At school in Portsmouth I particularly enjoyed Religious Studies, which was taught by a wonderful teacher called Reverend Grindell.

Religion, like sci-fi, involves story-telling. Reverend Grindell opened the eyes of his class to the diversity of ideas about the existence of higher beings. I was planning to take a degree in screen-writing but I decided to apply to Cambridge instead. My parents, neither of whom went to university, also advised me to get a degree and then think about going into film later on.

I took a degree in Theology and Religious Studies at Pembroke College. I got into student play-writing as a way of exploring ideas about human interactions. In my final year I wrote a dissertation on modern pagan witchcraft. It was exciting to be doing some original research into a topic that hadn’t been studied much by academics at that point. I became aware of how new religious movements use the internet as a powerful platform for finding each other and building communities.

After several years in London, working as a freelancer in the film industry, I returned to Cambridge to take a PhD. My thesis looked at the Indigo Children — one of the new religious groups that’s active online. Indigo Children see themselves as a spiritually, psychically and genetically advanced generation — one that’s here to usher in a new golden age by changing the way we see each other.

To work as I do, with speculative ideas on the boundaries of science and religion, you have to be open-minded. Cambridge offers an extraordinarily rich and supportive environment in which to explore ideas that cross those boundaries, as well as contemporary trends and movements on social media.

Beth Singler is at the Faraday Institute for Science and Religion and at Cambridge University’s Leverhulme Centre for the Future of Intelligence.

Read more about Cambridge University’s AI research in our research magazine: download a pdf; view on Issuu.

This profile is part of our This Cambridge Life series.
