Fifteen minutes with leading #AI specialist JOANNA BRYSON

Christoph Auer-Welsbach
Applied Artificial Intelligence
7 min read · Jan 10, 2018

Fake news, the limits of human evolution and why robot rights are dangerous

“How could you be anything but enthusiastic about AI?”

That’s Joanna Bryson’s response when I ask about her motivations. Joanna is an associate professor of AI at Bath University, and an affiliate at Princeton University, where she’s based. She took her first course in AI in 1991, having initially got involved in the field in the mid-80s.

At this moment in time, she says, it’s “Trump and Brexit that are motivating me. When he got elected I honestly didn’t drink for the next two months — I felt like I hadn’t been working hard enough! We are a very powerful species and we really need to understand ourselves and our world. It’s not only that I find these things intrinsically interesting; it’s that I believe it’s our duty at this point to understand what is going on.”

I sat down with Joanna to find out more about her work and views.

Tell us what led you to AI, and the inspiration behind that.

My first degree was in psychology. What really interested me was animal behaviour and understanding the intelligence of different species — but it just so happened that I was also a good programmer.

Five years out of college, I had a decent job and life but I didn’t feel fulfilled. I’d always wanted to do a PhD but hadn’t known what in. I was watching a beautiful sunset one day and it dawned on me that I could combine what I was good at (programming) with what interested me (animal — in which I include human — intelligence) and study AI. I got into Edinburgh to do a conversion master’s. There are now master’s degrees at leading universities in the US but, at the time, Edinburgh was the best place in the world to do a master’s in AI.

No-one has asked me about a first inspiration before, but it was actually Jane Goodall, the primatologist, and her work on the differences between a human and a chimpanzee.

The mistake humans make

Since you started out, there have been numerous stages of AI development. How does today compare to the most challenging/progressive times in the field?

People talk about AI winters, but from an academic perspective, it’s only been steady progress. I’ve been phenomenally bad at getting funding my entire career, and yet I still have a job, which must mean that teaching people to program is seen as a fundamentally good thing!

One ongoing issue has been the power of computation. The reason we’re going so fast right now is that computers are faster and cheaper, and we’ve got way more data since the advent of mobile phones.

The major hurdle today is that people over-identify with the concept of intelligence. There’s a lot of smoke and mirrors. I’m trying to do science, but I worry that I’m wasting a lot of time talking to people who hold these marginal ideas, like robot rights.

People think that if something is more intelligent, it must be more human-like and therefore you really can hurt it. They equate being intelligent with being alive and knowing suffering. And this isn’t new. Back in the 80s, there were internet chat rooms where people would say that future robots would be angry at us for turning computers off. If you know anything about computers, how can you think that? Just the other day I had to stop a presentation and pull up one of my simulations for the audience, to show them what AI really looks like. It shows no sign of emotion or suffering. It’s a simulation of something that “eats” and shares information, but you can see all its memory; there’s nowhere in its “mind” for it to know more than the food types in its world. There’s no memory of better or worse times; there’s no basis, space, or computation for suffering.
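
Her on-stage simulation isn’t reproduced with this interview, but the point can be illustrated with a minimal, hypothetical sketch (not Bryson’s actual code; the agent, food types, and method names are invented for illustration): the agent’s entire “mind” is a handful of counters for food types, all of which can be printed and inspected.

```python
# Hypothetical sketch, not Bryson's simulation: an agent that "eats" and
# shares information, whose entire state is shown below. There is no
# variable anywhere in which suffering, fear, or resentment could be stored.
import random

FOOD_TYPES = ["berry", "seed", "leaf"]

class ForagingAgent:
    def __init__(self):
        # The agent's whole "mind": a count of each food type it has encountered.
        self.food_memory = {food: 0 for food in FOOD_TYPES}

    def eat(self, food):
        # "Eating" just increments a counter; nothing is experienced.
        self.food_memory[food] += 1

    def share(self, other):
        # Sharing information copies this agent's counts to another agent.
        for food, count in self.food_memory.items():
            other.food_memory[food] += count

if __name__ == "__main__":
    a, b = ForagingAgent(), ForagingAgent()
    for _ in range(10):
        a.eat(random.choice(FOOD_TYPES))
    a.share(b)
    # Every bit of state the agents have is visible here:
    print(a.food_memory, b.food_memory)
```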

People find it very difficult to see how you can be intelligent and not suffer or have emotions. But of course there’s so much more to being a human than intelligence. We are essentially chimpanzees that are really good at computing because we’re really good at communication and cooperation. All apes are good at computing as individuals, but they don’t get as much from it because they don’t share outcomes with each other. We do share outcomes, and now we’ve built machines that can share everyone’s information, which is both wonderful and scary.

There was recently a study (of sorts!) comparing expert AI tweeters like you with general AI tweeters who spout received wisdom on AI. What impact do the latter have on your work, and does it mean you spend time on damage limitation?

There are a couple of things to say here. First, because people identify so much with AI, you can find plenty of true experts who still talk rubbish — “we’re designing a neural network and if we run it long enough, we’ll get a separate species”, that sort of thing. And people who really should know better still talk crap to get money. So it’s really hard to draw a line between “experts” and “non-experts”.

Second, this isn’t a problem specific to AI; it’s a problem with fake news. If all you’re trying to do is build something to be retweeted, then of course that’s going to be more retweetable than if your primary aim is to communicate something about the world. If you’re trying to do the latter and get it retweeted, you’re trying to achieve two things at once.

For scientists, the most important thing is to be correct; being wrong means losing prestige and position. Watering down a message to make it more retweetable is contrary to our aims.

Selling anthropomorphism

You argue that robots are manufactured artefacts, hence our responsibility as humans. You have also said that we author robots, which is fundamentally different to how we, say, rear children. To me, it seems that commercial applications are deliberately trying to mimic human traits to trigger people’s emotions. How do you see this trend playing out?

Seven years ago, two British research councils brought together a bunch of people (I was one of them) who developed something called the Principles of Robotics. The fourth principle is that robots should not be anthropomorphic; their machine nature should be apparent. But you’ll hear arguments that anthropomorphic robots are what people want, so that’s what we should build. It’s like cigarettes — how long will we allow ourselves to be addicted this time?

The ideal with anthropomorphic robots is that you get utility from them, even emotional utility, while remaining aware that it does a robot no harm to be abandoned while you go on vacation. We should treat these robots as we do movie characters. You know that the movie character and the actor are not the same. You know that explicitly, but implicitly you don’t: you feel things towards the actor that are derived from your experience of the character and the movie.

On the point about the distinction between rearing children and authoring robots: humans are not the peak of evolution; we’re just a certain kind of organism. Children become what we are, and we are going to die. Robots don’t care about losing ten years in jail; they don’t necessarily have time constraints. You can influence your children, but so will their biology and the culture in which they grow up. Robots are restricted both by the laws of computation and by the very fact that they’re built by humans, who have millions of restrictions. But you can have a lot more influence over a robot than over a child, because you can make sure there are no other inputs.

Ultimately, we need to remember that we are building commercial products. We could build a robot that suffers like humans, but why would we do that? We don’t even take care of each other. We’d create suffering and introduce competition for “human” rights. We should take care of our own species and all those that are going extinct first, not allow commercial products to compete for human attention. What Saudi Arabia has done in pretending to grant human rights to a robot isn’t ridiculous, it’s hazardous — because it demonstrates how badly people want that reality.

“Good vs evil”

When we think about building AI, we ask questions about robots gaining too much power and becoming bad. There is no reason why AI should do that, is there?

No. As humans, we’re obsessed about things going wrong. There are 8 billion people on this planet and although there is a big combative element to how we’ve evolved, there is also a big cooperative element — which is what many people reading Darwin miss. Life is, at its foundation, cooperative. Even if it were true that to be intelligent is to be human, most humans don’t try to take over the world. The fraction that do only increases when there’s not enough stuff — when resources are scarce. One of the things we should be trying to do is to figure out how to live sustainably, because that will reduce a lot of the competition we’re currently experiencing.

What can people in the field of AI do to ensure it develops in an optimal direction?

There’s no one solution to this. It’s important to go into educational institutions and engage people there. Joining a standards effort is also something people can do. And when legislators and politicians call, talk to them.

I don’t mean we all have to do everything. We don’t all need a Twitter account; we should specialise in what we’re good at. But do make an effort. And trust other people’s opinions of you. I’ve recommended people for public speaking who didn’t want to do it, but they’ve ended up making excellent contributions. If you get called on, go ahead and give it a try.

Christoph Auer-Welsbach
Applied Artificial Intelligence

Venture Partner @Lunar-vc | Blog @ Flipside.xyz | CoFounder @Kaizo @TheCityAI @WorldSummitAI | Ex @IBM Ventures