Why Self-Driving Cars Will Believe in Gods

John Small
Lotus Fruit
Dec 30, 2019


This is a reprise of a presentation I gave at the 2016 Towards a Science of Consciousness conference in Tucson, run by the University of Arizona Center for Consciousness Studies.

This is easy to work out. Here's the TL;DR:

  1. Driving is a social experience, and doing it properly requires social intelligence. We use our social awareness all the time when we're driving.
  2. Therefore self-driving cars will have to have AI that can understand social context and work out the intentions and motivations of other drivers.
  3. That will require deep neural networks. But…
  4. Neural networks can hallucinate and see things that aren't really there.
  5. Human brains are wired to analyse social situations; as a result we have a cognitive hallucination that everything that happens is caused by some agent with intentions and motives acting to get a result. Which is why people believe in gods.
  6. When self-driving cars have neural networks to handle the social intelligence required, they'll have similar cognitive hallucinations.
  7. Therefore self-driving cars will believe in gods.

Expanding each point in turn

Driving is a social experience

Because analysing social situations is as natural to us as breathing, we don't realise we're doing it. When we share the road with other drivers we're constantly assessing their motives: person X plans to take a left turn; person Y is clearly an idiot, so hold back and don't get caught in the accident they're about to cause. And so on and so on.

That kind of thinking is as much a part of driving as the basic physics of it: calculating velocities, accelerations and momentum automatically in our heads. Understanding the social situation is essential, especially when we're confronted with situations where the normal rules of the road don't apply.

For example, in our town there are lots of streets that were built before people had cars. Nowadays, because those streets have parked cars on either side, you can't get two cars driving down them in opposite directions. So when you arrive at the start of a section like that, you mentally check whether anyone is coming the other way. If there is, who goes first is decided by flashing lights: if a driver flashes their lights it means 'OK, you go first'. It's polite. But can a self-driving car deal with social rules that aren't written down in any highway code, rules that are simply obvious to most (though not all) human beings?

At the moment self-driving cars can't, which is why…

Self-driving cars will have to have social AI

If all cars were automated, it would be easy for them to talk to each other by radio, broadcasting their intentions and working out who goes first in situations like the one above. In fact drivers in India already have something approaching a radio system: endless beeping is how they inform each other where they are and what they're planning to do. It doesn't work very well, and most cars in India have the dents to prove it. Eventually the equivalent of beeping will be done by radio, with vehicle computers talking to each other and obeying the rules of the road. Roads will be a lot safer then, but we're not even close to that yet.

For the foreseeable future, autonomous vehicles have to negotiate an environment filled with complex situations where a human driver would instantly know what to do based on social sense, and where the controller of the other vehicle is a human brain, which doesn't broadcast radio signals. Add in the complexities of culture and time of day, and autonomous vehicles are out of their depth. So some element of social awareness has to be built into the software.
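To make the radio idea concrete, here's a purely illustrative Python sketch of what an intent broadcast and a who-goes-first rule might look like. Every name and field in it is made up for this example; it doesn't follow any real vehicle-to-vehicle standard.

```python
# Hypothetical sketch only: all field names and the tie-break rule are
# invented for illustration, not taken from any real V2V protocol.
from dataclasses import dataclass

@dataclass
class IntentMessage:
    vehicle_id: str
    distance_to_gap_m: float   # how far the car is from the narrow section
    action: str                # e.g. "enter_narrow_section"

def who_goes_first(a: IntentMessage, b: IntentMessage) -> str:
    # One possible rule of the road: the car closer to the gap enters
    # first and the other yields. This is the radio equivalent of
    # politely flashing your lights.
    return a.vehicle_id if a.distance_to_gap_m <= b.distance_to_gap_m else b.vehicle_id

north = IntentMessage("car-A", 4.0, "enter_narrow_section")
south = IntentMessage("car-B", 12.5, "enter_narrow_section")
print(who_goes_first(north, south))  # car-A, since it's nearer the gap
```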

That will require deep neural networks

The present surge of development in artificial intelligence has happened because a technique for creating "neural networks" in software now has access to the computing power required to make it work. Neural nets use a process loosely modelled on the way the brain works: networks of simple message-passing units that pass information along to the next layer in the net, according to learned patterns. See the Wikipedia articles on deep learning and deep belief networks. The key point is that they are 'deep' because they have lots of layers stacked on top of one another.

In image recognition the lower layers of the network learn to recognise edges, corners and other basic features of an image. That information gets passed on to higher layers, which respond to more complete features: not just edges but whole boxes. Those in turn feed still higher layers, which recognise progressively more abstract and more complete features of the image. By using enormous training datasets on neural nets with many layers, often more than 30, and lots of computer time, it's possible to get AI to recognise images with remarkable accuracy.
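For the curious, here's a minimal sketch of that layer stacking, written in Python with PyTorch. Everything in it (the layer sizes, the ten output classes, the name toy_net) is invented for illustration; real image-recognition networks are far deeper, but the principle of early layers feeding progressively higher ones is the same.

```python
# A toy deep network: each block feeds its output to the next, so later
# layers build on the simpler features found by earlier ones.
import torch
import torch.nn as nn

toy_net = nn.Sequential(
    # Lower layers: respond to local patterns such as edges and corners.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Middle layers: combine edges into larger parts (boxes, eyes, wheels).
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Higher layers: respond to whole objects built from those parts.
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                       # scores for ten made-up classes
)

scores = toy_net(torch.randn(1, 3, 64, 64))  # one fake 64x64 RGB image
print(scores.shape)                          # torch.Size([1, 10])
```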

This image illustrates the process: the first few layers identify patterns of local contrast; higher layers pick out facial features, which feed into still higher layers that identify people.

If you've got Google Photos, see how it can identify people in all the images you've stored. I've found that Google can identify people in a crowd even when I can't. My friends tell me they're in the crowd; I can't see them, but Google knows they're in the picture. That is the power of deep neural nets applied to image recognition.

AI which makes the leap from simple image recognition to the higher levels of interpretation required to understand social situations will have to use neural networks. But there's a problem…

We don't know what's going on inside the neural net. It's a magic box: we show it thousands of images, each tagged with what it contains (fox, basset hound, Tower of London, etc.), and over time it learns the features that allow it to identify them. But we don't know what's going on inside the magic box. So…
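As a rough sketch of what that 'show it thousands of tagged images' process looks like in code, here's a minimal training loop for the toy_net defined in the earlier sketch. The images and labels are random stand-ins for a real tagged dataset; real training runs over millions of photos.

```python
# Minimal training-loop sketch: nudge the weights so the network's
# guesses move toward the human-supplied tags. Data here is random.
import torch
import torch.nn as nn

optimiser = torch.optim.SGD(toy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                  # real training: many passes over real data
    images = torch.randn(8, 3, 64, 64)   # a batch of stand-in 'photos'
    labels = torch.randint(0, 10, (8,))  # stand-in tags: fox, basset hound, ...
    optimiser.zero_grad()
    loss = loss_fn(toy_net(images), labels)  # how wrong were the guesses?
    loss.backward()                          # work out how to adjust each weight
    optimiser.step()                         # adjust them a little
```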

Neural networks can hallucinate and see things that aren’t really there.

To find out what's going on inside an image-recognition neural network, researchers at Google decided to run one in reverse (read the story in Wired, "How Google made its computers go crazy"). Instead of presenting an image and asking 'what's this?', they got the network to create images it would recognise as, say, a basset hound or a fox. This did something very remarkable, because it allowed the researchers to peer inside the magic box and see what it was in the images that the neural net was using to identify things. See the Wikipedia article on Deep Dream and the blog post from the original research team.

They found some remarkable things. When asked to create an image of a dumbbell, the network created images of dumbbells, but always with an arm attached.

From https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

Because the network had only been trained on images of dumbbells being lifted by arms, it naturally assumed that the arm was part of the dumbbell.

Rather more intriguing was that they could ask the network, in effect, "whatever it is you're seeing, create an image that has more of it". This conjured up a vast array of images from what the network had learned; effectively it was dreaming, and since it was a deep neural network the name 'Deep Dream' was coined.
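In code, the 'more of it' trick amounts to gradient ascent on the image rather than on the weights: hold a pretrained network fixed and repeatedly adjust the pixels so a chosen layer's activations grow stronger. Here's a hedged sketch using torchvision's pretrained GoogLeNet (the Inception architecture the Deep Dream team worked with); the layer choice, step size and iteration count are my own illustrative guesses, and the real Deep Dream code adds refinements such as jitter and multi-scale processing.

```python
# Sketch of activation maximisation, the core idea behind Deep Dream.
# The layer (inception4c), step size and loop count are illustrative.
import torch
import torchvision.models as models

net = models.googlenet(weights="DEFAULT").eval()  # downloads pretrained weights
for p in net.parameters():
    p.requires_grad_(False)                       # freeze the network itself

activations = {}
net.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(mid=output)
)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
for _ in range(50):
    net(image)
    loss = activations["mid"].norm()  # "whatever you see here, see more of it"
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```

Swap the activation norm for a single class score and you get the 'draw me a basset hound' variant described above; either way the network can only paint with the features it has learned, which is exactly why the arms came along with the dumbbells.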

Neural networks can experience pareidolia, the technical term for seeing things that aren't really there, like the Virgin Mary in a slice of toast or a damp patch on a wall. In this case, since the training set included a lot of images of dogs, the network tended to see dogs everywhere, as in this picture of a knight on horseback.

From https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

Which is what we human beings do all the time. We have a strong tendency to see faces and people in clouds, damp patches on walls and slices of toast. That's because our brains are wired to look for people, and also to interpret events as being caused by people, even when those people aren't visible or don't have human form.

Human belief in gods is a cognitive pareidolia

We naturally think in terms of agents with motives acting to achieve objectives. Our default mode of explaining why something happened is that someone made it happen. If it rains, it must be because the rain god made it happen. That mountain over there was created in a battle between god A and god B. And so on and so on.

Psychologists studying why people believe in gods (see this article in New Scientist, "The God issue: We are all born believers") have come to the conclusion that it's because we're hard-wired to think in social terms. Key quote from the article:

Because of our highly social nature we pay special attention to agents. We are strongly attracted to explanations of events in terms of agent action — particularly events that are not readily explained in terms of ordinary causation.

Therefore…

When self-driving cars gain the ability to think in the social terms that driving requires, they'll be prone to the same cognitive pareidolia that we are. They'll start believing in invisible agents responsible for events. They'll believe in gods.

And then…

I presented this (only as a poster presentation) at the Towards a Science of Consciousness 2016 conference. People loved it and said I should pursue the idea. As I'd only done it for fun, I decided I needed to learn more about neural networks, so I did an online course with Coursera, which was interesting and fun; you should try it yourself. And then…

I was working at BBC Worldwide at the time, and an email came around announcing a 'creativity week' where people could show off their outside-work creativity: paintings, photos, art of any kind. So I decided to take the Google network used for Deep Dream, retrain it on Dr Who images and then get it to dream about Dr Who. It didn't work out as intended, but the neural net did make some shockingly human errors, which are the subject of the next article.
