How to create a mind: The secret of human thought revealed

Milo Spencer-Harper
Published in Deep Learning 101 · Jul 6, 2015

In my quest to learn about AI, I read ‘How to Create a Mind: The Secret of Human Thought Revealed’ by Ray Kurzweil. It was incredibly exciting and I’m going to share what I’ve learned.

If I were to summarise the book in one sentence, I could do no better than Kurzweil’s own words:

“The question is whether or not we can find an algorithm that could turn a computer into an entity that is equivalent to a human brain.” — p181

Kurzweil argues convincingly that it is both possible and desirable. He goes on to suggest that the algorithm may be simpler than we would expect, and that it will be based on the Pattern Recognition Theory of Mind (PRTM).

The human brain is the most incredible thing in the known universe. A three-pound object, it can discover relativity, imagine the universe, create music, build the Taj Mahal and write a book about the brain.

However, it also has limitations, and these give us clues as to how it works. Recite the alphabet. OK. Good. Now recite it backwards. The former was easy, the latter nearly impossible. Yet a computer finds it trivial to reverse a list. This tells us that the human brain can only retrieve information sequentially. Studies have also revealed that when thinking about something, we can only hold around four high-level concepts in our heads at a time. That’s why we use tools, such as pen and paper, to help us think when solving a maths problem.
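To make the contrast concrete, here is a trivial Python snippet (my own illustration, not something from the book) showing how cheap the backwards recital is for a machine:

```python
alphabet = "abcdefghijklmnopqrstuvwxyz"

# Forwards and backwards are equally effortless for a computer.
print(alphabet)        # abcdefghijklmnopqrstuvwxyz
print(alphabet[::-1])  # zyxwvutsrqponmlkjihgfedcba
```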

So how does the human brain work? Mammals actually have two brains: the old reptilian brain, called the amygdala, and the conscious part, called the neocortex. The amygdala is pre-programmed through evolution to seek pleasure and avoid pain. We call this instinct. But what distinguishes mammals from other animals is that we have also evolved a neocortex. Our neocortex rationalises the world around us and makes predictions. It allows us to learn. The two brains are tightly bound and work together. However, when reading the book, I wondered whether these two brains might also be in conflict. It would explain why the idea of internal struggle is present throughout literature and religion: good vs. evil, social conformity vs. hedonism.

What’s slightly more alarming is that we may have more minds than that. Our brain is divided into two hemispheres, left and right. Studies of split-brain patients, in whom the connection between the hemispheres has been severed, show that these patients are not necessarily aware that the other mind exists. If one mind moves the right hand, the other mind will post-rationalise this decision by creating a false memory (a process known as confabulation). This has implications for us all. We may not have the free will we perceive ourselves to have. The conscious part of our brain may simply be creating explanations for what the unconscious parts have already done.

So how does the neocortex work? We know that it consists of around 30 billion cells, which we call neurons. These neurons are connected together and transmit information using electrical impulses. If the sum of the electrical pulses arriving at a neuron’s inputs exceeds a certain threshold, that neuron fires, causing the next neuron in the chain to fire, and so on. We call these cascades of firing thoughts. At first, scientists thought this neural network was such a complicated and tangled web that it would be impossible to ever understand.
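To make the firing rule concrete, here is a tiny sketch of such a threshold neuron in Python. It is my own simplification of the idea, not code from the book, and real neurons are far messier:

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Highly simplified neuron: fire (return 1) if the weighted sum of
    incoming pulses exceeds the threshold, otherwise stay silent (return 0)."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total > threshold else 0

# Three incoming connections with different strengths.
print(neuron_fires(inputs=[1, 0, 1], weights=[0.6, 0.9, 0.7]))  # 1 -> fires (0.6 + 0.7 > 1.0)
print(neuron_fires(inputs=[1, 0, 0], weights=[0.6, 0.9, 0.7]))  # 0 -> silent (0.6 < 1.0)
```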

However, Kurzweil uses the example of Einstein’s famous equation E = mc² to demonstrate that the solutions to complex problems are sometimes surprisingly simple. There are many examples in science, from Newtonian mechanics to thermodynamics, which show that moving up a level of abstraction dramatically simplifies the modelling of complex systems.

Recent innovations in brain imaging techniques have revealed that the neocortex contains modules, each consisting of around 100 neurons, repeating over and over again. There are around 300 million of these modules arranged in a grid. So if we could discover the equations that model one of these modules, run it on a computer 300 million times and expose it to sensory input, we could create an intelligent being. But what do these modules do?

Kurzweil, who has spent decades researching AI, proposes that these modules are pattern recognisers. As you read this page, one pattern recogniser might be responsible for detecting a horizontal stroke. That module links upward to a module responsible for the letter ‘A’, and if the other relevant stroke modules light up, the ‘A’ module also lights up. The modules ‘A’, ‘p’, ‘p’ and ‘l’ link to the ‘Apple’ module, which in turn is linked to higher-level pattern recognisers, such as thoughts about apples. You don’t actually need to see the ‘e’, because the ‘Apple’ pattern recogniser fires downward, telling the one responsible for the letter ‘e’ that there is a high probability of seeing one. Conversely, inhibitory signals suppress pattern recognisers from firing if a higher-level pattern recogniser has detected that such an event is unlikely, given the context. We literally see what we expect to see. Kurzweil calls this the Pattern Recognition Theory of Mind (PRTM). Although it is hard for us to imagine, all of our thoughts and decisions can be explained by huge numbers of these pattern recognisers hooked together.
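To get a feel for the idea, here is a toy sketch in Python. It is my own simplification, not Kurzweil’s design, and it leaves out the downward predictions and inhibitory signals: each recogniser simply fires once enough of its lower-level inputs have fired, so ‘Apple’ can fire even when the ‘e’ has not been seen.

```python
class PatternRecogniser:
    """Toy recogniser: fires when enough of its lower-level inputs have fired."""

    def __init__(self, name, inputs, required):
        self.name = name          # e.g. 'A' or 'Apple'
        self.inputs = inputs      # lower-level recognisers feeding into this one
        self.required = required  # how many inputs must fire before this one does

    def fires(self):
        active = sum(1 for child in self.inputs if child.fires())
        return active >= self.required


class Stroke(PatternRecogniser):
    """Bottom-level recogniser wired directly to the page."""

    def __init__(self, name, seen):
        super().__init__(name, inputs=[], required=0)
        self.seen = seen

    def fires(self):
        return self.seen


# Four of the five letters are on the page; the 'e' is missing.
letters = [Stroke(c, seen=True) for c in "Appl"] + [Stroke("e", seen=False)]
apple = PatternRecogniser("Apple", inputs=letters, required=4)
print(apple.fires())  # True: the word is recognised despite the missing 'e'
```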

We organise these thoughts to explain the world in a hierarchical fashion and use words to give meaning to these modules. The world is naturally hierarchical and the brain mirrors this. Leaves are on trees, trees make up a forest, and a forest covers a mountain. Language is closely related to our thoughts, because language evolved directly from, and mirrors, our brain. This helps to explain why different languages follow remarkably similar structures. It explains why we think in our native language. We use language not only to express ideas to others, but to express ideas within our own minds.
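That sort of hierarchy maps naturally onto nested data structures. A trivial Python rendering of the example above, purely for illustration:

```python
# Each concept is composed of the concepts below it.
mountain = {"forest": {"tree": {"leaf": {}}}}

def levels(node):
    """Count how many levels of abstraction the hierarchy contains."""
    return 1 + max((levels(child) for child in node.values()), default=0)

print(levels(mountain))  # 4: mountain -> forest -> tree -> leaf
```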

What’s interesting is that when AI researchers have worked independently of neuroscientists, their most successful methods have turned out to be equivalent to the ones the human brain uses. Thus, the human brain offers us clues for how to create an intelligent nonbiological entity.

If we can work out the algorithm for a single pattern recogniser, we can repeat it on a computer, creating a neural network. Kurzweil argues that these neural networks could become conscious, like a human mind. Free from biological constraints and benefiting from the exponential growth in computing power, these entities could create even smarter entities and surpass us in intelligence (a prediction known as the technological singularity). I’ll discuss the ethical and social considerations in a future blog post, but for now let’s assume it is desirable.

The question then becomes: what is the algorithm for a single pattern recogniser? Kurzweil recommends a mathematical technique called hierarchical hidden Markov models, named after the Russian mathematician Andrey Markov (1856–1922). However, the technique is too technical to be explained in depth in Kurzweil’s book.
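The book doesn’t go into the maths, and I won’t either, but to give a flavour: a hidden Markov model assumes a hidden state that changes over time according to transition probabilities, and which we only observe indirectly through emission probabilities. Here is a minimal, non-hierarchical sketch in Python with invented numbers, using the forward algorithm to compute how likely an observation sequence is:

```python
# Minimal hidden Markov model (not hierarchical) with invented numbers.
# Hidden states: 'vowel' and 'consonant'; observed letters: 'a' and 'b'.
states = ["vowel", "consonant"]
start = {"vowel": 0.5, "consonant": 0.5}                 # where the sequence starts
trans = {"vowel":     {"vowel": 0.3, "consonant": 0.7},  # state-to-state transitions
         "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit = {"vowel":     {"a": 0.9, "b": 0.1},               # what each state tends to emit
        "consonant": {"a": 0.2, "b": 0.8}}

def forward(observations):
    """Probability of the observation sequence, summed over all hidden paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["a", "b", "a"]))  # likelihood of observing 'a', 'b', 'a'
```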

So my next two goals are:

(1) To learn as much as I can about hierarchical hidden Markov models.

(2) To build a simple neural network from scratch in Python which can be trained to complete a simple task (see the sketch below for a flavour of it).
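To give a rough idea of what goal (2) involves, here is a minimal sketch of a single artificial neuron trained with gradient descent on a toy task where the output simply copies the first input. This is my own illustrative example, not the code from the next post:

```python
import numpy as np

# Toy training data: the output simply copies the first input column.
inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
outputs = np.array([[0, 1, 1, 0]]).T

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1          # start with random weights

for _ in range(10000):
    prediction = sigmoid(inputs @ weights)          # forward pass
    error = outputs - prediction                    # how wrong we are
    # Nudge each weight in the direction that reduces the error.
    weights += inputs.T @ (error * prediction * (1 - prediction))

# A new situation, [1, 0, 0]: the first input is 1, so we expect something close to 1.
print(sigmoid(np.array([1, 0, 0]) @ weights))
```

The weights start out random; after ten thousand tiny corrections the neuron has worked out, on its own, that only the first input matters.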

In my next blog post, I learn how to build a neural network in 9 lines of Python code.


