The walking, talking what-if machine

Toby Simpson
Feb 5, 2019 · 8 min read


Two little neurones, sitting in a tree, imagining their many potential futures

You are a walking, talking what-if machine. Your brain is constantly playing out alternative future realities and imagining whether they would be beneficial or not. What if I jumped off this bridge? What if I threw my glass at a wall? What if I turned left instead of right? What if I said yes, instead of no? What if I said what I was really thinking? You’re doing it right now. It is continuous, and this rolling out of what might be runs in parallel with what you actually choose to do. Incredibly, you can picture these futures and you can actually feel whether they will be positive or not. This allows you to evaluate many potential realities and pick one that makes sense. Then, on the basis of a positive future that you like the look of, your brain works to adapt reality to match it: you do what needs to be done in order to end up in that rewarding position you imagined. On balance, it usually works¹, even if you need to constantly revise the steps and the goal along the way. It is a uniquely biological trait and is something computers simply cannot do. Yet.
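The bare mechanics of that loop (imagine, evaluate, act) are easy enough to sketch in code; the deliberately silly toy below is purely illustrative, with every name and number invented. It rolls out candidate futures in a one-dimensional world using its own crude model and acts on the one that feels best. Everything that makes your version remarkable is missing from it, which is rather the point.

```python
import random

# A toy "what-if machine" (entirely hypothetical): an agent in a 1-D world
# imagines a handful of possible futures with its own crude world model,
# feels out how rewarding each imagined ending would be, and only then acts.
GOAL = 7                      # an imagined, rewarding position to end up in
HORIZON = 5                   # how many steps ahead to imagine
ACTIONS = [-1, 0, +1]         # step left, stay put, step right

def world_model(position, action):
    """The agent's internal guess at how the world responds to an action."""
    return position + action

def imagined_reward(position):
    """How good an imagined future feels: the closer to the goal, the better."""
    return -abs(GOAL - position)

def imagine(position):
    """Roll out one candidate future entirely in imagination."""
    plan = [random.choice(ACTIONS) for _ in range(HORIZON)]
    for action in plan:
        position = world_model(position, action)
    return plan, imagined_reward(position)

def choose_action(position, n_futures=50):
    """Play out many what-ifs and act on the first step of the best one."""
    best_plan, _ = max((imagine(position) for _ in range(n_futures)),
                       key=lambda outcome: outcome[1])
    return best_plan[0]

position = 0
for _ in range(10):
    position = world_model(position, choose_action(position))
print(position)               # usually at or very near the imagined goal
```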

Then there’s semantics, generalisation and the way the brain stores and recalls things. You see one thing and then you’re primed to see similar things, or at least those in context. Or there’s that moment of déjà vu where you remember one airport-related dream segment and suddenly you can stroll around your mind and find all the other dreams that involved airports, even if they are wildly disconnected in time and subject. Blue lights on the motorway and you conjure up all sorts of potential horror stories about someone else’s bad day (or, admit it, that feeling you’re about to be delayed). You expect to see doctors, nurses and medical equipment at a hospital, but you don’t look for them or expect them in a forest. Context is everything, and armed with it, your what-if machine cracks on, looking at the many paths ahead. Evolution has given you a brain that spots things that are out of place and at the same time brings everything relevant to the surface from the smallest trigger. The ability to have a concept of “chairness” that allows you to plant your backside on things that may or may not qualify as chairs (but work as chairs) is a skill that is second nature to you. But get a machine to do that. Go on, I dare you. And when you give up, look up YouTube videos of machines trying. Honestly, it’s hilarious.

Artificial Intelligence has been particularly big in the last year or three. So big that you’d be forgiven for believing that we are a hair’s width away from Terminator-grade AIs tearing human beings to pieces as they establish that they are the rightful heirs to the earth. Inevitably, they conclude that us meat-bags deserve to be used as Matrix-grade batteries, compost or playthings for a new generation of rich robots whose only exposure to biology is via zoos where we roam enclosures to entertain them. The reality is very different on — at least — two fronts: 1) we’re light years away from true machine intelligence and 2) no, they won’t destroy us, they’ll complement us. Putting aside “2”, as that’s mostly down to humans being super insecure and believing everyone’s out to get us², let’s focus on “1”. Everyone in the space of artificial intelligence has known for a long, long time that the gap between where we are and human-level intelligence is somewhat vaster than one might be forgiven for believing… they just don’t go to any particular effort to highlight it as it could, well, turn down the money taps.

Now, cards on the table here. Being so far from human-level intelligence doesn’t mean that AI (and particularly the field of machine learning) has not enhanced our lives, because it has. It is revolutionising the way we interact with machines, enabling automation that we couldn’t have dreamed of just a few years ago, and making great strides into areas like medical research, drug discovery, disease prevention and treatment that are hard to ignore. But we should separate the component parts in this field that deliver real things from the dream of what researchers call “artificial general intelligence” (AGI): digital what-if machines that can, amongst other things, generalise their knowledge to adapt to new situations, imagine the rewards of future options to make wiser decisions now, and have a sense of “self” that would allow us to treat them as some form of intellectual equal.

Luminaries in AI such as Demis Hassabis and Geoff Hinton already know that the component technologies in the area cannot deliver AGI individually. They are also aware of their limitations. Back-propagation, for example, is pretty rubbish from an intelligence perspective. Geoff pointed out back in 2017 that neural networks won’t get smart on their own using such techniques and, as he put it, “I don’t think it’s how the brain works, we clearly don’t need all the labeled data.” Indeed. But it took too many people far too long to get to that position. Those at the very forefront of today’s research know that the brain isn’t just one thing — it’s lots of overlapping things, evolved over many tens of millions of years, that serve wildly different purposes. Bits of it, under observation, show reinforcement learning, transfer learning, pattern recognition, planning, generalised concepts, semantic storage and recall, spontaneous data sorting and so much more. To stick to current deep learning techniques and their artificial neural networks on the basis that “we’ll get there, folks” is naive, and no serious research into general intelligence is doing so.
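To make “all the labeled data” concrete, here is a hedged toy: a single-layer stand-in trained by plain gradient descent rather than full back-propagation through a deep network, with every number in it invented. It shows the shape of the dependency, though: the model only improves because each example arrives with the right answer already attached.

```python
import numpy as np

# A minimal, hypothetical stand-in for what gradient-based training needs:
# every single example comes paired with the "right answer". The toy model
# below (a single-layer classifier, trained by plain gradient descent) only
# improves because the labels y are handed to it; a brain gets no such gift.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # 100 examples, 3 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # the labels we must supply

w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # forward pass: predicted probabilities
    grad = X.T @ (p - y) / len(y)              # gradient of the loss w.r.t. the weights
    w -= 0.5 * grad                            # the update the gradients make possible

p = 1.0 / (1.0 + np.exp(-(X @ w)))
print(f"accuracy on the labelled data: {((p > 0.5) == y).mean():.0%}")
```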

We are seeing the limitations of current approaches right now. You, for example, can solve something called “the Cocktail Party Problem”, and you can feel rightly proud of this. You can isolate a single person’s voice in a sea of music and chatter and piece together what is being said even if you miss the odd word or two. You do this from context, from an understanding of what is being said, coupled with an awesome ability to separate the wheat from the chaff. You can imagine several versions of what you heard and weight their likelihood according to how you’d feel if you heard each alternative. Is the context right? Is this something this person would usually say? Are there other circumstances, such as alcohol or strong emotions, to be factored in? Try asking Siri, Alexa or Google to play you something when you’re at a noisy dinner party. They, like all personal assistants, are dumb. We call them intelligent assistants but we should not: they use the latest and greatest technology to attempt to understand what they hear, cut away what is not relevant, and then, having turned it into words, break the resulting sentence down and attempt to act on it. There’s a lot of very clever technology at play here, particularly applying machine learning to audio processing. But audio processing aside, let’s dwell briefly on deep learning techniques being used for pattern recognition. Anyone and everyone who has had the misfortune to hear me speak in the last year will probably have seen my favourite slide, and it’s this one:

Blurring out the face of that cow to protect its privacy is generous, but clearly a mistake, and one that no human being would ever make no matter how much alcohol had been consumed. (Image credit: Google)

Using trained neural networks to recognise faces, animals, fruit, cancer or suspicious activity is effective — incredibly so — and our lives are already being vastly (and largely invisibly) improved by their relentless work. In many areas they already significantly outperform humans, and the number of areas where this is the case grows every week thanks to ongoing research: machine learning is, and will continue to be, a gift that keeps on giving. However, the neural networks of today do not comprehend what they are doing, so in some areas they make mistakes that we, as humans, would never make. A few pixels here, a few there, and that dog is recognised as a giraffe. Artificial neural networks, deep or otherwise, are pretty hopeless when it comes to this stuff: they fail the “chairness”, or in this case the “dogness”, test that we instinctively pass by comprehending what we are seeing. Our ability to improve the results of these networks to make them useful took huge leaps when we had access to extraordinary amounts of training data and the computing power necessary to “run the numbers”. But there’s a limit to what we’ll pull out of the hat with such approaches. Whilst we’ll edge ever closer to perfection, the lack of understanding and comprehension of what’s being seen will prevent it from ever being reached.
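That “few pixels here, a few there” failure is what researchers call an adversarial example. The hedged toy below attacks a made-up linear classifier rather than a real image model, so the numbers mean nothing in themselves, but the mechanics are similar in spirit: a small, structured nudge to every pixel is enough to flip “dog” into “giraffe”, and against real trained networks the perturbation can be far too small for a human to notice.

```python
import numpy as np

# Toy illustration of an adversarial nudge (FGSM-style) against a made-up
# linear two-class "classifier". Everything here is invented for the sketch;
# real attacks target trained deep networks, but the mechanics are similar.
rng = np.random.default_rng(0)
n_pixels = 64 * 64
W = rng.normal(size=(2, n_pixels))             # rows: scores for [dog, giraffe]
x = rng.normal(size=n_pixels) + 0.1 * np.sign(W[0] - W[1])   # a "dog" image

def predict(img):
    return ["dog", "giraffe"][int(np.argmax(W @ img))]

print(predict(x))                              # -> dog

# Nudge each pixel a small step in the direction that favours "giraffe".
grad = W[1] - W[0]                             # gradient of (giraffe - dog) score
margin = (W[0] - W[1]) @ x                     # how far we are from the boundary
eps = 1.1 * margin / np.abs(grad).sum()        # smallest step that crosses it

x_adv = x + eps * np.sign(grad)
print(predict(x_adv))                          # -> giraffe
print(f"change per pixel: {eps:.3f}")          # a small fraction of the pixel range
```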

Having said all this, I’d humbly suggest that 2019 will be the year when an increasing number of people are prepared to admit the limitations of current techniques without fear that they’ll push their funding off a cliff by accident. This is a huge positive step, as it means that we’ll start to see increasing numbers of fascinating new approaches being talked about in public, such as agents actively modifying their world to match a desired, imagined world-state that represents a rewarding position. This show-and-tell of stuff that, if we’re honest, last had a fair airing in the mid-90s is a good thing. Amongst other benefits, it will bring investment and minds into the next growth areas of digital intelligence and will support ideas and approaches yet to be discovered. As a bonus, it mitigates the risk of endlessly flogging dead horses on the off-chance of squeezing out that extra fraction of a percent of accuracy at a cost that increasingly exceeds the benefit.

The human mind is amazing. But it is a machine, albeit a biological one. Digital computers are universal machines. As such, they can model any other machine, and that includes biological ones: there is no reason why one cannot model the other to the point of replication. We just need the right approach, and many of the ones that have worked effectively in the past three decades are running out of steam.

“If you take a cat apart to see how it works, the first thing you have on your hands is a non-working cat.”
– The greatly missed Douglas Adams

The next generation of semantic what-if machines, more closely related to the only working example of true intelligence that we have, is on its way.

And in 2019, you’ll learn more.

-

[1]: But clearly not always: how many times have you spent imaginary lottery winnings in your mind to justify entering?

[2]: For the avoidance of doubt, caution is a good thing, but it does feel a touch like pouring money into the moral issues of teleporters on the basis of having seen an episode of Star Trek.
