The Equivalence Question

Dev Chakraborty
Published in Ideas and Words
May 27, 2018


As I was growing up, I watched countless movies and TV shows that introduced robots who live among us.

They’d walk and talk. They’d hate and love.

They were, in essence, humans trapped in metallic bodies. This was happening as early as the first Star Wars movie, though there are older examples still. C-3PO is the archetype: he’s far closer to being a real person with conscious thoughts and feelings than he is to being a household appliance.

The idea of computers behaving like humans has been ingrained in our culture for a while now. This isn’t surprising if we consider the fact that we’ve always tended to assign human qualities to animals, plants, inanimate objects, and even abstract concepts like nature. This kind of personification is ubiquitous in entertainment, showing up in stories as ancient as the Odyssey, all the way to modern-day Pixar films like Finding Nemo. In this context, computers are just another kind of thing we added arms and legs to in order to tell funny and interesting stories.

In the decades since then, the frame through which we understand computers has shifted. 1977, the year the first Star Wars was released, was also the year the Apple II arrived as one of the first highly successful personal computers. It took a few more decades for people to decide that computers belonged at home as well as at work. Still, even as late as the 1990s and 2000s, human-like computer behaviour remained a silly idea: a product of science fiction, not engineering.

Computers were still essentially glorified calculators.

Fast-forward to 2018 and things are alarmingly different. Computers have become inextricably embedded in human life, taking on pocket-sized and wearable form factors and steadily becoming a more intimate part of our daily routines. Moreover, they have begun to trespass on tasks that have traditionally demanded a human touch. Law, medicine, and finance are fields where, in many cases, access to data and advances in learning algorithms have allowed computing to surpass and supplant human decision-making. Sophisticated speech recognition and generation techniques are allowing computers to carry out some of our conversations for us as well.

Despite all this progress, it would be reasonable to think this is as far as we can get. It seems like at some point our progress should stall because computers are not, in fact, humans.

As a result, there is a massive canyon, a seemingly irreconcilable difference, between today’s machine learning tools, which aspire to be rational decision makers that learn from their mistakes, and humanity, whose behaviour is dictated by the chaotic harmony of many biological processes. While we now accept computers as assistants capable of highly sophisticated tasks, it’s still ludicrous to say they’re truly thinking.

I’ve been fascinated by the human-computer relationship for a while now. A particular question I’ve been interested in concerns the possible equivalence of the two — the bridging of this aforementioned canyon.

The question: is it possible for a computer to be structurally equivalent to a human mind?

This entails much more than just having computers mimic human behaviour, which is already possible in several (albeit limited) cases. This is about whether computers, as we traditionally understand them — microprocessors with storage and input/output devices attached — can be made to simulate the computation the human brain performs biologically when it processes information.

With TV shows like HBO’s Westworld, popular culture has already begun to wonder about some of the implications of this question. In Westworld, robots have become behaviourally indistinguishable from humans, leading to difficult philosophical and ethical questions. A positive answer to the equivalence question would likely be a precursor to the world that show depicts.

This question is most closely aligned with the field of computational neuroscience, which is strikingly distant from the more popular field of machine learning: its concern is understanding the behaviour of animal nervous systems in a mathematical way. It’s this sort of formal modelling that could, in theory, pave the way for complete computer simulations of human neuroanatomy someday. Some partial success in brain simulation has already been reported for smaller mammals like mice, but it may be a long time before the same can be said for the human brain, given the unknowns about its wiring and the logistical difficulties of conducting neuroscience research.

Even though we’re ridiculously far away from achieving such a simulation, it’s fun to think about how it might work.

Most of the research I’ve seen focuses on adapting some model of neuronal structure, chemical conditions, and electrical conditions into software that runs on your everyday Intel chips. On the other hand, it’s possible that a human brain simulation will arise via some other, more exotic method. I think some form of this will be necessary if the tech is to be made compact: by some estimates, the state of a human brain would take on the order of a petabyte (a million gigabytes) to store, and CPUs aren’t getting much faster, so we’d need a full warehouse of machines unless we found a better way.
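To give a flavour of what that kind of modelling looks like in software, here’s a minimal sketch of one of the simplest models used in computational neuroscience: a single leaky integrate-and-fire neuron. The constants, the input current, and the noise term are illustrative assumptions of mine rather than values from any particular study, and a real brain simulation would wire up billions of these units rather than one.

```python
# A minimal leaky integrate-and-fire neuron, sketched in plain Python.
# All constants below are assumed, roughly-realistic values for illustration.
import random

V_REST = -65.0       # resting membrane potential (mV)
V_THRESHOLD = -50.0  # potential at which the neuron fires (mV)
V_RESET = -70.0      # potential immediately after firing (mV)
TAU = 20.0           # membrane time constant (ms)
DT = 1.0             # simulation time step (ms)

def simulate(input_current, steps=200):
    """Integrate the membrane potential over time and record spike times."""
    v = V_REST
    spikes = []
    for t in range(steps):
        # Leaky integration: the potential decays toward rest while the
        # input current (plus a little noise, standing in for biological
        # stochasticity) pushes it toward the firing threshold.
        noise = random.gauss(0.0, 0.5)
        v += (-(v - V_REST) + input_current + noise) / TAU * DT
        if v >= V_THRESHOLD:
            spikes.append(t)   # the neuron fires...
            v = V_RESET        # ...and resets
    return spikes

print("spike times (ms):", simulate(input_current=20.0))
```

Even this toy version hints at the scaling problem above: the interesting behaviour comes from updating every neuron’s state at every time step, which is exactly what gets expensive when there are tens of billions of them.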

The use of quantum computing could be one solution to this problem. Another is DNA computing (or some other biological basis for computing). Each has severe limitations preventing its widespread use today, but the common themes that make them possibly useful for brain-like computation are parallelism (doing many things at the same time) and stochasticity (doing things randomly). These are traits that a single conventional CPU, working through one instruction at a time, struggles to exhibit on its own.

To put that last paragraph more simply, suppose a human and a CPU are both asked to look at a picture of an animal and decide whether or not it’s an elephant. The human looks at the whole picture at once and can decide fairly quickly, though they might not say the exact same thing each time they’re asked about that picture. The CPU, however, needs to read the picture one pixel at a time, then run millions or even billions of simple operations one at a time, before it can decide, but it will say the exact same thing each time it’s asked. This distinction exists because the brain’s matter is fundamentally built in a way that allows multitasking, whereas an individual CPU isn’t.

This is oversimplifying a little bit because CPUs can be used together in parallel and randomized ways, but the point is that a given cubic millimetre of CPU is probably doing one specific thing, whereas a cubic millimetre of brain is probably doing many random things. Hence, swapping our CPU for something that distributes the workload more evenly and randomly like a brain does might be the key to better brain simulations.
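For concreteness, here’s a toy sketch of that contrast. Everything in it is an assumption made for illustration: the fake “image” is just random numbers, the threshold and voting scheme are arbitrary, and Python threads only stand in for real parallelism. Still, it shows the shape of the two approaches: a serial, deterministic scan versus many noisy glances combined by a vote.

```python
# Toy contrast between a serial, deterministic decision and a
# parallel, stochastic one. Purely illustrative; not a real classifier.
import random
from concurrent.futures import ThreadPoolExecutor

IMAGE = [random.random() for _ in range(10_000)]  # stand-in for pixel values

def cpu_style_decision(pixels):
    """Scan every pixel one at a time; always the same answer for the same input."""
    total = 0.0
    for p in pixels:               # one pixel per step, like the CPU in the analogy
        total += p
    return total / len(pixels) > 0.5

def brain_style_decision(pixels, n_units=8):
    """Many noisy 'units' each glance at a random patch at the same time,
    then the answer is a majority vote; repeated runs can disagree."""
    def noisy_unit(_):
        patch = random.sample(pixels, 100)             # a random glimpse
        return sum(patch) / len(patch) + random.gauss(0, 0.05) > 0.5
    with ThreadPoolExecutor(max_workers=n_units) as pool:
        votes = list(pool.map(noisy_unit, range(n_units)))
    return sum(votes) > n_units // 2

print("CPU-style  :", cpu_style_decision(IMAGE))
print("Brain-style:", brain_style_decision(IMAGE))
```

Run it a few times on the same image: the CPU-style answer never changes, while the brain-style answer occasionally does, which is exactly the deterministic-versus-stochastic distinction described above.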

Nothing has changed the course of human history more than the evolution of computing in the 20th century. Yet now, we’re in the midst of perhaps an even greater change, where robots are starting to mimic their masters, just like in the old movies.

Will they ever become us, though? It’s hard to say. Hopefully, if that happened, it would go a little better than Westworld.
