Brain Computer Interfaces: The reciprocal role of science fiction and reality

Nick Halper
Published in The Startup
17 min read · Jun 23, 2020

The brain is the final frontier of human anatomy. That isn’t to say that we understand everything happening in every other part of our body. But for the most part I can tell you how the muscles in my arm work and what my stomach does when I eat a burrito. We can build an artificial kidney, a robotic arm, or even grow a new heart, but this is not true of the brain. The brain is an incredibly complex organ. Each of its roughly 86 billion neurons connects to thousands of others, creating over 100 trillion connections. This complex web depends on precise timing and electro-chemical processes whose basic science we barely understand. It’s no wonder that we haven’t yet grasped it all.

This article covers the history of brain computer interfaces in science fiction and compares it to the science of the time, showing the interactions between the two.

Early Medicine

Naturally, humans have always been fascinated by the brain, at least since we have understood its importance. Early Egyptians regarded the brain as ‘cranial stuffing’: something that could become problematic and cause headaches, but not worth any other thought, much as we think about the appendix now. Instead, the idea at the time was that the heart was responsible for our thoughts and feelings.

It wasn’t until much later (around 162 AD) that the physician and anatomist Galen looked at the gladiators coming in for treatment and thought, “All these people getting hit in the head with swords aren’t thinking straight. Maybe the brain is responsible for our thoughts.” He was, of course, barred from testing the idea directly, since human dissection was banned in Rome, so he pursued that line of thinking through animal dissections instead.

Later, in 1543, Vesalius published De Humani Corporis Fabrica (On the Fabric of the Human Body), a book considered the foundation of modern anatomy. In it he proposed that the brain was responsible for sensation and movement, acting through the network of nerves that stretched from the brain throughout the human body. This was a monumental milestone in the development of neuroscience.

Early Machines and Stories About Them

Humans have been thinking about thinking for a long time, but we have been captivated by machines even longer. In fact, the use of tools stretches so far back that it technically predates Homo sapiens as a species. And what do humans do to the things we love? We personify them. From the beginning of recorded history, we see people making machines in their own image: puppets and complex statues that use mechanics to mimic human movement or sound, and fantastical descriptions of mechanical beings, or automatons, that mimic people.

The relationship between our understanding of machines and our understanding of our own bodies is reciprocal. Humans built machines based on heat and movement, so it was natural to assume that our bodies functioned in the same way. This is likely why the heart was originally thought to be the center of our being: it was mechanical, and it moved stuff we could see. Thus, when we started to imagine machines that looked, acted, or even thought like a human being, those machines needed to have a heart…or at least some blood. Talos, the bronze defender of Crete and one of the earliest mentions of an intelligent machine, is said to have had a single vein of ichor that gave it its human qualities.

Talos

It is clear that the idea of a machine being controlled by the thoughts or soul of a human has permeated humanity ever since we have had even a basic understanding of both parts.

Brain Computer Interfaces

What is a brain computer interface (BCI)?

It isn’t just the idea of controlling computers with our bodies. It’s a common joke in the neuroscience community that the best interface between a computer and the brain today is the human body. After all, I interface with my computer every day.

Similarly, assistive devices like a switch that turns off the lights at the snap of my fingers, or a camera that tracks my eye movement to control the position of a cursor, are closer, but they still require control of my body. This article is about devices that require no control over any other part of the body to operate.

A Shocking Discovery — It’s Alive!

Nearly all modern brain computer interfaces rely on the electrical activity of the brain, but we didn’t always know this. There was no association between electricity and the brain until Luigi and Lucia Galvani, a husband-and-wife research team, discovered in 1780 that the muscles of dead frog legs twitched when they applied electricity to them. While it was understood that nerves carried signals to induce muscle movement, it was not understood exactly how. The discovery that this ‘animal electricity’ was involved was ‘shocking’.

What was ‘science fiction’ doing in 1780? Well, humanity was sitting in the Age of Reason, so most science fiction focused on speculation about new scientific discoveries. A lot of it was an expansion of the knowledge of the day: discoveries of strange new continents, travel to the moon or other planets, or the conceptualization of time travel. Not a lot of brain stuff. In fact, one of the first significant mentions of the brain in science fiction comes from the writer widely regarded as the founder of modern science fiction, Mary Shelley. Her 1818 description of Dr. Frankenstein giving a dead brain life in a newly assembled body is the closest we get to the core concepts behind brain machine interfaces. Her thought of being able to ‘wire up’ a brain into another body was novel. After all, a brain transplant was not a real surgery.

The Amazing Discovery of EEG

It was quite a few years before science made its next advancement in brain interfacing technology. Nearly a century later, in 1875, Richard Caton used a galvanometer (named after our friend Luigi) to find that animal brains had electrical activity on their surface. More importantly, he found that this electrical activity changed depending on what the animal was doing. Sleeping, for example, produced different electrical patterns than eating. This was the first demonstration that the brain’s electrical activity tracked the activity of the body.

Unfortunately for Caton (and us), nobody cared. Despite reporting his findings to the British Medical Association, his work was broadly ignored. That is, until Hans Berger picked it up in 1924, using it to drive his field-defining discovery of electroencephalography (EEG): the ability to read ‘brain waves’, as he called them, from an intact human head.

First human EEG traces from Hans Berger. The bottom is a timing signal.

Immediately, sci-fi was on it. A relatively obscure piece of sci-fi was published not two years later, an ocean away, by a now not-so-obscure author named Edmond Hamilton, the first person to mention the concept of a brain interface in fiction. ‘The Metal Giants’, published in Weird Tales in 1926, described an artificial brain that completely mimicked a biological brain, with all of the complex patterns of electrical activity that made it conscious. He goes on to describe an artificial eye: a machine with an artificial retina that could transmit electrical pulses to this brain and allow it to see.

Hamilton outdid himself just two years later when he described something even more akin to today’s brain machine interfaces. In ‘The Comet Doom’, a short story featured in Amazing Stories, he describes placing a human brain into an artificial solution to keep it alive while carefully wiring it up to a metal robot. These connections allowed the brain to control the metal robot body as if it were its own.

Brain In a Vat

Hamilton’s description is reminiscent of a common trope. This concept of ‘the isolated brain’, or ‘brain in a vat’ for the more sophisticated among us, describes exactly what the name suggests: a living brain, all by itself.

A brain in a vat believes it is walking outside.

The concept of a lonely brain in a fluid-filled vat was popularized by an author we all know and love: H.P. Lovecraft. His novella, The Whisperer in Darkness, is known for introducing the Mi-Go, the winged fungal creatures of the Cthulhu mythos. More importantly, though, it describes the act of taking a human’s brain and putting it in a cylinder where it can live indefinitely. While this is widely credited as the first literary example of the concept, we can actually find mentions of it as early as 1860 in Louis Ulbach’s ‘Le Prince Bonaficio’, in which brains are scooped out with spoons and put in jars to promote ‘rest’, the implication being that a brain could be put back into its body, leaving one well rested and ready to work. It was what the brain did in the vat that made Lovecraft’s story so pioneering. A quote from the story: “My brain is in the cylinder, and I see, hear, and speak through these electronic vibrators.”

So where was neuroscience while sci-fi writers were dreaming about controlling robots and creating artificial brains? We knew electricity was involved, but our tools for reading those electrical signals were still in their infancy. Hans Berger, for example, was using a device that had been invented only a few decades prior. Caton and Berger had only just demonstrated that brain activity related to the actions of the body at all. The idea of a sophisticated array of sensors that could read and interpret these signals was leaps and bounds ahead of our understanding at the time, especially considering that electronic computing, as we know it, did not yet exist.

Telemetry and Octopus Arms

So where did sci-fi venture next? Hamilton and Lovecraft had already described the maximally sophisticated brain computer interface: one that used only a brain and could process all of the inputs and outputs required for full interaction with the world, no human body needed.

Over the next few decades, we saw many creative adaptations that built upon the early ideas of Hamilton and Lovecraft. In the 1940s we got Captain Future, another series by Hamilton, which gives us Professor Simon Wright, a brain in a vat that controls a robotic vehicle and non-human arms. This is an interesting extension: it is the first depiction of using a brain to control something we don’t normally possess.

Not long after, in the 1950s, we got ‘Call Me Joe’ by Poul Anderson, in which an artificial lifeform is controlled remotely by brainwaves from a human. While the story goes beyond science with elements of telepathy and similar non-technological features, it advances the concept by using the brain to control things at a distance. It is around this time that we see one of the first movies to feature a brain computer interface: Donovan’s Brain. In this film a brain is kept alive in a vat and is capable of communicating and interacting with the outside world, much like we saw in The Whisperer in Darkness.

ENIAC and the First BCI

That said, none of these sci-fi developments even came close to the world-shifting change happening at the time: the invention of the digital programmable computer. The field of computation soared during the 1930s, ’40s, and ’50s. It was during this time that we saw the first electromechanical computers, and then the Electronic Numerical Integrator and Computer (ENIAC), a machine for doing math programmed through a series of patch cables and switches. Not long after, the Manchester Baby became the first computer to run a stored program. Neuroscience, meanwhile, was moving at a breakneck pace. Beyond many developments in general neuroscience, we saw a drastic increase in knowledge of the brain’s mechanisms of action. Scientists were beginning to understand action potentials and their physiology, leading to a much better picture of the brain, its structures, and how they communicate with each other. It was against this backdrop that the term ‘neuroscience’ itself was coined at MIT, and dozens of programs at major universities came into existence to study this ‘new’ field. More importantly, we also saw the first brain computer interface in use.

Surprisingly, the first BCI didn’t blink to life in a basement laboratory on some professor’s tinkering bench. The first human to move a machine with their mind wasn’t sitting in a military research facility, nor at the top of some mad scientist’s tower. No, the first brain computer interface came from a musician. It makes some sense: many of the tools designed to record, filter, or play back audio are directly applicable to the signals we record from the brain. In fact, at the time, the most important tool most neuroscientists had was their ears. Alvin Lucier was a composer (and music professor, and my personal hero) who created an experimental piece called ‘Music for Solo Performer’ (1965), in which his brain waves, played out of speakers, moved other mechanical instruments through the movement of the sound. He made use of alpha waves, one of the earliest discovered brain wave types; these easy-to-detect waves serve as the ‘hello world’ of brain interfaces today. While pioneering and impressive, Alvin’s interface was less about leveraging a computer’s number-crunching capabilities and more about using sound as an effector, so some may argue that this isn’t a brain computer interface as the term was intended. This is especially true given that the term hadn’t yet been coined.

Alvin Lucier in ‘Music for Solo Performer’
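Detecting those alpha waves remains the standard first exercise. Below is a minimal Python sketch of how one might do it today: band-pass a window of EEG to the 8-12 Hz alpha range and compare its power against a baseline. The sampling rate, the 2x threshold, and the synthetic ‘eyes closed’ signal are all illustrative assumptions, not a real device protocol.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed sampling rate in Hz

def alpha_power(eeg, fs=FS):
    """Band-pass to the 8-12 Hz alpha band and return mean power."""
    sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
    return float(np.mean(sosfiltfilt(sos, eeg) ** 2))

# Closing your eyes boosts alpha; comparing against an eyes-open
# baseline gives a trigger, the same contrast Lucier exploited on stage.
rng = np.random.default_rng(0)
t = np.arange(5 * FS) / FS
eyes_open = rng.normal(0, 1, t.size)                      # broadband noise
eyes_closed = eyes_open + 3 * np.sin(2 * np.pi * 10 * t)  # add a 10 Hz rhythm

if alpha_power(eyes_closed) > 2 * alpha_power(eyes_open):
    print("alpha detected: fire the speakers")
```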

In fact, the phrase still wasn’t in use when we started to see books describing brain computer interfaces in more accurate scientific detail. Michael Crichton’s book The Terminal Man, for example, describes a man who experiences severe seizures that cause blackouts and amnesia. In the book, a doctor proposes him as the recipient of a new brain implant technology in which 40 electrodes are placed into the brain. The medical staff then test each electrode by stimulating the brain through it to see which one will stop the seizures. After the initial test, a small computer is used to determine when he is having a seizure and stimulates the appropriate electrode at the appropriate time, like a brain pacemaker. The implant goes on to be the focal point of the book: the man finds he can trigger the stimulation to elicit pleasure and goes on a sort of rampage because of it, but neuroethics is a different article for another time. What’s impressive is Crichton’s premonition.

While scientists throughout the 1940s, ’50s, and ’60s had learned a lot about the brain by stimulating it with electricity, many of the effects were widespread and unpredictable. The first scientist to really demonstrate precise control was the Spanish physiologist José Delgado. He performed ‘theatrical science’ in which he controlled a variety of animals, and even people, using radio-controlled electrodes implanted in the brain. His work culminated in the terrifyingly titled 1969 book ‘Physical Control of the Mind: Toward a Psychocivilized Society’. When Crichton was writing his novel, though, the only published results on using these procedures as chronic implants for disease were written in Russian, and US scientists were only just beginning to attempt them. Despite that, Crichton’s approach nearly perfectly describes modern deep brain stimulation systems: Boston Scientific got approval for its system in 2019, and NeuroPace has a model even more similar to the one in the book. If you went to the hospital right now to get a device like these put in, the procedure would approximately follow the one Crichton laid out.
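The core logic of such a closed-loop ‘brain pacemaker’ is simple enough to sketch. Here is a toy Python version, assuming a made-up detection threshold, line length as the seizure feature (a classic, cheap marker, though real devices use far more sophisticated, clinician-tuned detectors), and a placeholder stimulate() function standing in for the hardware.

```python
import numpy as np

THRESHOLD = 3.0  # illustrative: mean absolute sample-to-sample change

def line_length(window):
    """Line length, a simple classic seizure-detection feature."""
    return float(np.mean(np.abs(np.diff(window))))

def stimulate(electrode):
    print(f"stimulating through electrode {electrode}")  # hardware stand-in

def run_closed_loop(stream, best_electrode, window_size=250):
    """Consume samples; stimulate whenever a window looks seizure-like."""
    buffer = []
    for sample in stream:
        buffer.append(sample)
        if len(buffer) == window_size:
            if line_length(np.asarray(buffer)) > THRESHOLD:
                stimulate(best_electrode)
            buffer.clear()

# Quiet background activity followed by a burst of large, fast swings.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 10, 500)])
run_closed_loop(signal, best_electrode=7)
```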

Of course, physical control of the mind, as Delgado would put it, is all about input: creating sensations or movements through stimulation. When we think of brain computer interfaces, though, we usually think about output. How do we influence the world? That is what Jacques Vidal addressed when he coined the term in his 1973 paper ‘Toward Direct Brain-Computer Communication’. In it, Vidal lays out the concept of reading information from the brain, interpreting it with a computer, and using it to control things like ‘prosthetics or spaceships’.

The ARPA network; he published right around the time DARPA gained its “D”
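Vidal’s definition boils down to a three-stage loop: acquire a brain signal, interpret it with a computer, and act on the world. The Python sketch below walks that loop with a made-up two-class band-power decoder and random data as stand-ins; note that Vidal’s own system used visual evoked potentials, not the motor-imagery-style contrast assumed here.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def band_power(window, lo, hi, fs=FS):
    """Average spectral power of a window between lo and hi Hz."""
    freqs, psd = welch(window, fs=fs, nperseg=len(window))
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def decode(left_ch, right_ch):
    """Imagined movement suppresses the 8-12 Hz rhythm over the
    opposite hemisphere, so the quieter side reveals the intent."""
    if band_power(left_ch, 8, 12) < band_power(right_ch, 8, 12):
        return "RIGHT"
    return "LEFT"

# One decoding cycle per second of fake 'EEG': read, interpret, act.
rng = np.random.default_rng(0)
cursor_x = 0
for _ in range(10):
    left, right = rng.normal(size=(2, FS))  # stand-ins for real channels
    cursor_x += 1 if decode(left, right) == "RIGHT" else -1
print("cursor position:", cursor_x)
```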

Of course, Vidal didn’t start out by controlling spaceships. His first BCI challenge, as he put it, was to move a cursor on a screen. Just a year later, DARPA revealed the Close-Coupled Man-Machine Systems (or Biocybernetics) program. So while Vidal was challenging scientists to create brain-controlled prosthetics, DARPA was aiming to make them a reality. How was science fiction taking this news?

There’s an Implant for That

Probably the most famous work featuring brain computer interfaces in science fiction, and maybe one of the most famous works of sci-fi, period, is William Gibson’s novel Neuromancer. Neuromancer came out in 1984, a fantastic year to release a book about brain hacking, and described a whole host of new concepts and inventions that became the cornerstone of how sci-fi, and therefore society, thought about brain computer interfaces. There are too many to name, but the big ones include implants that can project the subject’s thoughts visually, a virtual reality world accessed by implant, and all sorts of ability-enhancing implants for reflexes, memory, vision, and more.

via Unhide School, Rafael Moco and Milton Menezes

Around the time Neuromancer came out, scientists were really beginning to branch out with their endpoints. It was during this period that researchers demonstrated simple two- or three-degree-of-freedom control of basic mechanical arms and small robots. Not yet the control of humanoid robots we had hoped for. One of Neuromancer’s most pronounced inventions was the vision implant, and it wasn’t until 1999 that we saw a big breakthrough in vision research: a group of scientists at the University of California, Berkeley reproduced images a cat was seeing purely from electrodes recording its visual pathway. Not exactly being jacked into the Matrix, but it was a start.
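The style of decoding behind that cat experiment can be sketched in a few lines: treat each reconstructed pixel as a learned weighted sum of neural firing rates. The Python sketch below uses synthetic data and ordinary least squares purely for illustration; the actual study recorded from the cat’s visual thalamus, and its reconstructions were far blurrier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_pixels, n_frames = 60, 16 * 16, 500

# Synthetic stand-ins: a hidden linear encoding from image to firing rates.
encoding = rng.normal(size=(n_neurons, n_pixels))
frames = rng.normal(size=(n_frames, n_pixels))  # the 'movie' the cat watches
rates = frames @ encoding.T + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Fit decoding weights on a training split by least squares...
W, *_ = np.linalg.lstsq(rates[:400], frames[:400], rcond=None)
# ...then reconstruct held-out frames from neural activity alone.
reconstructed = rates[400:] @ W
corr = np.corrcoef(reconstructed.ravel(), frames[400:].ravel())[0, 1]
print(f"pixel correlation on held-out frames: {corr:.2f}")
```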

The State and the Black Mirror

This early work in brain computer interfacing fueled public and government interest, leading DARPA to take a deeper dive with its 2002 Brain Machine Interface program. This program, and the brain machine interface programs that followed, would go on to fund developments that expanded upon those first breakthroughs: increasingly higher-resolution data for finer control of neuroprosthetics, allowing control of hand movement, for example, and neural interfaces capable of giving the user sufficient vision to navigate a room. Many of these technologies would go on to be commercialized. Some successfully.

With scientists desperately trying to make dreams of telepathy, robotic bodies, and cognitive implants a reality, what is today’s sci-fi take on brain computer interfacing?

Some of the most recent examples of brain computer interfaces show up in Black Mirror, a show popular for showcasing the negative consequences of future technology. One of the episodes rated most highly for its ‘terrifying’ factor is ‘Men Against Fire’. In this episode, soldiers all receive a neural implant called MASS. It controls the sensitivity of the senses, allows communication between soldiers, and, most notably, distorts the faces of enemy targets to help ensure that the soldiers won’t have second thoughts about following their orders. So how close are we to something like this today? Unlike a lot of super-futuristic, Matrix-like technologies, this one is closer than you might think.

One of the most hotly pursued aspects of brain computer interfacing is communication. Several companies, labs, and governments have programs in place to increase communication speed using BCI. The first goal, of course, is to help those with disorders or injuries affecting their ability to communicate, but the second step is the healthy public. After all, isn’t that what most technology has been about? Getting things done faster. The phone was faster than a letter, the internet is faster than the library, and your smartphone is there to deliver it all on demand. There have been several demonstrations and competitions of virtual typing speed using a BCI, but most innovators seek to skip the keyboard entirely and decode internal speech directly to text, leveraging recent progress in natural language processing technology like that behind Google Home and Amazon Alexa.
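A toy sketch shows why language models help here: a noisy neural classifier over letters gets rescored by a simple letter-transition prior, the same idea (at a vastly smaller scale) as the NLP systems mentioned above. Every probability below is invented for illustration, and the classify() stub stands in for a real neural decoder.

```python
import numpy as np

LETTERS = list("abcdefghijklmnopqrstuvwxyz")
rng = np.random.default_rng(0)

def classify(neural_window):
    """Stand-in neural decoder: a noisy P(letter | brain data)."""
    p = rng.dirichlet(np.ones(len(LETTERS)))
    return dict(zip(LETTERS, p))

def transition_prior(prev):
    """Stand-in language model: P(letter | previous letter)."""
    p = np.ones(len(LETTERS))
    if prev == "q":
        p[LETTERS.index("u")] = 50.0  # 'u' almost always follows 'q'
    return dict(zip(LETTERS, p / p.sum()))

def decode_letter(neural_window, prev):
    """Pick the letter maximizing decoder evidence times language prior."""
    likelihood = classify(neural_window)
    prior = transition_prior(prev)
    return max(LETTERS, key=lambda c: likelihood[c] * prior[c])

# After a 'q', the prior pulls ambiguous neural evidence toward 'u'.
print(decode_letter(neural_window=None, prev="q"))
```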

What about the other features? Enhanced senses? Several companies and labs have various forms of vision prosthetics, but the most advanced are just taking form now. New prosthetics directly stimulate the visual cortex, drawing shapes over the cortical surface; the result is a low-resolution visual representation of what was drawn onto your brain. The goal is to draw a video feed of obstacles in one’s environment, allowing a blind person to navigate. Future developments call for more and more electrodes providing finer and finer resolution, eventually correcting the low-fi nature of the implants. Even with current technology, these companies have demonstrated a form of infrared vision simply by swapping the color camera for an IR-capable device.
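At its simplest, the mapping from camera to cortex is aggressive downsampling. Here is a hypothetical Python sketch: reduce a grayscale frame to one brightness value per electrode and stimulate only the bright spots. The 10x10 grid and the stimulate_grid() stub are assumptions for illustration; real systems must also account for the brain’s distorted mapping of visual space.

```python
import numpy as np

GRID = 10  # hypothetical 10x10 electrode array: a 100-'pixel' image

def frame_to_pattern(frame, grid=GRID):
    """Downsample a grayscale frame to one brightness value per
    electrode, then keep only the bright spots for stimulation."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[: bh * grid, : bw * grid].reshape(grid, bh, grid, bw)
    brightness = blocks.mean(axis=(1, 3))
    return brightness > brightness.mean()

def stimulate_grid(pattern):
    """Stand-in for the implant: render which electrodes would fire."""
    for row in pattern:
        print("".join("#" if on else "." for on in row))

frame = np.zeros((480, 640))
frame[100:380, 250:400] = 1.0  # a bright doorway-shaped obstacle
stimulate_grid(frame_to_pattern(frame))
```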

So what about the creepiest aspect of the MASS implant: the deformation of enemy faces? This one goes in the ‘plausible’ bin. A region of the brain known as the fusiform face area controls human perception of faces, and it is theoretically possible to block activity in this area and perceive other human faces as deformed. Luckily, doing this selectively to your enemies doesn’t yet seem feasible, so we’ll leave that part to the producers of Black Mirror.

Your Brain on Industry

Brain computer interfaces are entering their golden age of development. This is due to a confluence of factors, but the biggest is simply focused interest. Humans are excellent at picking a problem and drilling into it until it’s solved, just as we did with space travel and the human genome. In 2013, the US government announced a grand challenge called the BRAIN Initiative. In 2014, its working group set out a 10-year plan: the first half focused on creating new platform technologies to access and study the brain, and the second half on actually applying those technologies. Within a year or so, most other major governments, including Australia, Russia, China, Japan, South Korea, Israel, Iran, Canada, France, Germany, and the rest of the EU, launched brain initiatives of their own. In 2016, these groups united under a worldwide brain initiative to further humankind’s understanding of the brain. This year marks the halfway point, the transition from technology to application. Over the next 5-10 years, we will see the most influential work in accessing and utilizing the human brain that the world has ever seen.

So what does the future hold for brain computer interfaces? Hopefully some sci-fi writer can dream it up for us. More likely, though, some Silicon Valley executive will do it instead. The majority of the technology described above was funded by public research, but most of it was developed in collaboration with private companies, and every piece of intellectual property generated, from the mundane and clinical to the futuristic and consumer-facing, is available to them. Now, more than ever, there is a driving industry interest in seeing brain computer interfaces become a reality. Several of the major tech giants, such as Google and Facebook, have shown their commitment to bringing brain computer interfaces to consumers. Similarly, billionaires with ‘extra time’ on their hands, such as Elon Musk and Bryan Johnson, have thrown their hats in the ring. While one can only speculate about the motives behind this private interest, one thing is for sure: brain computer interfaces are coming.
