How to build a brain interface — and why we should connect our minds
A guide to making the Neural Lace a reality
Communication helped humanity progress to where we are today: from evolving the ability to speak and understand gestures, to building tools such as writing instruments, the printing press, radio, computers and the internet. Today, the majority of humanity’s technological and creative output requires dexterously tapping one’s fingers on a keyboard — and this is where things are poised to change.
Enter the ‘neural lace’, a fictional device from Iain M. Banks’ Culture novels that connects to the human brain, allowing people to communicate, store and retrieve information, interface with machines and regulate their biological functions. Similarly, the author Ramez Naam describes a wirelessly linked nano-drug called Nexus that allows opt-in mind-to-mind communication, collaboration and an augmentation of humanity’s intelligence and abilities.
Why am I writing about this? Today there are already some incredible companies seriously working on creating this technology, including Kernel, Neuralink, and the numerous research groups I will mention. This could be the most significant technological leap humanity takes in the coming years, and the enormity of both the engineering challenges and the social implications of such a technology can be extremely daunting, yet, as you will see, equally inspiring.
In this article I will pull the ‘neural lace’ out of the realm of sci-fi and explain how we could create a brain interface: from first principles, to the technical constraints of today’s physics and the breakthroughs currently being developed, to how we would extract semantic data from our thoughts. And lastly, why connecting our brains with this technology could, counterintuitively, make us more human than ever.
Let’s dive straight into the technology… (to skip straight to the why, jump to the second half of this article)
Starting at the neuron
First off, we have to be able to directly send or receive data from a part of our brains, and to do this we’d need to both sense and trigger the firing of individual neurons. With these constraints, I will put aside techniques such as EEG, which can only measure the aggregate signal from billions of synchronously firing neurons, and fMRI, which measures neuronal activity indirectly through blood flow, and so is read-only and has relatively low spatial and temporal resolution.
Neurons operate at near-perfect thermodynamic efficiency, which is great from an evolutionary standpoint for mammals like us with a high brain-to-body-mass ratio, but it also means their signals are extremely weak. One reliable way to detect a neuron firing is through direct contact: measuring the action potential, or ionic current, across the cell membrane. The tool used to do this is called a patch clamp, which uses suction to physically attach to a neuron, often intentionally breaking the cell membrane in the process. As patch clamps are unsuitable for the large-scale, non-invasive and permanent arrays a brain-computer interface requires, we can rule them out for now and move on.
An alternative is to use an electrode placed close enough to a neuron. While such an electrode will pick up the firing of multiple nearby neurons, the individual sources can be separated computationally — a process known as spike sorting — especially if the electrodes are placed in a fixed array. But getting the signal from these electrodes out of the brain is a challenge, and one such way is with wiring.
The wired approach
In the book “Excession” by Iain M. Banks, we can get some inspiration for what seems to be some incredibly fine wiring, when one of the characters stumbles upon a neural lace in person:
“It was a little bundle of what looked like thin, glisteningly blue threads, lying in a shallow bowl; a net, like something you’d put on the end of a stick and go fishing for little fish in a stream. She tried to pick it up; it was impossibly slinky and the material slipped through her fingers like oil; the holes in the net were just too small to put a finger-tip through. Eventually she had to tip the bowl up and pour the blue mesh into her palm.”
Looking at multi-electrode arrays used in the field today, we can get an understanding of some of the wiring requirements and constraints.
These arrays are commonly spikes grouped into tight grids, with electrodes either at the tips or at multiple contact points per spike. Arrays such as the Utah array typically have up to 100 electrodes, so the wiring here is complex — but not the limiting factor. With a rigid array, the limitation is mostly how many electrodes can be manufactured into a structure small enough to rest in the brain without causing damage, scarring or inflammation.
Thin-film probes offer some flexibility and can be easier to manufacture, but their confinement to a two-dimensional plane still significantly limits the number of electrodes and traces, and by folding or layering one would quickly lose the benefits of that flexibility.
Another, more ‘distributed’ and less dimensionally constrained electrode array being tested is a syringe-injectable mesh. This has an open, flexible structure similar to brain tissue, allowing neurons to grow through the structure with less risk of inflammation or scarring.
While these seem like promising signs of progress, the difficulty of inserting these wired arrays safely and permanently rises steeply with the number of points we are trying to interface and the connecting wires required, leaving it hard to imagine how we could ever match the roughly 100 million connections of the human eye.
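To put that scale gap in numbers, here is a quick sketch using the rough figures above (~100 electrodes per Utah-style array versus ~100 million connections in the eye):

```python
# Scale gap between today's rigid arrays and the human eye,
# using the rough figures quoted in the text.
utah_electrodes = 100           # electrodes in a typical Utah array
eye_connections = 100_000_000   # rough connection count of the human eye

arrays_needed = eye_connections // utah_electrodes
print(f"Utah-style arrays needed to match the eye: {arrays_needed:,}")
```

A million surgically placed arrays is clearly not a viable path, which motivates the very different approaches below.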
While the hopes of addressing almost every single neuron directly seem dashed, there is some light at the end of the tunnel. The brain already has a near perfect physical pathway providing access to just about every neuron. This network of pathways co-evolved as the brain developed, ensuring every neuron has a fresh supply of oxygen and nutrients, and waste removal — the cardiovascular network.
This is currently the pathway we use to send drugs and medication as one-way instructions to alter the operation of our brain. Despite all the advancements in the field, for the purposes of brain communication this is an inherently limited approach — comparable to the indiscriminate dropping of propaganda leaflets that took years to print, when what we’re hoping to achieve is instant messaging.
Using this cardiovascular network for wired access to neurons may not be that inconceivable. The diameter of the finest capillaries is roughly 10µm, whereas carbon nanotubes, extremely strong structures that could carry signals from nearby neurons, can be roughly 10,000x narrower, meaning that as long as these do not block blood flow, there could be ample room. We could imagine a self-constructing and self-healing network, with nodes situated inside larger vessels that aggregate, amplify or digitize signals, able to grow, or harmlessly break down, nanotube branches into stable and flexible structures using nothing but insignificantly small amounts of carbon and energy from the bloodstream. And of utmost importance, they would need to ensure healthy operation of the cardiovascular system, or even maintain it in a state healthier than without these networks.
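A back-of-the-envelope check of that geometry, assuming a ~10 µm capillary and a ~1 nm nanotube (the order-of-magnitude figures above, not measurements):

```python
# Rough scale comparison between a fine capillary and a carbon nanotube.
# Both diameters are order-of-magnitude assumptions from the text.
capillary_d = 10e-6   # ~10 µm capillary diameter
nanotube_d = 1e-9     # ~1 nm nanotube diameter (~10,000x narrower)

area_ratio = (capillary_d / nanotube_d) ** 2
print(f"Diameter ratio:      {capillary_d / nanotube_d:,.0f}x")
print(f"Cross-section ratio: {area_ratio:,.0f}x")
# A single nanotube occupies ~1e-8 of the capillary's cross-section,
# so even thousands of them would barely perturb blood flow.
```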
This effort would require an unprecedented mastery of nanotechnology and synthetic biology and involve some of the most complicated wiring schemes we could imagine. While this may one day be possible, there may be some intermediary steps that can be taken until then.
A company called Synchron is already entering clinical trials for a device that can be delivered less invasively through a vein via a catheter, and is designed to self-expand inside one of the larger blood vessels, where it can take measurements of nearby neurons.
Another approach for brain interfaces to avoid the complexity of wiring could be to use some type of transducer, that can be more easily placed in the proximity of a neuron, and then convert the neuron’s weak signal into another medium that can more easily be transmitted to and from a device outside the brain. Let’s entertain some of the options…
Why not really small Bluetooth chips?
Let’s imagine millions of nano-scale chips, each able to wirelessly communicate. We already know a lot about protocols for addressing billions of nodes, mesh networking and techniques to share the EM spectrum. And it turns out that the most interesting neurons to interface with for higher level thought processes would likely be in the outer layers of the brain, closer to the surface, which should hopefully help with the signal.
But even if we could compress all this capability into such small packaging, we quickly bump into some hard physical limits, namely:
- The wireless signal strength drops off sharply with distance, especially in the human body, which is composed mostly of water
- Antenna size and efficiency are dictated by the EM wavelength, putting a hard lower limit on the size of these chips
To get the desired signal strength, size and bandwidth, we would need to significantly increase the power and/or the frequency — both resulting in more energy being required, and dissipated.
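To make the antenna constraint concrete, here is a rough sketch of the free-space wavelength and quarter-wave antenna length at a few example frequencies (idealized numbers; wavelengths inside tissue are shorter, but the losses are far worse):

```python
# Wavelength and quarter-wave antenna length at various frequencies.
# Idealized free-space figures for illustration only.
C = 3.0e8  # speed of light, m/s

for freq_hz, label in [(2.4e9, "2.4 GHz (Bluetooth)"),
                       (60e9, "60 GHz"),
                       (300e9, "300 GHz")]:
    wavelength = C / freq_hz
    antenna = wavelength / 4  # classic quarter-wave monopole
    print(f"{label:>20}: wavelength = {wavelength*1e3:7.2f} mm, "
          f"quarter-wave = {antenna*1e3:6.3f} mm")
# Even at 300 GHz a quarter-wave antenna is ~0.25 mm — enormous next to
# a ~10 µm neuron — and pushing frequency higher costs power and heat.
```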
This is where things quickly become problematic. The brain operates at unprecedented thermal efficiency compared to our current computing systems, and so providing and safely dissipating this energy from all these nodes would prevent this approach from getting off the ground for now.
A sound approach
There is another wireless approach to communicating with our neurons, that doesn’t have some of the pitfalls of electromagnetic radiation. That is, using sound — or specifically, targeted ultrasound, with the two main benefits being:
- Sound waves can travel much more easily through body tissue than electromagnetic radiation, so there is less signal drop-off
- As sound travels far more slowly than light, its wavelength at a given frequency is much smaller, so devices may have a smaller ‘antenna’, allowing lower theoretical size limits
This is the concept behind Neural Dust, an exciting project out of UC Berkeley, which consists of devices, or motes, about a quarter the size of a grain of rice. Each mote contains a piezoelectric crystal that converts the small movements caused by ultrasound waves into an electrical voltage, and a transistor to sense or stimulate an attached neuron.
One can measure the ultrasound backscatter from one of thousands of motes in a given area by using beam-forming to focus the ultrasound on each precise location in turn. A disadvantage of sound, however, is that it is significantly slower than light, so the round trip for sound to reach a mote and return takes appreciable time. Serially addressing each node and waiting for its response can therefore take considerable time, limiting how many can effectively be read from. Fortunately, like many problems, this can be tackled with math: a technique called polyadic decomposition separates the signals, enabling multiple (around 1,000) nodes to be addressed at the same time.
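A quick sketch of that timing argument, with assumed figures (sound at ~1540 m/s in soft tissue, motes a couple of centimetres deep, an illustrative mote count):

```python
# Why serially polling ultrasound motes is slow, and why parallel
# addressing (e.g. via polyadic decomposition) matters.
V_SOUND = 1540.0   # speed of sound in soft tissue, m/s (assumed)
depth = 0.02       # 2 cm mote depth (assumed)

round_trip = 2 * depth / V_SOUND
print(f"Round trip per mote: {round_trip*1e6:.1f} microseconds")

n_motes = 100_000  # illustrative mote count
serial_sweep = n_motes * round_trip
parallel_sweep = serial_sweep / 1000   # ~1000 motes resolved per ping
print(f"Serial sweep of {n_motes:,} motes:  {serial_sweep:.2f} s")
print(f"With 1000-way parallelism: {parallel_sweep*1e3:.1f} ms")
# A multi-second serial sweep is far too slow for millisecond-scale
# neural dynamics; 1000-way parallel readout brings it into range.
```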
These neural dust motes are impressively small, but still large relative to neurons. The team is reportedly working on getting them as small as 50 cubic microns, a dramatic reduction, yet still too large to be delivered safely to the brain through blood vessels, so precise placement would likely still require surgery.
There is another method to build so-called “transducers” onto neurons that could be less invasive…
Lighting up the way
Light may be an even better wireless transmission medium through the brain and skull. While it is still electromagnetic radiation, there is an opportune “optical window” of wavelengths to which body tissue is largely translucent, on the border of visible and non-visible light, called the near-infrared (NIR) range. This window sits between the absorption bands of hemoglobin and water.
So let’s entertain this idea: what if we could send nano-machines to each individual neuron that could, upon receiving this light, trigger neuronal activity, and also turn neuronal activity into detectable light? Or better yet, do this non-invasively by sending in the assembly instructions for these nano-machines, and have our neurons build these units directly onto themselves?
This is called optogenetics, and it works thanks to light-sensitive proteins called opsins. The instructions for creating these proteins can be inserted into targeted types of neurons in the brain by gene therapy, creating light-sensitive ion channels. So instead of surgery, one can deliver this “light transducer” into the brain via genetic instructions carried in a modified virus.
Opsins have already been designed that are sensitive to this NIR range of light, which can not only penetrate deeper into tissue, but also be more easily focused. So, interestingly, there are two complementary ways to accurately address the neurons you’re interested in: first, by targeting genetically specific cell types, you can select which neurons become sensitive to which light; and second, by precisely focusing the light using digital holography techniques.
While we are speaking of ultrasound and holography, there is another approach to BCIs that combines the two, and it may be ready for showtime as soon as next year.
Combining light and sound for a better fMRI
A company called OpenWater is working on using both targeted ultrasound and digital holography in a unique way, essentially measuring brain activity by detecting blood-flow changes in the brain at potentially higher resolution than existing fMRI machines, yet with a non-invasive, wearable device.
The key breakthrough is in using reverse holography to reconstruct light from a point source, even reversing the scattering that occurs as the light bounces off particles on its way through the human brain and skull. Normally this scattered light would be treated as noise, and so the signal would decrease exponentially with depth. But because scattering is a deterministic process, it is reversible, through a technique called optical phase conjugation. Thanks to relentless advancements in screen and camera manufacturing, it is becoming much easier to bring this level of light manipulation into affordable, wearable devices.
The challenge here is that one has to both send light to a specific point of interest in the brain, and then reconstruct the light reflected back from that point. The workaround, and where sound comes in, is to indiscriminately pulse light of a certain wavelength into the brain and then focus ultrasound on the point of interest, causing the light reflected from that point to be shifted to a slightly different wavelength. By then filtering by wavelength, only the light originating from the current point of interest reaches the sensors. (The order here is actually reversed: as sound travels more slowly, the ultrasound has to be sent out first, so that it arrives at the destination at the same time as the light pulse.)
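The size of that wavelength shift can be estimated with assumed figures (~850 nm NIR light, a few-MHz ultrasound focus — illustrative values, not OpenWater’s specifications). The ultrasound shifts the optical frequency by roughly its own frequency, which translates to a minuscule wavelength change — far too small for an ordinary color filter, which is why this level of holographic light manipulation is needed to pick it out:

```python
# Estimate the acousto-optic ("color") shift imparted by focused
# ultrasound. Assumed: 850 nm near-infrared light, 2 MHz ultrasound.
C = 3.0e8            # speed of light, m/s
wavelength = 850e-9  # optical wavelength, m
f_ultrasound = 2e6   # ultrasound frequency, Hz

f_light = C / wavelength                     # ~3.5e14 Hz optical frequency
frac_shift = f_ultrasound / f_light          # fractional frequency shift
d_lambda = wavelength**2 * f_ultrasound / C  # wavelength shift, m

print(f"Optical frequency: {f_light:.3e} Hz")
print(f"Fractional shift:  {frac_shift:.1e}")
print(f"Wavelength shift:  {d_lambda*1e15:.2f} femtometres")
# The shift is parts-per-billion of the wavelength — invisible to any
# simple filter, but separable interferometrically.
```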
Earlier on I wrote off fMRI-based methods because they can only take indirect measurements of regions of neuronal activity. But if this technology achieves the resolutions theorized, it could unlock extremely detailed insights into brain activity. (The founder, Mary Lou Jepsen, commonly cites work mapping fMRI activity to what someone is viewing to inspire imagined possibilities for even finer-resolution ‘wearable fMRIs’.) As for write access to the brain, Jepsen mentions that the high-power focused ultrasound used to shift the light’s wavelength can also be used to activate precise regions of the brain.
Even if this approach does not end up being the forerunner in the development of brain-computer interfaces, the benefits it could bring through cheaper and earlier detection of tumors and clots, as well as increased insight into diseases, disorders and the functioning of our bodies, could be even more commendable and awe-inspiring.
Following this, it may also soon be possible to read and write neuronal activity directly using light, removing the constraint of low temporal resolution. This is something Facebook’s Building 8 was reportedly working on, to create a “silent-speech interface”.
Now that we have some idea of how we could address individual neurons, we have to figure out how we would communicate to and from the mind. Luckily, our brains are fairly good at making sense of new inputs and outputs, and have been doing so since birth to help us interact with the world around us.
How to connect to a mind?
The discovery of the brain’s incredible neuroplasticity has led to progress in using this reconfigurability to add new inputs or senses, such as using the tongue’s surface to send light signals to the brain, or a wearable vest that converts speech into vibrations, and to control outputs such as a computer cursor or prosthetic limbs. The promise here is that with these new physical connections to our brain, we can learn to communicate and interface directly with a computer or datastream as if it were an extension of the body, or another sense.
In addressing possible routes for the emergence of ‘superintelligence’, Nick Bostrom makes some points about the role brain-computer interfaces may play. He argues that to improve the information bandwidth, we will need to not only plug in a high-bandwidth input but also upgrade the brain, as our current limitation is not the speed of input but rather how quickly the brain can make sense of the data. An example he uses is the human eye, which takes in roughly 10 million bits per second and has specially evolved, optimized wetware to process this data into meaning.
While this is true, I would argue that our current method of inputting data into our minds is to project it onto our retinas as visual symbols, or onto our cochleae as sounds, which requires processing-heavy conversions, because we must make sense of and interpret the data from its representation in the real world.
For example, the layers of neural-network processing required to turn raw color signals into edges, textures, shapes, features and eventually an abstract concept such as an “excited golden retriever” may not be required if we could bypass that and communicate the abstract concept directly. Nor does this concept need to stay abstract, or unvisualized, when output: using a generative process such as a GAN, we could render our imagined “excited golden retriever” in full glorious detail for others, potentially iterating the subtleties to our satisfaction quickly enough that the effort of conjuring up such a detailed image is imperceptible to the conjurer. This way we can offload some of the generation and decoding of lower-level features to external hardware, limiting our raw data bandwidth requirement.
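The bandwidth argument can be made concrete with a toy calculation (the figures are illustrative assumptions — a hypothetical 512-dimensional “concept” vector versus the raw image an external generative model could render from it):

```python
# Compare transmitting a compact concept/latent vector vs the image
# rendered from it. All sizes are illustrative assumptions.
latent_dims = 512
bytes_per_dim = 4                        # 32-bit float per dimension
latent_bytes = latent_dims * bytes_per_dim

width, height, channels = 1024, 1024, 3  # raw 8-bit RGB render
image_bytes = width * height * channels

ratio = image_bytes / latent_bytes
print(f"Latent vector:  {latent_bytes:,} bytes")
print(f"Rendered image: {image_bytes:,} bytes")
print(f"Savings:        {ratio:,.0f}x")
# Communicating the abstract concept and rendering it externally needs
# ~1500x less raw bandwidth than transmitting the picture itself.
```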
When considering direct communication via a neural lace, we need to remember that brains do not use standardized data storage and representation formats; rather, each of our brains develops its own distinctive representations of higher-level content. So mapping neuron firing patterns in one brain onto the semantically equivalent firing patterns of another will require decomposing them into symbols according to some shared convention that allows them to be correctly interpreted, which is precisely the function of language. This may require, or result in, us developing some new form of language, spared of the requirement of being easily encodable into interpretable sound waves via the larynx.
Why should we create a brain interface?
Humanity would not be where it is today if it weren’t for a few extra layers of cortical matter. The expensive bet that evolution placed on developing this extra gray matter made all the difference, allowing us to solve innumerable problems and reach new potentials. Extrapolating forward, there may be no limit to what expanding our minds’ potential, and being able to connect and cooperate with each other, may allow us to achieve.
Ramez Naam, in his previously mentioned novels, succinctly summarizes the virtue of such a technology being released into the world through a quote by one of the protagonists:
“We think of ourselves as individuals, but all that we have accomplished, and all that we will accomplish, is the result of groups of humans cooperating.”
We could imagine how these extra cognitive abilities can allow us to collaboratively brainstorm, design or compose new levels of invention and expression. Or how they would enable us to visualize, interpret and feel data about the world around us, to create and share new understandings, higher level abstractions and mental models that were previously impossible to fathom. We can also imagine a world without unnecessary screens or interfaces, and instead knowing the temperature of your home, health of a crop, price of a stock, status of a project or location of a loved one upon just requesting the information in your mind. And these cognitive abilities can keep the human mind in the loop as we continue to make progress in building artificial intelligences with higher level thought abstractions, ensuring a greater probability that we are aligned with and in control of the future we are building.
Beyond just the extra cognitive abilities, we can try to imagine how it might feel by drawing parallels to where we, or others, have experienced an expanded sensory input or a sense of connection to others: experiences such as emotionally connecting with another mind, or physical touch with a loved one, or conversely the aversion we have to solitary confinement, loneliness or locked-in syndrome. We can look at the value we place on the diverse sensations we can experience, from the mundane to the profound, and conversely at the agony of those who have lost the ability to feel, through injury or paralysis, or emotionally through depression. Or we can empathize with the extreme joy and bewilderment of someone blind or deaf who can see or hear for the first time.
We recognize these feelings as being the peak of human experience. If connecting and upgrading our minds is in pursuit of heightening and expanding these experiences, creating such a technology could help make us more human than ever. And before we reach that point, we will undoubtedly discover new ways to cure diseases, prolong life, and understand more about how the most complex structure in the universe, our mind, works. This is why I believe it is so important to work on this.
The path forward
Humanity has never just played by the rules, but rather worked to rewrite them to change what is physically possible. This holds especially true when expanding our cognitive abilities with brain interfaces. As we create this technology that will fundamentally change the playing field, we need to be critically vigilant that we are creating the best future for all. Watch this space.
If you enjoyed this or found it helpful, please clap away below or share; it’ll help this topic reach a broader audience.
Feel free to contact me with questions, comments or feedback at email@example.com or @justLV on Twitter.
Also, if you enjoy the technical and philosophical aspects of creating hardware products, check out some of my other writing