The Ins and Outs of Brain-to-Brain Communication

Sam Brinson
Connecting the Dots
7 min read · May 19, 2017

Can you imagine a world in which we can think thoughts at each other? No talking, no texting, typing, or hand-waving. You just think something to someone, then they experience that thought.

The world would be a very different place. But it appears we might one day see it. The pieces required for brain-to-brain communication are already in use. With a few technological innovations here and there, the final product might come along sooner than you think.

How exactly would this technology work? The basic premise goes something like this:

1: read the signal from one brain

2: figure out what it means

3: convert it into a signal another person’s brain can understand

4: introduce the signal to said person
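
A minimal, purely illustrative sketch of those four steps in code, with the caveat that every function name here is a hypothetical placeholder standing in for recording hardware, decoding models, and stimulation hardware that don’t exist yet:

```python
# Hypothetical pipeline for the four steps above. None of these functions are
# real APIs; they stand in for hardware and decoding work that would have to be built.

def brain_to_brain(sender, receiver, hardware):
    raw_signal = hardware.record(sender)               # 1: read the signal from one brain
    meaning = hardware.decode(raw_signal, sender)      # 2: figure out what it means
    stim_pattern = hardware.encode(meaning, receiver)  # 3: convert it into a signal the
                                                       #    receiver's brain can understand
    hardware.stimulate(receiver, stim_pattern)         # 4: introduce the signal to said person
```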

We’ve already accomplished a fair bit. We have robotic limbs and mouse cursors that can be controlled by thought alone. We have cochlear and retinal implants, which take sensory input and send sparks down our nerves, allowing the deaf to hear and the blind to see.

However, despite the relative success of these technologies, brain-to-brain communication requires more than has currently been achieved. The brain is massively complex and still poorly understood, and the technology is only just starting to grapple with it.

GETTING AT THE BRAIN

We’ve been able to measure brain activity, in one way or another, since the 1880s, when Italian physiologist Angelo Mosso designed his ‘human circulation balance.’ The device looked like an instrument of torture but worked, in a sense, like a seesaw. The person whose brain was to be measured would lie on a wooden plank, which would stay balanced until, Mosso surmised, they began to think, at which point blood would rush to the brain, making them top-heavy and tipping the balance.

While we don’t know how well this machine actually worked (Mosso claimed that it did), the idea that blood flow is indicative of brain activity was a sound one. Like a muscle, an area of the brain that is tasked with performing some function increases its demand for the energy that blood provides. Modern technologies such as fMRI (functional magnetic resonance imaging) rely on this principle to gain far more accurate insights than Mosso ever could.

Recent technology has also allowed us to measure electrical activity. The brain uses energy to fire neurons, which make use of small electrical impulses to communicate. That electrical data can be read using technology such as EEG (electroencephalogram), which consists of a cap with many electrodes that attach to the scalp.
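
To make “reading electrical activity” a little more concrete, here is a minimal sketch using simulated data rather than a real headset: it generates a noisy 10 Hz alpha-band signal with NumPy and estimates its power with SciPy, which is roughly the kind of processing an EEG pipeline does after amplification and digitisation. The sampling rate and amplitudes are illustrative assumptions, not figures from any particular device.

```python
import numpy as np
from scipy.signal import welch

# Simulate one second of a single EEG channel: a 10 Hz alpha rhythm buried in noise.
fs = 256                                   # sampling rate in Hz (a typical, assumed value)
t = np.arange(0, 1.0, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Estimate the power spectrum and report the mean power in the 8-12 Hz alpha band.
freqs, psd = welch(eeg, fs=fs, nperseg=fs)
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"Mean alpha-band power: {alpha_power:.2e} V^2/Hz")
```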

Both these technologies are still rather limited, however. fMRI and EEG offer only crude representations of activity in a given area of the brain, and they typically require large machines or unfashionable hats with more wires than the back of a PC. What’s more, they must contend with the skull, skin, muscle, and inner membranes that encapsulate the brain and can muddy the signals.

Recording activity at high resolution requires invasive surgery. When a portion of the skull is removed and the cortex is accessible, devices can be introduced that record activity in much finer detail, even down to the level of individual neurons. But these invasive devices come with issues of their own.

For instance, the body kicks the immune system into gear when it finds a foreign object, and the resulting response can form scar tissue around the device, limiting its ability to read signals or causing it to break down. And then the device must communicate with the outside world, which today often means cables hanging from your head.

Then there is the issue of size. There are more than 100,000 cells in each square millimeter of cortex (this includes neurons and glial cells). Despite the wealth of images of neurons by themselves, hanging out in space all spindly-like, they are actually completely surrounded by other cells, blood vessels, and the dendrites and axons of other — sometimes very distant — neurons. That’s a lot to consider if you want to shove a device in there.

Can we get at the neurons in high fidelity without so much chopping and drilling? The possibilities of brain-to-brain communication are endless, but if we have to go under the knife first, well, as neuroscientist Blake Richards points out, “People are only going to be amenable to the idea [of an implant] if they have a very serious medical condition they might get help with. … Most healthy individuals are uncomfortable with the idea of having a doctor crack open their skull.”

For brain-to-brain communication to become a reality, innovations will need to be made in the field. Here are some possibilities…

THE FUTURE OF MIND READING

Neural lace

This thin, mesh-like material is injected into the brain, making it slightly less invasive than other techniques. When the lace leaves its tiny glass syringe, it unfurls and lodges in place, at which point the minuscule electrodes woven into it can begin recording the activity of nearby neurons. Neural lace has been successfully applied to the lateral ventricle and hippocampus in rats and, promisingly, didn’t cause an immune response.

Neural dust

These ultrasound-powered sensors can both read and stimulate nerves wirelessly. Because they contain a piezoelectric crystal, the dust-sized sensors can convert ultrasound vibrations into electricity, which powers an on-board transistor. Currently they are too large to function in the brain but have been used in the peripheral nervous system. Researchers estimate that they can shrink the devices down to two thousandths of an inch (about 50 microns), which would allow them to be placed inside the brain. How they would get there is another question: perhaps surgery would still be required, or perhaps they could be delivered through the bloodstream.

Both neural lace and neural dust require a device to be nestled in amongst your neurons. Some people aren’t going to be comfortable with that, even if these devices are far superior to current technology. Bryan Johnson, the founder of Kernel, a company aiming to produce neural lace, says: “There’s no tech that exists in the world that allows you to be outside the brain and gain access to critical data. … You need to be inside the brain, inside the skull.”

And yet, another trail-blazing techno-entrepreneur is aiming to do just that.

Wearable MRI

Mary Lou Jepsen, who formerly worked at Google X and Facebook’s Oculus and cofounded One Laptop Per Child, has founded a new company named OpenWater, which is working on a wearable device that will use light to record brain and body activity.

Using an “utterly unconventional” approach to opto-electronics, the device could be the size of a ski hat or a bandage, and would contain LCDs that use infrared light to measure activity bit by bit, or voxel by voxel (voxels being, essentially, 3D pixels).

“I’m putting screens on the inside of a ski hat to read your mind,” says Jepsen. “We can look at oxygen flow really easily, because it’s LCD’s illuminated by invisible light, infrared light — the type of light you can see with night vision goggles.”

WHAT’S IT ALL MEAN?

An accurate readout of neurons is one thing; understanding what that information means, what it represents in that person’s mind, is quite another.

Everybody’s brain is slightly different. It started out different, and it got more different as you grew and experienced the world in your own unique way. So we can’t just assume that the neural code for the word “bubblegum” is the same for everyone; we’ve got to find out what it is for both sender and receiver.

That means scanning every brain that wants to converse in thought and figuring out where its vocabulary is stored. The average American knows upwards of 42,000 words, not to mention phraseology specific to their domain of interest. And how would you share a word of which the receiver has no knowledge? There’s no neural code to stimulate, and this isn’t exactly sound; it’s experience. How would you elicit a specific yet unfamiliar experience?
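
Setting that harder question aside, here is a toy sketch of the calibration problem itself, in which the “neural codes” are just made-up labels rather than real activity patterns: sender and receiver each need their own learned mapping, and a word missing from the receiver’s mapping leaves nothing to stimulate.

```python
# Invented codebooks standing in for per-person calibration scans.
sender_codebook = {"bubblegum": "pattern_S17", "hello": "pattern_S02"}
receiver_codebook = {"hello": "pattern_R41"}   # this receiver has no code for "bubblegum"

def transmit(word):
    _ = sender_codebook[word]                  # decode the word from the sender's activity
    target = receiver_codebook.get(word)       # look up the receiver's own code for it
    if target is None:
        raise ValueError(f"no neural code for {word!r} in the receiver's brain")
    return target                              # the pattern to stimulate in the receiver

print(transmit("hello"))                       # works: both brains have a learned code
try:
    transmit("bubblegum")
except ValueError as err:
    print("transmission failed:", err)         # fails: nothing to stimulate
```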

However, while full, articulate conversations between brains are still a ways off, there are some things we can already extract. For instance, a commercial EEG headset has been used to guess people’s passwords. The $800 device was worn by participants as they entered random PINs and passwords, each keystroke giving the software a better idea of which brain signal corresponds to which key. After about 200 characters had been entered, the system could accurately guess new characters the participant typed, shortening the odds of guessing a four-digit numerical PIN from one in 10,000 to one in 20.
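
The figures above are as reported; the code below is not the researchers’ method, just a minimal sketch of the general idea using scikit-learn and random stand-in features: pair each observed keystroke with the EEG feature vector recorded around it, fit a classifier, then rank guesses for new keystrokes instead of guessing blindly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training data: one 16-dimensional EEG feature vector per observed
# keystroke, labelled with the digit that was typed. Real features would come
# from the headset; these are random numbers purely to make the sketch run.
n_keystrokes, n_features = 200, 16
X_train = rng.normal(size=(n_keystrokes, n_features))
y_train = rng.integers(0, 10, size=n_keystrokes)        # digits 0-9

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new keystroke, rank the ten digits by predicted probability. Repeating
# this for each position of a PIN is what shortens the attacker's odds.
new_epoch = rng.normal(size=(1, n_features))
ranked_digits = np.argsort(clf.predict_proba(new_epoch)[0])[::-1]
print("Most likely digits, best guess first:", ranked_digits)
```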

Here’s another interesting tidbit: we’ve already achieved brain-to-brain communication, via the medium of email. That’s right, email.

I should clarify: the email was sent between two people, one in France, the other in India. The message (“hola” or “ciao”) was first converted from brain waves into binary, then sent via email to the receiver, where the process was reversed. The receiver experienced the message as flashes of light elicited by transcranial magnetic stimulation.
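
The study’s exact encoding isn’t reproduced here, but a sketch shows the general idea, under the assumption of a simple five-bit code per letter: the sender’s word becomes a string of bits, and each bit becomes a flash (a phosphene) or no flash for the receiver.

```python
# Illustrative only: the five-bit per-letter code is an assumption for this
# sketch, not necessarily the scheme used in the actual experiment.

def word_to_bits(word):
    """Map each lowercase letter to five bits (a=00000, b=00001, ...)."""
    return [int(b) for ch in word.lower() for b in format(ord(ch) - ord("a"), "05b")]

def bits_to_flashes(bits):
    """What the receiver would perceive: a flash for 1, nothing for 0."""
    return ["flash" if bit else "no flash" for bit in bits]

bits = word_to_bits("hola")
print(bits)                     # 20 bits for a four-letter word
print(bits_to_flashes(bits))
```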

Converting signals to binary and experiencing messages as flashes of light hardly constitutes what we think of as true brain-to-brain communication. I want to think a message to someone and get a message back. No binary conversions or strobe lighting.

In sum, full, articulate conversation is going to take time and further innovation. But we’re on the way; we’ve started the race. Elon Musk is in the game. Bryan Johnson has chipped in with $100 million. It seems it’s not a matter of if but when. And when it does happen, well, what exactly will a telepathic future look like? What implications and consequences are likely to arise? Will we still talk? Will our minds all converge into a single entity!? That, friends, is a topic for the next article.
