Mind Control and the Computer-Brain Interface

Brain-computer interfaces hit the mainstream news once again after the topic was raised in April 2017 at Facebook's F8 developer conference. Traditionally, the idea of mind control over objects has its provenance in science fiction and horror. Nonetheless, research into brain-computer interfaces remains of great scientific interest, as it can provide profound ways to improve the quality of life of quadriplegic patients, for example by giving them an independent means of operating a mouse interface and interacting with a computer through the power of thought.

If this all sounds a bit science fiction, it really isn't; the technology has been around for some time, albeit with the severe disadvantage of requiring intrusive brain surgery. This was because electrodes had to be implanted into specific areas of the brain where they could intercept electrical signals leaking from the neurons conducting the brain's commands. The way this works is that neurons have a long arm or extension called an axon, as shown in figure xxx.

The axon has an insulating coating of myelin, but interestingly this does not completely cover the axon. Instead, there are regular breaks in the myelin called nodes of Ranvier. This means that some of the charge being conducted along the axon leaks out through these gaps in the myelin and can be captured by electrodes on the exterior of the skull. Unfortunately, the skull being thick, most of the signal is lost or distorted, but in some cases it can be amplified to control, for example, a prosthetic limb.
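To make that signal chain concrete, here is a minimal Python sketch, on purely synthetic data, of the kind of cleanup such a scalp recording needs: a faint movement-related rhythm is buried in much larger noise, band-pass filtered around the 8–30 Hz range where motor rhythms live, and then amplified. The sampling rate, band edges, and amplitudes are illustrative assumptions, not values from any particular device.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                      # sampling rate in Hz (typical for scalp EEG)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic stand-in for a scalp recording: a faint 12 Hz "motor rhythm"
# buried in much larger background noise, mimicking the attenuation and
# distortion the skull introduces.
motor_rhythm = 2e-6 * np.sin(2 * np.pi * 12 * t)   # microvolt-scale signal
noise = 20e-6 * np.random.randn(t.size)
scalp_signal = motor_rhythm + noise

# Band-pass filter around the 8-30 Hz band where movement-related rhythms
# live, then apply gain -- the "amplification" step described in the text.
b, a = butter(4, [8, 30], btype="band", fs=fs)
filtered = filtfilt(b, a, scalp_signal)
amplified = filtered * 1e6    # express in microvolts for readability

print(f"raw RMS: {np.std(scalp_signal):.2e} V; "
      f"filtered, amplified RMS: {np.std(amplified):.2f} uV")
```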

The solution to this dilemma was to use brain-imaging technologies to scan the brain for the areas that show electrical activation when the patient thinks about a given task, such as moving their fingers. Imaging techniques like PET (positron emission tomography) and fMRI (functional magnetic resonance imaging) can highlight the precise areas of neural activity that correspond to the task, which allows the precise surgical placement of electrodes in the brain. By strategically placing the electrodes in the specific parts of the brain dedicated to the given task, the electrical command signals can be captured far more reliably. This approach has been around since the late 1960s, when work with monkeys revealed the ability to manipulate objects through mind control. Similarly, advances in brain-machine interfaces, as they were known then, allowed breakthroughs with human patients, enabling them to operate prosthetic limbs via electrodes implanted in the brain and wired to nerves in the arm. The first such sensor was implanted in the brain of a paralyzed man named Matthew Nagle in 2004. Since then, only about a dozen people have received similar implants.

Recently, there has been much success in improving brain-computer interfaces to control prosthetics, wheelchairs, and a computer mouse by the patient's thoughts alone. Then, in February 2017, a team of researchers at Stanford University released a clinical research paper demonstrating that a brain-computer interface could enable people with paralysis to type into an application displayed on a computer screen via direct brain control. Furthermore, they could do this at the highest speeds and accuracy levels reported to date.

The study involved three participants with severe limb weakness, each of whom had one or two miniature electrode arrays placed surgically in their brains to record signals from the motor cortex, the region controlling muscle movement. These electrical control signals were detected and transmitted to a computer via a cable. The computer then analyzed and translated the signals, using algorithms, into point-and-click commands guiding a cursor to characters on an onscreen keyboard.
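Real decoders of this kind typically use Kalman-filter variants, but the core idea of translating motor-cortex signals into cursor commands can be sketched as a simple linear map fitted by least squares. Everything below, from the 96-channel array size to the synthetic firing rates, is an illustrative assumption rather than the Stanford team's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for training data: firing rates of 96 recorded channels
# (one electrode array) over 1,000 time bins, plus the cursor velocity
# (vx, vy) the participant intended in each bin.
n_channels, n_bins = 96, 1000
true_tuning = rng.normal(size=(n_channels, 2))      # each channel's preferred direction
intended_velocity = rng.normal(size=(n_bins, 2))    # what the user meant to do
firing_rates = (intended_velocity @ true_tuning.T
                + rng.normal(scale=0.5, size=(n_bins, n_channels)))

# Fit a linear decoder: velocity ~ firing_rates @ W, via least squares.
W, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At run time, each new bin of firing rates becomes a cursor step.
new_rates = rng.normal(size=(1, n_channels))
vx, vy = (new_rates @ W)[0]
print(f"decoded cursor step: dx={vx:+.3f}, dy={vy:+.3f}")
```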

The participants in the experiment were able to type up to eight words per minute using this point-and-click method, without the help of any autocomplete or predictive algorithms, which is around the speed at which most people can type text on their smartphone screens.

Nonetheless, while the Stanford experiment may well be a significant improvement on other BCI methods for manipulating a cursor, as a mouse-style interface it still required an onscreen keyboard to operate. In another experiment, however, this time carried out by NASA, participants were able to type without any keyboard at all.

The NASA experiment involved a computer program that could read silently spoken words by analyzing nerve signals in the subjects' mouths and throats. The way they did this was to apply small non-intrusive sensors under the chin and on the side of the Adam's apple. From these, it is possible to detect and recognize the nerve signals and patterns from the tongue and vocal cords that correspond to specific words.

This phenomenon occurs because biological signals arise even when we are reading or speaking silently to ourselves, and the sensors do not require any actual lip or facial movement. All that is required are the faint movements of the voice box and tongue that seemingly do occur when we read or talk silently to ourselves.

To test this theory, NASA's scientists trained the software to recognize just six words and ten digits. Sensors attached to the participants captured the signals in the throat and mouth as the subjects silently said the words to themselves, and the software correctly recognized the signals 92 percent of the time. The researchers then made the task more complex by presenting the letters of the alphabet in a matrix, with each letter addressed by a unique pair of row and column coordinates. These were used to silently spell "NASA" into a web search engine using the program. What's more, this was all done by translating the electrical impulses being sent to the subjects' vocal cords, with non-intrusive sensors and no additional interface or translation device such as an onscreen keyboard.
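The source does not specify the exact layout NASA used, but the row-and-column scheme is easy to reconstruct in a short, hypothetical Python sketch: lay the alphabet out in a matrix so that a recognizer that only knows ten digits can still spell any word, including "NASA". The six-column layout below is an assumption for illustration.

```python
import string

# Hypothetical reconstruction of the coordinate scheme described above:
# place the 26 letters in a matrix and address each by a (row, column)
# pair of digits, so a recognizer limited to ten digits can spell words.
COLS = 6
matrix = [string.ascii_uppercase[i:i + COLS]
          for i in range(0, 26, COLS)]

def letter_to_coords(letter: str) -> tuple[int, int]:
    idx = string.ascii_uppercase.index(letter.upper())
    return idx // COLS + 1, idx % COLS + 1   # 1-based row, column

def spell(word: str) -> list[tuple[int, int]]:
    return [letter_to_coords(ch) for ch in word]

# Silently "saying" these digit pairs spells the word letter by letter.
print(spell("NASA"))   # [(3, 2), (1, 1), (4, 1), (1, 1)]
```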

Despite being a compelling example of thought-controlled typing, the approach might not scale well in practice, and its utility is somewhat limited to those unable to speak aloud due to injury or disease. Those able to speak aloud can already control objects and type far more easily using state-of-the-art IVR technology. It could, though, have a purpose in noisy environments where it is difficult to communicate acoustically, or conversely where privacy and not being overheard are essential.

To return to Facebook and the F8 developer conference: the company revealed that it had a team of around 60 engineers working on building a brain-computer interface. The purpose of this device will be to let a Facebook user type posts and updates with just their mind, without any intrusive implants or sensors. The Facebook strategy appears to be to use optical imaging to scan the user's brain a hundred times per second in order to detect the signals created when they are speaking silently in their head, with algorithms translating the captured signals into text.

Facebook's approach appears to be based upon the user wearing a cap or headband embedded with some sort of optical sensors. These sensors will presumably use something like the technology underpinning functional near-infrared spectroscopy, which is currently used to measure brain activity. However, even intrusive surgically implanted electrodes, which can capture signals in far higher definition than non-intrusive sensors, achieve only around 40–50% success at correctly resolving the speech signals that people are thinking about. It is therefore a mystery how Facebook will move forward using non-intrusive external sensors on a wearable cap or band.

Regina Dugan, the head of Facebook's R&D division Building 8, explained to conference attendees that the goal is to eventually allow people to type at 100 words per minute, 5X faster than typing on a phone, just using their mind. This, interestingly, is the entrepreneurial type of research that the internet giants immerse themselves in, and unsurprisingly the end purpose of the BCI is eventually to be the interface that lets people control augmented- and virtual-reality experiences with their mind instead of a screen or controller.

The plan, according to Facebook, is to eventually build non-implanted devices that can ship at scale, presumably at a price point aimed at the mass market. Facebook also needed to address the obvious questions regarding privacy (a fair point, as Facebook and the other internet giants do not have a strong record of respecting their users' private data) and the inevitable fears this research will inspire. Facebook claims: "This isn't about decoding random thoughts. This is about decoding the words you've already decided to share by sending them to the speech center of your brain."

This may well be true, but it is strange to think it has never crossed their collective minds to sneak a view of what their users are thinking when they see an advert. The real question, though, is why anyone would allow a company that makes almost all of its money from harvesting its users' personal data free access to their most intimate thoughts. It seems downright madness.

Indeed, Facebook likened it to how you take lots of photos but only share some of them. That is a tad disingenuous, because often we form words in our minds and say them silently to ourselves yet do not utter them, especially in anger or in lust, and on the whole that is a good thing.

The problems facing brain-computer interfaces are daunting, and they are not made any easier by the asymmetrical purpose of and demand for the product. On one hand, there is an extremely important and profound medical use for BCI in helping those suffering from severe paralysis and conditions such as locked-in syndrome. The benefits that advanced BCI technology could bring to these patients are incalculable and would certainly provide them with a better quality of life. In these cases, the surgical route is almost always going to be the preferred method, simply because of the precision with which electrodes can be implanted at the specific points in the subject's brain that have the highest probability of detecting high-definition brain signals. Unfortunately, in many countries the costs of surgery may well be prohibitive and therefore only available to the wealthy.

On the other hand, there is entrepreneurial research into BCI, such as Facebook's, which strives to develop solutions with a somewhat less profound purpose, such as controlling augmented and virtual reality and typing at 100 words per minute. Nonetheless, it is still vitally important research, as not every patient who could benefit from BCI will have the opportunity or the financial resources to undergo surgical implants. Hence, it is vital that Facebook and other entrepreneurial researchers strive to find technologies that can bring non-intrusive sensors and techniques to the mass market, as this looks to be the only way to bring these life-changing technologies to all the people who have a profound need for them.

Conversely, the goal of Elon Musk's Neuralink is, admirably, to concentrate first on bringing the technology to the severely disabled and only then on mass-market availability. Hence Neuralink's announcement that its focus on BCI would involve brain implants which, despite their microscopic size, will still require a surgical procedure to install them into the precise target areas in the brain. This, of course, will ultimately be a problem when trying to shift the technology into the consumer market, as it is unlikely there will be a large market for a technology requiring brain surgery. But it is not without its benefits, as developing a working solution for the disabled leads to fast-tracking of legislation and standards for the introduction to the consumer market.

Neuralink claims to be working on a timeline of three to four years for medical use and up to ten years for the commercial market; however, most experts in the field believe that to be preposterous, bearing in mind that Musk's vision for a BCI goes well beyond mere object-movement control. Indeed, Musk's vision is to integrate the brain with AI systems and applications, and even to enable a form of telepathy, much like NASA's experiment, in which the unuttered but silently spoken words in the mind are detected and relayed to another person in telepathic conversation.

Of course, it is wonderful that these entrepreneurial powerhouses are entering the field of neuroscience research generally and BCI specifically, but we shouldn't let our expectations be skewed by their phenomenal wealth and the resources they can throw at the problem. After all, this is not an area that has been starved of funding; quite the reverse, with up to $200 million being spent on brain-controlled prosthetics alone in the last 15 years. Nor is it a field short of star players: research groups at the University of Pittsburgh and Brown University are currently working on brain implants for medical purposes, and Stanford University is also involved in BCI research, as is DARPA, the research wing of the Pentagon. Indeed, DARPA's government-funded program REPAIR (Reorganization and Plasticity to Accelerate Injury Recovery) aims to understand the way neurons work and how they organize, and importantly reorganize, connections within the brain, in order to improve both our modeling of the brain and our ability to interface with it.

The point that has to be made is that just because the entrepreneurs have suddenly shown an interest, it's not as though there has been a shortage of brilliant minds and money assigned to solving the puzzles of the BCI. Indeed, the barriers to cracking the brain-computer puzzle are not to be scoffed at, as they have so far defied the best efforts of every research team on the planet in this specialized field.

The initial barrier to entry is that we are simply, astonishingly, ignorant regarding the workings of the brain. Neuroscientists know a lot about the brain, but not at the level of detail this research requires. Fortunately, we do not need to understand how the brain works in order to interact with it, and we can for now adopt a black-box technique where we deal only with known inputs and observed outputs. Indeed, at this stage we are not even interested in the inputs, only in the electrical outputs of the brain in response to a call to action such as moving the fingers. Hence, researchers have high hopes that newly evolving machine learning models, such as deep neural networks, will be able to take these signals from the brain and discover inherent patterns of activity.

This already works in practice today; however, moving a prosthetic arm or a computer cursor is a relatively easy task to train a subject to think about. For example, asking a patient with a prosthetic or a neural bypass to concentrate hard on grasping an object, closing the fingers to grip, is straightforward and can be accomplished thanks to immediate visual feedback: they can see what works and what doesn't. Thinking in NASA's very limited vocabulary of six words and a two-coordinate alphabet is at a similar level of complexity, but thinking in a natural language is off the scale of complexity. Fortunately, advances with those same multi-layer deep neural networks are producing astonishing results in natural language processing, which should accelerate the BCI learning process and extend its utility from simple object manipulation toward natural-language telepathic conversation.
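As a toy illustration of how a neural network can discover such patterns of activity, the sketch below trains a tiny one-hidden-layer network, in plain numpy, to separate two imagined states ("grasp" versus "rest") in synthetic brain-signal features. The feature counts, labels, and data are all fabricated for illustration; real BCI pipelines differ considerably.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for labeled brain-signal features: 200 trials of
# "imagine grasping" vs "rest", each summarized as 32 feature values.
X = rng.normal(size=(200, 32))
y = (X[:, :4].sum(axis=1) > 0).astype(float)   # hidden pattern to discover
X[y == 1, :4] += 0.5                           # make the classes separable

# One-hidden-layer network: the smallest instance of the idea that a
# neural network can learn the pattern directly from the signals.
W1 = rng.normal(scale=0.1, size=(32, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.maximum(0, X @ W1 + b1)          # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, p.ravel()

lr = 0.1
for step in range(500):
    h, p = forward(X)
    grad_out = (p - y)[:, None] / len(y)    # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (h > 0)    # backpropagate before updating
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(axis=0)

_, p = forward(X)
print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")
```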

The second technical issue that stands as a formidable barrier to reading the brain's signals is having the capacity to capture neuron activity using electrodes. Currently, the capability is tied to a relationship of one electrode for every neuron. To scale to any level, state-of-the-art prosthetic technologies implant arrays of electrodes on the scale of one hundred individual probes. Nevertheless, that is nowhere near the required numbers, as for thought control the activity of at least 100,000 to 1,000,000 neurons will need to be detected. And that is a big problem, for progress in scaling the electrode-to-neuron dilemma has been painfully slow, with "Stevenson's law," which measures the rate of progress in this field, pointing to the trend that the number doubles only about every seven years. Indeed, DARPA recognizes the magnitude of this obstacle and has started a challenge to get scientists to work on the shared goal of recording a million neurons simultaneously.
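Taking the text's own numbers at face value, a back-of-the-envelope calculation shows why that rate is so sobering: going from today's roughly hundred-probe arrays to the million neurons quoted above takes about 13 doublings, or roughly 90 years at Stevenson's pace.

```python
import math

# Back-of-the-envelope projection of "Stevenson's law": the number of
# simultaneously recorded neurons doubles roughly every seven years.
current_neurons = 100          # today's ~100-probe arrays (one neuron per electrode)
target_neurons = 1_000_000     # upper end of the estimate quoted above
doubling_time = 7              # years per doubling

doublings = math.log2(target_neurons / current_neurons)
print(f"{doublings:.1f} doublings needed -> about "
      f"{doublings * doubling_time:.0f} years at the historical rate")
# ~13.3 doublings -> roughly 93 years
```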

Lastly, there is a further poignant obstacle in the way: surgical implantation. The only people likely to undergo such an invasive operation are those who desperately need to regain some token form of independence so they can once again browse the web of their choosing, watch the TV channels they want to watch, and even type private letters and emails on their own. For the vast majority, the rewards will pale in comparison to the horrendous risks and vast expense. And let us not forget that these procedures are being performed on an organ, the brain, that surgeons, neuroscientists, biologists, and psychologists will candidly tell you they know very little about.

Consequently, all the entrepreneurial researchers are convinced that the answer must eventually be a non-intrusive method for detecting brain signals, but that brings us back almost full circle, with no present way to do this in high definition or with any precision at all.

Sensory Substitution

Another interesting topic from Facebook's research department revealed to the world at the F8 conference in April 2017 is a technique to enable a person to hear acoustics through vibration on their skin. This is neither new nor, initially at least, nearly as ambitious as thought typing, as there are already live projects underway in what is termed sensory substitution. Indeed, back in 2015, David Eagleman and Scott Novich of Neosensory presented at the TED2015 conference a concept for exactly that: a sensory-substitution vest designed to allow the profoundly deaf to learn to hear through vibrations delivered by actuators in the vest to the skin on the torso.
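Neosensory's actual algorithm is not described here, but the basic principle, dividing the audio spectrum into one band per vibration motor and driving each motor with its band's energy, can be sketched in a few lines of Python. The motor count, sampling rate, and band-splitting scheme are all illustrative assumptions.

```python
import numpy as np

FS = 16_000        # audio sampling rate in Hz (an assumption)
N_MOTORS = 32      # number of actuators in the vest (an assumption)

def frame_to_motor_levels(frame: np.ndarray) -> np.ndarray:
    """Map one short audio frame to per-motor vibration intensities (0-1)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    bands = np.array_split(spectrum, N_MOTORS)          # one band per motor
    energy = np.array([band.mean() for band in bands])
    return energy / (energy.max() + 1e-12)              # normalize to drive range

# Demo: a 440 Hz tone should light up mostly the low-frequency motors,
# so the wearer "feels" where in the spectrum the sound sits.
t = np.arange(0, 0.05, 1 / FS)
tone = np.sin(2 * np.pi * 440 * t)
levels = frame_to_motor_levels(tone)
print("strongest motor:", int(np.argmax(levels)))
```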

In March 2017, Timandra Harkness of the UK's BBC Radio 4 documentary "SuperSense Me" wore the vest as a test for several weeks as she went about her normal business and explored the device's potential, with some quite encouraging results and observations. Even after just a few weeks, Timandra's brain seemed to start recognizing familiar patterns, particularly those that were fairly distinctive in themselves, such as railway station announcements. Some, however, question whether this is a new sense or simply the remapping of the sense of hearing. The relevance is that remapping is nothing outstanding or revolutionary: the brain does it naturally itself, and it has been replicated many times in experiments with both sound and vision. Indeed, the oldest sensory-substitution tool still around today is the blind person's white stick.

The astonishing thing about sensory substitution is that it works at all, as for many years the common perception was that the adult brain was fixed and no longer had the plasticity of childhood. However, research has determined that, contrary to this misconception, the brain retains its plasticity throughout its life span, continuously altering and making new neural connections and reorganizing neural networks. It is therefore unsurprising that it should perform a maintenance task of remapping sensory nerves from an area dedicated to a failed sense organ to the parts of the brain serving an active one. This phenomenon has been shown to happen in blind people, where areas of the visual cortex are instead used by senses such as hearing: the neural networks in the visual cortex are fed with information coming from the auditory senses, taking advantage of the fact that the primary locus of perception is the nervous system as a whole rather than one specific area or clump of neurons.

This is the domain of sensory substitution (SS), where touch or hearing, for example, can carry information that is otherwise unavailable, such as from a damaged sense like vision. Sensory-substitution devices (SSDs) have been available for a long time. The white cane for the blind translates environmental structure into haptic feedback, and sign language translates visual stimuli into a language. In Braille, verbal information is conveyed haptically through touch in the fingertips. Many more SSDs have been developed, and they have become increasingly sophisticated with advances in technology and in our understanding of neuroscience.

There are several technologies that aid the blind by translating optical signals into audio representations. One such device, vOICe, uses a camera attached to a pair of sunglasses to send pictures to a smartphone, which translates the visual signal into sounds sent to the wearer's earbuds. The principle works on sensory substitution, and the wearer of the vOICe device can learn over time to associate the patterns of sounds with an image in their mind. Images are converted into sound by scanning them from left to right while associating elevation with pitch and brightness with loudness. In theory, extensive training could possibly lead to a form of synthetic vision. What does happen is that initially the sounds are processed in the auditory cortex, but after about 15 hours of training the brain's visual cortex starts to light up with activity; around this time the users become more adept at understanding the soundscapes that vOICe is sending to their ears and start to recognize objects.
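That left-to-right, elevation-to-pitch, brightness-to-loudness mapping is simple enough to sketch directly. The following Python snippet is an illustrative approximation of the idea rather than vOICe's actual implementation; the pitch range, image size, and one-second scan duration are assumptions.

```python
import numpy as np

FS = 22_050                          # audio sampling rate
LOW, HIGH = 200.0, 4000.0            # pitch range in Hz (an assumption)

def image_to_soundscape(img: np.ndarray, duration: float = 1.0) -> np.ndarray:
    """Scan a grayscale image column by column, left to right: each row's
    elevation picks a pitch, each pixel's brightness sets that pitch's
    loudness, and the columns are concatenated into one soundscape."""
    rows, cols = img.shape
    pitches = np.geomspace(HIGH, LOW, rows)       # top rows -> high pitch
    col_len = int(FS * duration / cols)
    t = np.arange(col_len) / FS
    sound = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * pitches[:, None] * t)   # one tone per row
        column = (img[:, c, None] * tones).sum(axis=0)     # brightness = loudness
        sound.append(column)
    out = np.concatenate(sound)
    return out / (np.abs(out).max() + 1e-12)

# Demo: a bright diagonal line sweeps from high pitch to low as it is scanned.
img = np.eye(64)
waveform = image_to_soundscape(img)
print(waveform.shape)   # roughly one second of audio at 22,050 samples/s
```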

There are many other SSD technologies that help people who have lost a sense adapt by conveying the information through a working sense's mechanisms. Nonetheless, despite the major advances, say from the white stick to the vOICe, the whole concept of the SSD does not seem to have the uptake or demand you might expect among the 39 million blind people around the world. The reasons for this appear to be a mixture of quite basic issues. Firstly, training can be a big issue, as many claim it is not a 10–15-hour training course but more like having to become expert in a completely new language. Secondly, there is the issue that SSDs do not restore sight; they substitute sound or tactile feedback for vision. That may sound obvious, but consider it from the blind person's perspective: the SSD technology does not tell them what the item before them is, whether it is a fork or a spoon. Eventually they might be able to work it out by listening intently to the soundscapes, but practically they could determine what it was by simply touching the object. Consequently, instead of learning to use and rely on SSDs, the blind person returns to well-established coping mechanisms and usual routines, which may be far more effective in everyday life. Additionally, the restricted resolution of state-of-the-art SSDs helps little with orientation and mobility (O&M); in other words, it does not help blind people get about. Established methods such as the white stick and the guide dog are far more effective for O&M.

One highly important task consistently reported to be impaired following sight restoration in adulthood is visual parsing; i.e., the ability to segregate a visual scene into distinct, complete objects. Consider for instance a typical office desk, with a computer screen, a keyboard, and some stationery on it. When looking at the scene you do not perceive a messy collection of areas of different hues, luminance levels, textures, and contours, but rather see separate meaningful objects. While this parsing task seems trivial to the normally developed sighted, it is very complex and demanding, sometimes almost impossible, for a person with limited visual experience, as it requires interpreting the visual input based on previous knowledge and visual concepts which have no intuitive parallel in other sensory modalities (e.g. shadow, transparency). It is worth noting that visual parsing is extremely difficult even for most computer-vision algorithms, as they are based on basic image-driven features, such as continuity of grey level and bounding contours, and lack the higher-order feedback input that plays an important role in object perception.
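A minimal sketch shows both the power and the limits of such image-driven parsing: thresholding a synthetic "desk scene" on grey level and grouping connected pixels finds isolated objects easily, but fuses any objects that touch or overlap, exactly the situation where human vision relies on higher-order knowledge. The scene, threshold, and blob labels below are all fabricated for illustration.

```python
import numpy as np
from scipy import ndimage

# A toy version of bottom-up parsing: threshold on grey level, then group
# connected pixels into labeled blobs (continuity + bounding contours).
rng = np.random.default_rng(2)
scene = rng.normal(0.1, 0.05, size=(64, 64))   # dark, noisy background

scene[10:25, 10:25] += 0.8    # "monitor": an isolated bright square
scene[30:45, 20:40] += 0.8    # "keyboard"...
scene[40:55, 35:50] += 0.8    # ...and overlapping "stationery": these two
                              # touch, so bottom-up parsing fuses them

mask = scene > 0.5                       # continuity of grey level
labels, n_objects = ndimage.label(mask)  # connected-component grouping
print(f"objects found: {n_objects}")     # 2, not the 3 a human would report
```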