AI interprets the mind’s eye. Image: Nightcafe Creator Gallery

Artificial Vision: Curing Blindness with Neural Implants

A new era of the mind’s eye is upon us.

Tanvi Reddy

--

Lt. Commander Geordi La Forge in Star Trek: The Next Generation. Image: Star Trek

36 years ago, Star Trek made a prediction. The creators crafted a universe filled with inventions nothing short of mystical — teleportation, holograms, warp drive, antimatter propulsion — which streamlined life in the 24th century. Perhaps the most memorable innovation, however, was that worn by Lt. Commander Geordi La Forge.

La Forge’s vision was stolen by an unknown congenital disorder, rendering him blind from birth. At age 5, his days in darkness were over the first time he wore the VISOR (Visual Instrument and Sensory Organ Replacement). This device decoded electromagnetic signals from the environment and transmitted them directly to Geordi’s brain via neural implants in his temples — letting him see the world with artificial super-vision.

Sounds crazy, right? Think again.

What if I told you we don’t have to wait until 2335 to achieve this — that the idea of bypassing the eye to directly stimulate visual perception in the brain is an approaching reality? That we could potentially restore vision to the blind, and give superhuman sense to the sighted? That we could cure the incurable, by interfacing artificial intelligence with the human brain?

By bypassing the problem areas entirely and creating the visual experience directly in the brain’s occipital lobe, modern healthcare could realize the possibility of restoring sight to those from whom it’s been stolen, whether at birth or later in life. The key to unlocking this potential? An emerging technology born from humanity’s desire to understand and augment our source of intelligence: brain-computer interfaces.

We’ll first take a deep dive into blindness and our natural visual process to understand why this solution matters. Then we’ll make sense of brain-computer interface technology as a whole and explore its potential for restoring sight (along with its limitations and future directions). Use the links to navigate as you please.

Understanding Blindness
What does it mean to be blind? What preventative measures are in place — and why are they not always enough?

A Tale of Two Intelligences
What happens when you directly interface artificial intelligence with the human brain?

Synergy
How can BCI and neural implant technology create artificial vision?
| Spotlight: Neuralink

Looking to the Future…
Super-vision for the sighted.

A Cautionary Tale
Limitations of invasive neurotechnology.

Sense and Sensibility
Concluding thoughts.

What does it mean to be blind?

Could any sighted person ever truly answer this question? No. But most, when asked, will respond with something along the lines of a loss or absence of vision. This is true — but what underlies a loss of vision? And what does it mean for those affected?
To understand the absence of vision, and even how to bring it back, we must first understand its presence as the result of collaboration between the three actors of the visual pathway: the eye, the optic nerve, and the brain.

Let’s talk about how you’re able to perceive my words right now.

The path of light as it enters and leaves the eye (left to right). Image: UCSD

It all starts in the eye.

Visible light waves from your computer screen first hit the cornea, the clear, dome-shaped, frontmost layer of the eye that bends and focuses the light. From there, the iris (the colored part of your eye) expands or contracts to control the pupil, the opening that determines how much light can enter your eye. Once granted entry, these light waves are put into focus by the lens, which adjusts its thickness (a process called accommodation) to focus near or far images onto the retina — basically adjusting for size and distance to help you read ˢᵒᵐᵉᵗʰⁱⁿᵍ ᵃˢ ˢᵐᵃˡˡ ᵃˢ ᵗʰⁱˢ·

The retina is where the visual process really begins. At the very back of the eye, it houses clustered neurons called photoreceptors, which absorb light to sense brightness and color. Here, the light interacts with molecules called photopigments, kicking off the process of phototransduction — the transformation of visual stimuli (like light rays from your computer screen) into electrical signals that speak the language of your brain.

The journey to the brain begins…

We’re not done with the retina yet. These visual signals are transmitted to retinal bipolar cells and then to retinal ganglion cells, which send action potentials down their axons; those axons converge to form the all-important optic nerve. At last, we leave the eye, and the propagation of visual signals to the brain is set in motion.

The pathway electrical signals take from the eyes to the visual cortex. Image: Perkins School for the Blind

The optic nerve: a cranial highway of sorts, made up of more than 1 million nerve fibers, carrying those electrical visual signals from your eye to the brain. Just as you have left and right eyes, visual fields, and sides of the visual cortex, you also have two optic nerves — one on the left, and one on the right — that must encounter each other before the process can go on. At the optic chiasm, the intersection point of the two nerves, visual information is split and re-sorted so that each half of your visual field ends up routed to the opposite side of the brain, then organized and dispatched through your brain’s personal switchboard operator, the thalamus. Finally, your visual information is on its way to the occipital lobe.

The (almost) final destination

After a long journey on the Yellow Brick Road, the images from your retina have at last made it to the Emerald City at the very back of the brain: the visual cortex.

The visual cortex, at the back of the brain, leads the perception of visual sensory stimuli. Image: Dartmouth

The visual cortex’s six layers begin your brain’s process of interpreting and finding meaning in the images transmitted to it as neural impulses. Neurons in the cortex called feature detectors respond to elementary features of images, like bars, edges, and gradients of light, while parallel processing aids in interpreting color, depth, form, and movement. Think of it as your brain’s way of helping you perceive the words and images of this article. Information then travels to supercell clusters in other regions of the cerebral cortex, where instant analysis finally helps you form a reaction.

Let’s recap our basic & natural visual pathway from stimulus to reaction:
Light → Retina → Optic nerve → Thalamus → Visual cortex → Brain regions

Although that process is a long one, for those with all 3 key actors in perfect condition, the entire pathway takes ~150–200 milliseconds to traverse and almost never fails.
However, the same can’t be said for those with damage or deficiencies along the way. So, now that we understand how vision works, let’s talk about what happens when it doesn’t.

How do we define being blind? Why does this matter?

The loss or absence of vision is incapacitating. Many would define the condition as something to be scared of — an endless night, with no sense of direction, constant fear of the unknown, and oftentimes no hope for a cure.

The scientific world defines blindness as a partial or complete loss of vision resulting from damage or deficiencies anywhere within the above-described pathway. Most often, it’s due to incorrect development of, or damage to, either the eye or the optic nerve. In fact, the leading causes of blindness around the world are all results of defects in these two parts of the visual pathway. To see why, take a look at the most prevalent issues that develop into blindness, and how those affected are forced to see the world.

1] Refractive errors: an inability to correctly focus light onto the retina

Image: FOCUS Eye Centre

2] Cataract: clouding of the lens of the eye, which is normally clear

Image: Northwestern Medicine

3] Diabetic retinopathy: damage to the retina due to high blood sugar

Image: National Eye Institute, NIH

4] Age-related macular degeneration: damage to the macula, in the retina

Image: Northwestern Medicine

5] Glaucoma: damage to the optic nerve due to pressure on the eye

Image: Northwestern Medicine

It’s pretty clear now where the problems usually lie. Those conditions of the eye and optic nerve are what cause 43 million people worldwide to be blind, and blindness costs those affected far more than just their vision. For children, delays in motor, language, cognitive, social, and emotional development create detrimental lifelong consequences. For adults, blindness means depression, anxiety, risk of injury, and early entry into nursing homes. It takes an incredibly strong person to deal with those consequences in the short and long term; a lot of us wouldn’t be able to, and it isn’t hard to see why. Just imagine not being able to recognize and remember the faces of loved ones, your favorite paintings, or a beautiful sunrise.

Blindness is debilitating in many more ways than one.

The good news: there are treatments able to target the damaged areas to prevent and restore impaired vision.

The most common ones:

1] LASIK surgery
Reshaping a part of your cornea with a laser to improve how light hits your retina.

2] Cataract surgery
Removing your eye’s cloudy lens and replacing it with an artificial one.

3] Laser treatment
Using a laser to drain fluid from your eye and reduce the pressure on it.

The bad news: vision loss progresses quickly, and these treatments are neither accessible to everyone in need nor successful at curing all conditions.

Conditions like macular degeneration, diabetic retinopathy, and glaucoma are incurable, and treatments to reduce their effects only have a chance of working when the conditions are diagnosed and cared for early.
However, timely detection and treatment are rarely the reality. Despite being curable, refractive errors and cataracts are widely undertreated: only 36% of people with refractive errors and 17% of those with cataracts actually receive quality treatment or surgery (World Health Organization). As for the incurable types of vision loss, avoiding the point of no return proves difficult due to a lack of early detection coupled with the limited efficacy of laser treatment.

All of the above treatments focus on fixing the problem areas — the eye and optic nerve — and so are rendered useless if those areas are already irreversibly damaged due to late detection or ineffective care.

The bottom line:
When essential action isn’t taken, any of the conditions above can and will progress into a complete lack of vision — irreversible total blindness.

Where could we possibly go from there?

Enter: the brain-computer interface.

It’s exactly what the name makes it sound like: directly interfacing technology with the human brain. But it’s also much, much more than that; for people who’ve lost the ability to sense the simplest yet most beautiful things in life due to a disorder of the central nervous system, this technology is a source of hope.

So what are BCIs, really, and how do they work?

Brain-computer interfaces (BCIs) connect the activity and signals in your central nervous system (made up of the brain and spinal cord) to computing devices able to interpret and transform them into artificial output, which can then be used to restore, replace, or even enhance your natural neurological function.

Feeling surreal? Me too! Let’s break it down and discover what underlies the sci-fi abilities of a BCI. ↓

Artificially intelligent BCI technology utilizes this functional pathway:
1] physical hardware that receives signals from the brain → 2] software that processes and interprets the data into what the computer can understand → 3] generation of action in a device or machine.
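To make that pathway a little more tangible, here is a minimal Python sketch of the three-step loop. Every function name below is an illustrative placeholder rather than any real device’s API, and the “brain signals” are just random numbers.

```python
import numpy as np

def acquire_signal(n_channels: int = 8, n_samples: int = 250) -> np.ndarray:
    """1] Hardware: read a window of raw voltages from the electrodes.
    Here we simulate noisy data, one row per electrode channel."""
    return np.random.randn(n_channels, n_samples)

def decode(window: np.ndarray) -> str:
    """2] Software: filter the window and map it to an intent.
    A real decoder would use trained filters and a classifier."""
    power = np.mean(window ** 2, axis=1)      # crude per-channel signal power
    return "move_cursor_left" if power[0] > power[1] else "move_cursor_right"

def act(command: str) -> None:
    """3] Output: drive the connected device (cursor, prosthetic, stimulator)."""
    print(f"Device received command: {command}")

# One pass through the loop: brain -> interpretation -> action.
act(decode(acquire_signal()))
```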

Understanding the physical hardware

The invasiveness of neurotechnology is a spectrum. While some devices don’t even need to touch the skin, others are implanted or have electrodes that reach farther and deeper than the brain’s outer layers. Broadly, BCIs fall into two main categories: non-invasive or invasive.

Non-invasive BCI like the Neurosity Crown (pictured right). Image: Neurosity

Non-invasive BCIs are the type you’re probably picturing as you read this article: a bunch of electrodes and wires hooked up to a person’s scalp, or headgear that lets the wearer control things with their mind. Because of their surface-level nature and the medical safety that comes with skipping surgery, non-invasive BCIs can be deployed to thousands more people outside the lab than their invasive counterparts can — including you. That’s phenomenal, but what’s the tradeoff?
This external approach to brainwave data collection suffers from limited signal capture due to distance from neurons, the inability to precisely target a specific brain region, and a lack of pure and interpretable data due to environmental noise. Safety is prioritized, but at the cost of precision.

For our cause, it’s time to introduce the invasive BCI.

Invasive direct data interface (DDI) technology developed by Paradromics. Image: Paradromics

Invasive BCIs are characterized by the implantation of electrodes on the surface of, or directly into, the tissue of the brain, designed to capture the purest, most interpretable, and most precise signal activity by pushing the limits of what we can currently observe. Getting this close to the innermost workings of our mind lets us target specific regions of activity and collect data with a vastly improved signal-to-noise ratio. For these reasons, it’s no surprise that invasive BCIs are at the forefront of efforts targeting the deep brain to restore critical function to patients who’ve lost motor capacity, audition, vision, or another fundamental ability as a result of a disease or birth defect.

Invasive BCI tech surpasses the outer layers of the brain, going directly to the tissue. Image: NeurotechEDU

Millions of possibilities — curing the incurable, transcending to superhuman sense — are opened up with this technology, but at the cost of safety and scalability.

More on the limitations later. For now, let’s discuss how the data collected by these cutting-edge devices is interpreted and deployed.

The language of the human brain…as translated by a computer.

Nobody likes noise. It’s even worse when you’re actively trying to listen to something, and unwanted warbles hinder your understanding. Now, you’re forced to sift through and tune out those disruptions to pick up on what you’re searching to hear.

This applies to the interpretation of brain signals, too. No matter the type or sensitivity of the neurotechnology you employ, you’re bound to have to deal with noise from any part of the monitoring environment: arbitrary, unwanted electrical signals that cloud the capture of what’s really being signaled.

Think about just how easy it is for this noise to accumulate, too. The electrical signals captured by BCIs, born of the communication between neurons in the central nervous system, are so minute that any other source of electrical activity — like your pulse or a nearby light bulb — can muddle your data collection with non-neurological signals. For this reason, at either the hardware or software level, BCIs utilize a series of filters to clean up brain activity data and isolate what really matters.

Raw brainwave data (top) compared to notch-filtered data (middle and bottom). Image: ResearchGate

The most commonly used is the band-stop filter, which is exactly what it sounds like: a filter that stops, or removes, a specific frequency band from showing up in your data. To get even more granular, BCI tech will use the notch filter, a band-stop filter with a very narrow band, to remove frequencies right around 60 Hz (which, unsurprisingly, is also the frequency at which the U.S. power grid operates)!
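As a concrete illustration, here is a small Python sketch (using SciPy on entirely synthetic data) of a 60 Hz notch filter cleaning up a simulated recording. It isn’t any particular BCI’s firmware, just the idea of the filter expressed in code.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0                          # sampling rate in Hz (typical for EEG boards)
t = np.arange(0, 2.0, 1 / fs)

# Simulated recording: a 10 Hz "brain" rhythm buried under 60 Hz mains hum.
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 1.5 * np.sin(2 * np.pi * 60 * t)

# Design a notch (narrow band-stop) filter centered at 60 Hz, then apply it
# forwards and backwards so the signal isn't shifted in time.
b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)
filtered = filtfilt(b, a, noisy)

print(round(float(np.corrcoef(clean, filtered)[0, 1]), 3))   # close to 1.0
```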

Okay, so now that we know about filtering data, how exactly are we able to extract and make sense of the meaningful information? As with countless other problems being solved today, the answer is machine learning.

A high-level overview of how an ML model would turn brain signals into computer commands. Image: IntechOpen

There are two things a BCI’s machine learning (ML) model cares about: when brainwaves fire, and at what frequency they fire. A time-domain representation graphs successive brainwave data points over a period of time, letting the algorithm figure out what was going on in the brain leading up to or during an event (like a seizure, an imagined motor task, or a visual sensation). A frequency-domain representation lets the algorithm determine the rates at which certain groups of neurons fire, as well as when they fire together or desynchronize, leading to discoveries of different brainwave patterns that signal different things (like sleep stages or motor movement).

Image: ResearchGate
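To see the two views side by side, here is a quick sketch with NumPy and a synthetic signal: the raw trace is the time domain, and a Fourier transform exposes the frequency domain.

```python
import numpy as np

fs = 250.0                              # samples per second
t = np.arange(0, 2.0, 1 / fs)           # two seconds of "recording"

# Time domain: the raw trace itself, here a synthetic 10 Hz rhythm plus noise.
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Frequency domain: how strongly each firing rate is represented in the trace.
spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)

print(f"Strongest rhythm = {freqs[np.argmax(spectrum)]:.1f} Hz")   # about 10 Hz
```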

Why does this matter? By extracting and classifying the features of this data, an intelligent machine learning model is able to make associations between certain brain activity and things like imagined or voluntary movement, or external stimuli that evoke a reaction. The algorithm can then either 1] generate a mind-controlled action in the connected device, or 2] play an Uno reverse, and directly stimulate a part of the brain to generate a response much like the naturally observed one!
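Here is a hedged sketch of that idea, assuming scikit-learn and purely synthetic trials: extract band-power features from each trial, then train a simple classifier to tell two “brain states” apart. A real decoder would be trained on actual recordings, but the shape of the pipeline is the same.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250.0
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / fs)

def fake_trial(freq):
    """A two-second synthetic trial dominated by an oscillation at `freq` Hz."""
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.8, t.size)

def band_power(sig, low, high):
    """Average spectral power of `sig` between `low` and `high` Hz."""
    freqs, psd = welch(sig, fs=fs, nperseg=256)
    return psd[(freqs >= low) & (freqs <= high)].mean()

# Build 80 labeled trials: class 0 carries a 10 Hz rhythm, class 1 a 22 Hz rhythm.
X, y = [], []
for label, freq in [(0, 10), (1, 22)] * 40:
    sig = fake_trial(freq)
    X.append([band_power(sig, 8, 13), band_power(sig, 18, 26)])   # two features
    y.append(label)
X, y = np.array(X), np.array(y)

# Train on the first 60 trials, test on the held-out 20.
clf = LinearDiscriminantAnalysis().fit(X[:60], y[:60])
print("Held-out accuracy:", clf.score(X[60:], y[60:]))
```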

Let’s take a deeper dive into how a model is able to understand the process we’re discussing today: vision.

Steady-state visually evoked potentials (SSVEPs) are brain signals generated naturally in response to visual stimuli flickering at specific frequencies. When a subject looks at light flickering at a given rate, activity in the visual cortex peaks at that same rate. To understand our brain’s response, let’s walk through a brief example.

Imagine you’re looking at a machine with two lights flashing at two different frequencies: 6 Hz and 9 Hz.

If you look at the left light flashing at 6 Hz, your visual cortex will emit brainwaves at that same frequency, 6 Hz. If you look at the right light flashing at 9 Hz, your visual cortex will emit brainwaves at 9 Hz. The broader concept: by recording these signals, we can map which brain activity corresponds to which visual stimulus, whether it’s a flashing light or a picture of a dog. Once we know that, we can figure out how to generate those responses through an entirely different, artificial approach.
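In code, decoding which light the subject is attending to comes down to comparing spectral power at the two flicker frequencies. The sketch below fakes the visual-cortex trace rather than using real data, but the decision rule is the interesting part.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(1)

def cortex_signal(attended_hz):
    """Simulated visual-cortex trace while the subject watches a light
    flickering at `attended_hz` (a sketch, not recorded data)."""
    return np.sin(2 * np.pi * attended_hz * t) + 0.8 * rng.standard_normal(t.size)

def power_at(trace, target_hz):
    """Spectral power at the frequency bin closest to `target_hz`."""
    spectrum = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

trace = cortex_signal(9)                                  # looking at the 9 Hz light
attended = max((6, 9), key=lambda hz: power_at(trace, hz))
print(f"Decoded attended flicker: {attended} Hz")         # expect 9
```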

The 🔑 takeaway here: Ultimately, we’re able to determine how signals in the brain achieve critical neurological function like movement or vision, and can then use that information to reconstruct the same function. Only this time, it’s as a result of meaningful stimulation from the machine, rather than natural sensory neurons (which, in the case of blindness, are where the problem lies).

The moment of truth: using all of this technology to stimulate artificial vision.

Now that we know how close a BCI can get to the underlying functions of our brain, the possibility of curing blindness doesn’t seem so far-fetched. Let’s talk about what reaching this goal entails, and why those at the intersection of artificial and human intelligence are starting to pursue it with promise.

Seeing light…without light

At the beginning of this article, we explored the pathway of transforming a mere stimulus into visual perception and thereby eliciting a response.

But what if I told you we’ve long since figured out how to perceive light, with no visual stimulus at all? Introducing: the phosphene.

Phosphene-based representation of an image by a visual prosthesis

By definition, a phosphene is a flash of light that’s technically classified as a hallucination, because it’s perceived without any light actually entering the eye. You can create one in your visual field, right now, by putting gentle pressure on the side of your eyeball (not recommended for long, though).

How else could a human create and perceive them naturally?

1] Retinal disease/traction
Unhealthy traction (or pressure) on the retina that’s caused visible damage to the papilla or macula can stimulate the photoreceptors into producing phosphenes.

2] Optic neuritis
Inflammation of the optic nerve (or even the retina) can cause phosphenes or flashing lights to be seen, without any visible mechanical irritation.

3] Cortical-based causes or sound-induced retinal traction
Issues stemming from the brain, like migraines and occipital seizures, or overly quick reflexive reactions to loud noises, could also cook up a phosphene or two.

In the grand scheme of these conditions, naturally created phosphenes are pretty inconsequential. So, let’s talk about creating phosphenes with a purpose…unnaturally.

Baylor College of Medicine: generating the alphabet

Conventionally, electrodes in neural implants would rely on static currents to stimulate perception in the visual cortex, creating images with phosphenes that look like this:

Static-current-induced phosphene-based representation of man waving. Image: The Explanatory

The neural implants at Baylor, though, utilized dynamic currents to not only generate flashes of light but also synthesize them into coherent letters for the subject to perceive. This allowed both blind and sighted subjects to see the shapes of letters purely from the BCI technology employed…

Static vs. dynamic electrode currents. Image: The Explanatory

…and demonstrate phenomenal results like this.

Source: The Explanatory

Imagine being able to read this article with your eyes closed. Revolutionary, right?
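As a toy illustration of the difference (and decidedly not the Baylor team’s actual protocol), imagine each phosphene as a point we can flash: static presentation fires them all at once, while dynamic presentation sweeps through them in order, tracing the letter’s stroke.

```python
import time

# A letter "L" described as receptive-field points to flash, in stroke order.
LETTER_L = [(0, 3), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]

def stimulate(point):
    # Stand-in for pulsing the electrode mapped to this visual-field spot.
    print(f"pulse electrode mapped to {point}")

def static_presentation(points):
    """Static currents: every phosphene at once; subjects tend to perceive
    a blob of light rather than a recognizable shape."""
    for p in points:
        stimulate(p)

def dynamic_presentation(points, dwell=0.05):
    """Dynamic currents: sweep stimulation along the stroke, as if tracing
    the letter onto the visual cortex."""
    for p in points:
        stimulate(p)
        time.sleep(dwell)

dynamic_presentation(LETTER_L)
```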

Despite its promise, the study lacked commercial funding and has yet to be continued. But with that, we can transition into the BCI company of the moment — the manifestation of Elon Musk’s goal to merge artificial intelligence with the human body and give critical function back to those living without it: Neuralink.

Seamlessly (and wirelessly) linking mind and machine

Neuralink’s mission to “create a generalized brain interface to restore autonomy to those with unmet medical needs today, and unlock human potential tomorrow” is a visionary one. They plan to do this with an ultramodern device no bigger than a coin: the N1 neural implant.

Image: Neuralink

To be clear, this BCI is invasive. So, what exactly makes their implant so revolutionary that the potential benefits outweigh the costs?

1] It’s got electrodes. Lots of them.
The historic advancements in the field of BCI — real-time cursor control, operation of robotic prosthetics — have all been achieved with a couple hundred electrodes. How many are in the N1? 1,024, spread across 64 highly flexible, ultra-fine threads that listen in on the “conversations” of about 1,000 individual neurons.
For context, the conventionally used Utah microelectrode array of around the same size employs just one-tenth the number of electrodes (100), and most clinical studies have used at most 200–300. The N1’s sheer capacity for data collection would completely eclipse what we’ve accomplished so far.

2] It’s wireless.
Erase the image you’re picturing of a bunch of wires hooked up to points on a person’s head. Instead, visualize a fully implantable, battery-powered, standalone chip in your brain that operates entirely over Bluetooth. Imagine the technological craftsmanship needed to transmit spiking information from over 1,000 neurons wirelessly over a radio link.

3] It’s implantable by Neuralink’s very own surgical robot.
The electrode threads are so thin that it’d be impossible for a human hand to insert them correctly, efficiently, and in the perfect location to avoid any vital tissue or blood vessels. Neuralink’s chip is implanted using the autonomous R1 surgical robot, which employs a needle thinner than a human hair to insert and release threads into the brain with precision and reliability.

Their implants have already made headway on one of the company’s main goals: allowing paralyzed patients with quadriplegia (due to a spinal cord injury or ALS) to control computers and mobile devices with their thoughts. You might be familiar with the promising results of their research process — allowing Pager the monkey to play mind-pong, recently getting FDA approval for human trials — which have made monumental waves in the field. However, their progress on another visionary goal deserves a spotlight in this article: the development of neurotechnology that will cure the blind.

How do they plan to accomplish this? Let’s unpack their train of thought, starting all the way back at our Emerald City — the visual cortex.

In the primary visual cortex, V1, every neuron plays a role in the perception process. Each individual neuron fires in correspondence with a specific and small part of your visual field, called that neuron’s visual receptive field. Picture this: if you have that neuron marked in green on the left…

Image: Neuralink’s 2022 Show & Tell

…it might fire when you see something at the spot marked in yellow on the right. The same holds true for every single other V1 neuron, all of which may correspond to different dots on the screen (meaning different individual receptive fields).

The visual up there depicts a monkey’s brain, but the same connection can be observed in humans. Our V1 lies sheltered inside a fold called the calcarine sulcus, highlighted in red below.

Image: Ihm Curious

And if you’re anything like me or Neuralink, your intrusive thoughts will tell you to unfold that wrinkle and see what’s inside. Take a look.

The unfolded left V1 shows activity in a location corresponding to that of the stimulus in the right visual field

Once unfolded and “laid out,” the distribution of brainwave activity on V1 is seen to display a map of whatever you’re currently looking at. Check out how closely it tracks!

A monkey’s visual cortex matching up with the moving line on the screen. Image: Ihm Curious
A mouse’s visual cortex matching up with the moving dot. Image: Ihm Curious

That sunset-looking visual on the left is called a retinotopic map — but there’s no rule that says images have to be generated with input from the retina and nowhere else. Our brain’s visual cortex is fully capable of accepting input from sources other than the retina, even tactile or auditory ones, and turning it into visual perception. So, why not do it with a machine?

Let’s explore how Neuralink plans to accept that challenge.

We’ve already illustrated that each neuron maps to its own special visual receptive field, and that a given visual stimulus will therefore cause a specific corresponding neuron to fire. Now reverse-engineer that process, and you’ve got the power to stimulate a certain neuron and cause the brain to see a flash in the spot corresponding to it — a flash that only the subject can see. How cool is that, creating visual stimuli out of thin air?
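A hypothetical calibration table makes the reverse-engineering concrete: if we have measured where each electrode’s neurons “look” in the visual field, evoking a flash at a chosen spot is just a matter of picking the electrode whose receptive field sits closest. The electrode IDs and coordinates below are invented purely for illustration.

```python
import numpy as np

# Hypothetical calibration table: each electrode's measured receptive-field
# center, in degrees of visual angle (values invented for illustration).
receptive_fields = {
    "e01": (-2.0,  1.5),
    "e02": ( 0.5,  0.0),
    "e03": ( 3.0, -1.0),
    "e04": (-1.0, -2.5),
}

def electrode_for(target_xy):
    """Pick the electrode whose receptive field sits closest to the target
    point, i.e. where stimulation should evoke a phosphene."""
    return min(receptive_fields,
               key=lambda e: np.hypot(receptive_fields[e][0] - target_xy[0],
                                      receptive_fields[e][1] - target_xy[1]))

print(electrode_for((2.5, -0.5)))   # -> "e03": stimulate here to flash at that spot
```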

Neuralink’s goal is to make enough of those flashes to cover an entire visual field. Theoretically, a visual prosthesis using their implant would work a little something like this:

Image: Neuralink’s 2022 Show & Tell

A camera would capture a visual scene, outputting data to a smart device (like an iPhone) that would then process and transmit it to the neural implant. There, the image would be converted into a pattern of stimulation for the implant’s electrodes to apply directly to the neurons of the visual cortex, inducing a phosphene-based representation of the scene. A representation like the image on the left may be possible with a 1,000-electrode implant (like the current-generation N1). But Neuralink’s next-generation implant is projected to have and use 16,000 electrodes — meaning that if a blind subject had one on each side of their visual cortex, we would have 32,000 points of light at our disposal to generate an image for them.
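Here is a rough sketch of that image-to-stimulation step: downsample a camera frame onto a 32x32 grid (1,024 points of light, roughly the current N1’s electrode count) and decide which electrodes to pulse. Neuralink’s actual encoding is surely more sophisticated; this only shows the shape of the problem.

```python
import numpy as np

def to_stimulation_pattern(image, grid=(32, 32), threshold=0.5):
    """Downsample a grayscale image (values in 0..1) onto an electrode grid
    and choose which electrodes to pulse (True = stimulate)."""
    h, w = image.shape
    gh, gw = grid
    # Average the pixels that fall into each electrode's patch of the scene.
    patches = image[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    brightness = patches.mean(axis=(1, 3))
    return brightness > threshold

# A fake camera frame: a bright square on a dark background.
frame = np.zeros((128, 128))
frame[32:96, 32:96] = 1.0
pattern = to_stimulation_pattern(frame)
print(pattern.sum(), "of", pattern.size, "electrodes would fire")   # 256 of 1024
```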

So, where does this technology go from here? First, there’s a barrier to cross. Generating individual flashes of light is clearly possible, but generating a whole, accurate, cohesive image is still very difficult. The biggest reason is the sheer nuance and complexity of neurons: they don’t behave like organized pixels, but rather like enigmas that can react in unexpected ways to different stimuli, thanks to subtler features we don’t yet fully understand. To get closer to the goal of using neurons to generate images, the BCI field will need to develop devices with 1] more electrodes and 2] deeper, more capable AI algorithms. It’s safe to say we’re getting there.

However, there’s no shortage of future ventures to set their sights on in the process: Neuralink and other BCI frontrunners envision not only restoring vision to the blind, but enhancing it to a superhuman level for the sighted. Sounds incredible, right? Let’s discuss how this could go.

Transcending to superhuman sensibility

This is one of the more nebulous goals of neural prostheses. Really, the route to achieving it is an amalgamation of exponential improvements and creative approaches to our current hardware, software, and implementation techniques.

1] Enhanced perception of visual stimuli
As previously mentioned, it’s common for neural implants in the medical field to employ the Utah microelectrode array, made up of 100 electrodes in a 10x10 array with the potential to enable rudimentary pixel vision. But, as we already know, Neuralink’s N1 implant utilizes more than 1,000 electrodes and aims to scale up to 16,000 in the future. Imagine the visual capacity of neural implants with exponentially higher numbers of electrode channels — 102,400, maybe even 256,000? That kind of density could begin to match the complexity of our visual field and take us to uncharted sensory heights.

2] 360° visual scope
Humans without sight (the blind) exist. So do those who see with only one eye (monocular vision), and, obviously, those who see with two (binocular vision). Theoretically, couldn’t we achieve trinocular vision? Imagine a camera with a 360° view of the world connected to our visual cortex through a neural implant — we could potentially perceive visual images of anything in the camera’s scope (including things behind us) by using it as our “third eye.”

3] A “smart” visual field
Imagine those virtual reality or bionic superhero movies in which you’ve seen people arbitrarily put two fingers on their temple and generate a database of information in front of their eyes, only visible to them. This is quite far-fetched, but it’s interesting to think about a time far in the future when the intersections of fields like AR/VR and neurotechnology could let us all have our superhero moment.

Of course, these ideas are currently limited to speculation, and that’s largely because there are still significant barriers to cross before creating a future where AI and BCI are fully integrated with humanity. Let’s dive into what those limitations are, and how we can work to eclipse them.

How far is too far?

Any technology has its downsides, especially the newest ones out there. The teams behind Neuralink’s implant and other invasive prostheses are currently exercising extreme caution, because the boundary they’re working along is, ultimately, one between life and death. Here’s why.

1] An implantation method that’s pretty hard to imagine:
The first steps require cutting through the skin of the scalp, reaching the skull, and drilling a hole in the skull to expose the dura (the brain’s outer layer of protective tissue).

Neurosurgery to expose the brain’s inner layers

The dura must then be cut and folded back to expose the surface of the brain, where the electrodes can be implanted.

Neuralink’s R1 surgical robot simulating the insertion of electrode threads directly into the brain

Neuralink’s surgical robot employs an intelligent targeting view of the brain that determines exactly where to insert the electrodes to avoid sensitive tissue and vessels — something human surgeons can’t yet do. Although risky as of now, further development of both the device and its implantation could allow for safer future integration with the brain.

2] Risks of damage to the brain and beyond
Okay, so implantation’s scary-looking but can be improved through technological development. How about once the device is in?
The main associated risks are, simply put, infection, bleeding, and tissue damage. Due to the brain’s reaction to foreign bodies, or even basic wear and tear, inflammation and the formation of scar tissue pose serious risks to the subject’s overall health, especially with extended use.

In addition to the potential medical dangers, invasive BCIs, compared to their non-invasive counterparts, will require far more mature design and construction to become reliable, user-friendly, and accessible to the public. These advancements will come in the years ahead, during which the technology can iterate, adapt to the human body, and earn our trust.

The current limitations for this almost embryonic area of innovation are 1] longevity and 2] scalability. Because the hardware sits submerged in layers of the brain, it can wear down over a relatively short period of time due to mechanical stress, lack of biocompatibility, or the brain’s reaction to foreign bodies. It’s predictable, therefore, that this technology will call the lab its only home for a while, delaying mass production and remaining inaccessible to the general public until a 100% safe, portable, and usable product has been developed.

Here’s the bottom line, something already understood by our spotlighted neurotech pioneers:
If we want to make progress, prioritizing a human-centric and biologically sound approach will be absolutely essential in the coming years of BCI innovation.

Sense and Sensibility

Restoring one, and enhancing the other. These possibilities will be made realities at the intersection of artificial intelligence, brain-machine interfaces, and of course, our central nervous system. We still have a long way to go not only in the power of our hardware and software, but also in our understanding of the human body and what it truly means to have or lose autonomy.
Whether it’s more electrodes, enhanced imaging techniques, safer implantation methods, or striking the balance between distance and intimacy with the brain, neurotechnology can and will evolve into something that heals and elevates humanity every single day. Pay attention to it, how it progresses, fails, succeeds, and adapts — this will be the closest interconnection achieved between mind and machine in human history. There are causes for concern, but by exercising caution and putting the lives of subjects first, this innovation could one day be the light switch for those living in the dark. Or, you know, make the light a little brighter for all of us.
