A Brighter Tomorrow
The Case for Preemptive Neural Implant Regulation
February 24, 2051.
You open your eyes to a world of color and light.
You know that what you’re experiencing isn’t real. But it might as well be.
Cliffsides and sheer rocks rise from thick ocean mists. A radiant sunrise gleams through the bluff faces. You can feel the seawater on your skin, taste the lingering salt in the air. You’re conscious inside of this controlled lucid dream. You can meet up with online friends here. You can live, work, explore in peace… anything you want.
A year ago, Big Tech released its first cloud-linked temporal implant. The world has never been the same.
Implant technology has been compared to clean energy storage, the internet, the car, the plane, and the telephone. This is the technological revolution of the 21st century. There’s no longer any doubt about its transformative power. In just a few short months, we’ve changed what it means to be human.
The cliffside fades as your drab pre-dawn apartment comes into focus. But the real world, too, comes alive in a stream of color and light. Like the heads-up display (“HUD”) in a fighter jet, your field of view starts to teem with information in real time. In a matter of moments, you enjoy thought-controlled access to:
- Your calendar, schedule, & goals;
- Comprehensive biometrics & health data;
- Finances & budgets;
- Social & professional networks;
- The internet — which has adapted to the new technology;
- Options to join the best VR environments the internet can conjure — some for work, others for more imaginative pursuits;
- Your favorite memories — stored in the cloud, replayed in full 3D, with all five senses, as if you were reliving them;
- A 3D map, with distance to favorite hangouts, shops, & friends;
- Your favorite emotional states — conjured on demand, as needed; and — last, but certainly not least —
- All the knowledge and processing power of the cloud, and any AI developed in the last three decades.
You control it all with your thoughts — inputting, reviewing, and manipulating mental displays as fast as you can think.
“Hard” tech is a relic. Old-style phones & computers have evolved to include transparent and holographic displays. But even privacy activists — the holdouts who refuse implants — generally opt for hard tech paired with an eyepiece, like a next-gen Google Glass.
Why This Technology Is Inevitable
Think this is too intrusive? Think this would never happen? Think you’d refuse a cloud-linked implant?
“We can save and replay memories. The future is going to be weird.”
-Elon Musk, on Neuralink
The advantages of neural implants are too extensive to ignore. Without strong planning and a robust global regulatory framework, brain-computer interfaces (“BCIs”) will happen as soon as they can happen. They will come suddenly, and they will come at the whims of Big Tech.
Once they reach the public, the decision to refuse an implant will come with massive costs, both social and economic. Imagine competing with someone who can design emotions to achieve a desired result — someone whose thoughts are seamlessly enhanced by all human knowledge: with eidetic memory, perfect confidence, a faster mind, and a real-time mental link to anyone and anywhere in the world.
Welcome to Internet 2.0.
“Why shouldn’t people be able to teleport wherever they want?”
-Palmer Luckey, Founder, Oculus VR
In a future dominated by Big Data, globalization, and AI, the prospect of being left out of a ubiquitous mental tether to the cloud will be downright punishing.
Enhancing oneself is a seductive proposition. No matter what side effects or downsides may accompany a new path to advantage, there will always be those looking to push the envelope and get an “edge.” Even today, we regularly screen star athletes for performance-enhancing substances, and we regularly catch them breaking the rules. An entire cottage industry has now developed around the concept of nootropics — substances claimed to enhance cognition. Can we expect humanity to overlook the advantages of a thought link to the internet?
Without preemptive international steps, not a single nation on Earth will ban neural implants on its own.
This will leave privacy- or security-oriented individuals with an impossible Faustian bargain.
On one hand, you could refuse an implant. But you’d be outcompeted at every turn by a world moving in a different direction. No other technology would offer comparable data-input speed, cooperative potential, possibility for innovative self-enhancement, or sheer efficiency. Everyone else would be linked, and you would not — a bit like going without internet access in 2021. On the other hand, you could accept an implant, along with every privacy and security risk that entails.
Why We Should Act Now
The downsides of this technology are worse than anything Orwell or Huxley could ever have dreamed up.
They fall into three categories:
First, Big Tech will have a powerful incentive to sell emotion and thought-derived data to advertisers.
Big Tech will scan your thoughts — targeting advertisements to things you don’t even know you want.
For convenient access, every memory will be shared with a cloud server via live link. Even if there is an option to turn this off, you probably won’t want to. Big Tech’s business models revolve around the collection and resale of large quantities of user data. It will be in their interest to make data collection as cheap and feature-rich as possible — especially when that data forms the core of the product.
But what’s happening with data won’t always be transparent. Despite massive advances in global data privacy regulation (largely prompted by the EU), not everyone understands what data these companies collect, when or how they collect it, or what they do with it. (In December 2020, the FTC launched an investigation into several large technology companies’ policies involving the collection of user data.)
In a future world, data will be collected from all five senses, in all hours of the day — about everything from habit to consumer preference to hobbies to relationships to sensory data. There will be a powerful motive for technology companies to collect, parse, analyze, and sell this information in ways that consumers may not fully appreciate.
Which parts of your private life could the manufacturers (and their clients) see? When? And why?
These questions need to be answered in plain English before the technology is launched. This type of power and responsibility needs to rest on a solid foundation of trust.
Second, a neural implant system presents disturbing security problems.
With unfettered access to the brain, IT security issues and health problems could conceivably become one and the same. No encryption is perfect — even quantum cryptography will have its vulnerabilities. In a linked world, any data breach by hostile actors would leave your body, your mind, and your life subject to cyberattack (illness), ransom, or voyeurism.
Cyberattack and Neurological Illness
With the flip of a switch or the press of a key (literal or metaphorical), a hacker (state or non-state actor) might alter the brain function of millions or billions of people all at once. Depending on the nature of the device, dysfunction could kill.
In the future, every clever new IT security threat might have the potential to become a neurological pandemic.
This has already become a problem with some implanted and internet-linked medical devices like pacemakers. But only a tiny fraction of the population wears these devices. With widespread adoption of neural implant technology, the incentive to hack or disrupt would grow alongside the userbase.
For state actors, this wouldn’t just be a new arms race — it would be a race to develop and defend against new WMDs. If the entire world were linked, state actors would direct substantial portions of their defense spending to new offensive and defensive systems — systems aimed at protecting, and intruding upon, the brains of unsuspecting linked individuals, civilian and military alike.
For non-state actors, there would exist the constant potential for simple havoc. A dysfunctional brain can produce anything from disproportionate swings in mood and personality, to the most painful headaches imaginable, to paralysis, blindness, or seizures.
Hackers will always be innovating.
What if the goal were something more banal — say, financial gain? These implants would rest inside the brain. Presumably, hackers could access the most personal information in existence: memory, personality, emotion, and relationships.
What kind of ransomware might such a future scenario bring?
Imagine being locked out of your memories, emotions, and relationships — unless you paid a one-time fee.
What would your price be?
It’s hard to even think about this. But without a comprehensive regulatory framework, we are vulnerable.
What if hostile actors weren’t after anything concrete? What if they were just voyeurs?
Today, hackers use malware called remote access trojans (“RATs”) to infiltrate unprotected devices and stealthily watch and listen through phones’ and PCs’ built-in microphones and webcams. Compromised systems often remain compromised for some time without their users’ knowledge.
Consider a world in which your own five senses — your memories, preferences, thoughts, and emotions — could be subject to a RAT.
Third, the ethics of human augmentation have been debated for a long time, but the issue will come to the fore when we perfect implants.
The most obvious ethical issue for neural implant technology is access.
Who gets the technology, and at what cost?
Historically, internet-linked technology has been paid for in two ways: in hard currency (e.g., Apple), or via advertising data (e.g., Facebook, Google). If this dichotomy continues, there will be more than one way of paying for a neural implant.
Wealthy people will be able to afford the technology outright, with fewer data collection and privacy concerns. This is similar to Apple’s business model with the iPhone and iPad. The upfront cost covers the device and future updates. Advertisers aren’t directly subsidizing hardware costs.
Those less well-off may not be able to afford access, which risks widening the chasm of economic inequality and squandering human potential. But to the extent the lower rungs of the socioeconomic spectrum can afford a link, that cost will be borne by advertisers — and indirectly by the consumers in the form of data. This “digital divide” already exists in the privacy and security realm, and it should not be allowed to persist into a future of implanted technology.
The idea of augmented supersoldiers has been a staple of science fiction for decades. But how will the world’s militaries react to the ability to permanently enhance their soldiers — in morale, reflexes, recall, and mental acuity?
It seems clear that such a technology will find applications in the military sphere. Consider the advantages to situational awareness. When a threat emerges in any theater of war, every soldier will become aware of it immediately. They’ll know its location, its nature, how it is changing in real time, and whether anyone needs help dealing with it. This information could be presented visually, intuitively, without delay. Communication will happen by thought — without the need to stop or use any external device.
The technology could aid morale and efficiency in dire scenarios. It could help prevent PTSD and other trauma-related conditions. It could help prevent or heal neurological combat injuries.
But there are disadvantages, too.
All members of the military have an obligation to break ranks and disobey an illegal order. What if superiors could use the ubiquity of neural implant technology to impair free will — or otherwise undermine the ability of an individual to deviate from the group? Of course, this would be wildly unethical and subject to global outcry. But the world isn’t necessarily becoming more democratic. At least some countries on Earth would be willing to use soldiers in this way — even if in a clandestine manner.
Whatever is happening in the civilian world, military technology is always secret — and it’s usually a step ahead. The public wouldn’t necessarily know about misuse. Without legal consensus about what’s acceptable and what’s not, we’ll soon be in uncharted waters.
It pays to anticipate problems. Whatever is coming, a global, preemptive regulatory framework will help confront these issues before they arise, and before they become insurmountable.
In 2021, we’re in a position to establish precedent, harness the potential of this technology, and prevent pitfalls. As with Asimov’s Laws of Robotics, we need to start thinking about these issues in advance.
- What can the technology be used to accomplish? What is off-limits?
- What hardware and software security measures must be built into the technology?
- What privacy measures need to be implemented to protect users?
- What parts of the brain can these devices access? Why, when, and how?
The next three decades will be some of the most exciting in human history. But the problems that await, if we’re not careful, could be worse than even our ancestors’ dystopian sci-fi.
We need to be ready.