We need to be having a much more serious conversation about neural enhancement right now.

On July 5th, Wired published an interview with Bryan Johnson, the founder and former CEO of Braintree, an online payments company that he sold to PayPal for $800 million. One of Johnson’s new ventures, Kernel, aims to develop neural implants that can be used to treat conditions like Parkinson’s and Alzheimer’s. But Johnson does not equivocate about his desire to develop neural implant technology that goes far beyond treating and normalizing neurodegenerative conditions. In fact, his ultimate aim is to produce neural implants that elevate human intelligence, cognitive capacity, and learning ability to superhuman levels.

Johnson is far from being the only person with such a vision. Elon Musk recently founded Neuralink, which claims it wants to develop a “wizard hat for your brain.” Facebook also has skin in the game, and so does DARPA. On the whole, tremendous amounts of resources are being devoted to this branch of neural research, and they’re being invested by some outrageously powerful interests.

Make no mistake: neural enhancement is not a pseudoscientific trend like phrenology was in the 19th century. Real progress is being made. People who suffer from full-body paralysis can now type with their minds. Prosthetic limbs can now be controlled directly by the brain and can even transmit rudimentary touch information. Memory can be enhanced or impaired via deep-brain electrical stimulation. And kids can even buy the Backyard Brains RoboRoach Kit to cybernetically take control of a cockroach’s brain and movements with a bit of hardware and a smartphone app, ethics be damned.

Given the parties involved and the progress being made, it is reasonable to assume that at some point in the near future, some number of people will be able to cognitively enhance themselves beyond normal limits. We are being told by the tech elite that neural enhancement is the way of the future — that it will cure devastating conditions and unlock the brain’s full potential. Some power players, like Johnson, go so far as to suggest that neural enhancement will be a downright necessity if we are to keep pace in the twenty-first century:

We are currently developing a new form of intelligence in the form of AI that is increasingly capable, whether it’s conscious or not. For humans to be relevant in a matter of decades, there is no choice other than to unlock our brains and intervene in our cognitive evolution. If you try to imagine a world where we are happy 30, 40, 50 years from now, there is no version of that future where we have not been able to figure out how to read and write our neural code.

So, the message out of Silicon Valley is clear. Neural enhancement is humanity’s future. One can conceptualize it as a beacon of hope or as a requisite for survival, but it’s coming, no matter what.

What Silicon Valley is failing to mention, of course, is that the prospect of neural enhancement carries some potentially devastating, society-reorganizing pitfalls. We need to be having a much more serious and deliberate conversation about these pitfalls in order to mitigate their potential danger. Just as our harnessing of electricity begat both the electric light and the electric chair, and our harnessing of the atom begat both nuclear power and the nuclear bomb, so too will cognitive enhancement beget both terrific potential and terrific danger.

Three of the dangers that I’ll address in this piece are (1) worsened socioeconomic inequality, (2) dehumanized users, and (3) technological vulnerability and privacy concerns. Whether or not these dangers are mitigated in the future is for us right here in the present to determine. Accordingly, I’ll wrap up by offering a few brief sketches of preliminary solutions.

Neural Enhancement and Worsened Socioeconomic Inequality

Perhaps the most obvious potential danger surrounding neural enhancement is the possibility, or rather, the likelihood, that it will only be available to the class of elites that is already steadily pulling away from the have-nots in terms of resources, education, and life expectancy. We know that new forms of technology tend to be highly expensive and are thus typically available at first only to those with the most wealth. Cars, stereos, personal computers, cellphones, and HDTVs were the toys of the elite class before they became ubiquitous in modern life. There is currently no good reason to assume that the introduction of neural enhancement technology would play out any differently: those with the most resources tend to get access to nice things first.

This simple fact poses lots of potential problems — particularly in the areas of socioeconomic stratification and social control. The advent of neural enhancement could turn a widening socioeconomic gap into a veritable social chasm between the haves and have-nots. Those in power would have a much tighter grip on it.

As a thought exercise, imagine that a version of a simple cybernetic neural enhancement not unlike Adderall exists. It enables the user to focus better, to learn more quickly, to stay awake longer, and to think more clearly. Its owners are quicker on their toes, have better recall, and can master new technical skills with comparative ease. It has no side effects.

Now imagine that your boss has it, but you don’t. They’re one or two steps ahead of you in every conversation and at every meeting. All of their ideas are better than yours, no matter how much effort you put into brainstorming. They’re performing better than you and making more money than you, and it’s all relatively effortless thanks to that neural enhancement.

Do you anticipate that this would be a fair and healthy working relationship, or do you think your day-to-day existence would be rife with insecurity, self-doubt, and a feeling of powerlessness? Call me neurally unenhanced, but the latter seems far more likely to me. It’s not unreasonable to envision a future in which those who are neurally enhanced may readily advance in society while those who aren’t fall behind.

A scarier prospect is that those who are neurally enhanced would more directly exert power and control over the non-enhanced. Recall that DARPA is currently working on neural enhancement technology, and on multiple fronts, ranging from prosthetics, to memory enhancement, to learning improvement. It’s true that the security a cognitively enhanced military could afford us might seem attractive. Foreign militaries wouldn’t stand a chance against our cognitively enhanced fighter pilots and Special Forces teams! But, once more, consider the ramifications: the prospect of being under military control in a state of martial law is already bleak enough. Being under the boot of a military that’s cognitively enhanced seems utterly hopeless.

So, what do the elites of Silicon Valley have to say about the likelihood that only some people in the future will be neurally enhanced while others will be behind? In his interview with Bryan Johnson, Wired author Steven Levy attempts to broach the topic of the society-reorganizing potential of neural enhancement. They have the following exchange:

Levy: But if some people raise their abilities by brain augmentation, wouldn’t people who don’t change be at a disadvantage? They might not be able to compete in education, in jobs, and even in cocktail conversation. So it really wouldn’t be a choice, would it?
Johnson: Well, how do you feel about some people getting a private education and others being stuck in inner city schools?
Levy: I don’t feel great about it.
Johnson: So it’s already happening. People somehow think that a cognitive enhancement is something new to the scene. It’s not. We just simply have different forms. A private education is a form of enhancement. Humans always do whatever they can to maximize their well being [sic]. If we simply add technology to the brain, it’s a continuation of what humans have always done.

Johnson’s argumentation here is both fallacious and alarmingly cavalier: not only does he deploy a ridiculous false analogy in equating a private-school education with neural enhancement, but he also appeals to the status quo while doing so. Essentially, his argument is, “A select group of privileged people already has better schooling. Therefore it’s okay for a future select group of privileged people to have unrestricted access to the most powerful technology in human history.”

This argument doesn’t hold up against basic rhetorical and ethical scrutiny, and yet, amazingly, Levy lets it slide. And others in the tech media are guilty of similar crimes, as we’ll see later on.

Neural Enhancement and Dehumanization

Another conversation about neural enhancement that we need to be having revolves around its potential to change people’s personalities. Some entrepreneurs and researchers may dream of a future in which conditions like social anxiety and depression are treated via neural enhancement. I, for one, have found myself fantasizing about a future in which certain high-functioning, high-powered sociopaths could be made to feel more human empathy and emotion. But the main issue here is that there’s a ton of gray area: where do we draw the line in deciding what to treat? When it comes to human personality, the lines between trait, quirk, and defect can often be drawn arbitrarily.

Further, what does our society look like when we treat our Plaths and our Kafkas as bugs in the system rather than as individuals to be cherished? The people who control neural enhancement technology will be the ones with the incredible power to make these determinations. If anyone has pressed the Silicon Valley elites on these issues, they seemingly haven’t yet deemed them worthy of a cogent response.

When we enhance one cognitive strategy, other cognitive strategies will be relatively diminished as a result. Perhaps in the not-too-distant future, some people may have a simple cognitive enhancement that facilitates quantitative reasoning. That may ultimately cause those people to use quantitative reasoning to solve problems that they would have otherwise solved using, say, spatial reasoning or emotional reasoning.

In other words, it’s fair to suggest, I think, that those with neural enhancements could lean on those enhancements to the detriment of other cognitive domains. Just as the ubiquity of mobile devices has changed how we figure out who owes what when the check arrives at the local café, so too will neural enhancement change how we resolve an argument between children at home, or how we exercise soft skills in the workplace.

Given its potential for changing the way we think and reason, neural enhancement can rightly be conceptualized as a potential form of deep and pernicious social control: quantitative reasoning and computational thinking have already run amok in Silicon Valley and elsewhere, to the detriment of more humanistic forms of qualitative and moral reasoning. Neural enhancement has the real potential to worsen this problem.

Technological Vulnerability and Privacy Concerns

Futurists and tech elites may wish for a world in which the human brain is comprehensively neurally enhanced — one in which all forms of reasoning, from the quantitative to the emotional, would receive a boost, ushering in the utopia that tech evangelists have been heralding for however many centuries now. But even in that best-case scenario, there would still be technological pitfalls that we would need to look out for. And, yet again, nobody is pressing the who’s who in the push toward neural enhancement for more details or a more comprehensive vision.

For one, it’s a reasonable assumption that cognitively enhanced minds would be networked in some way. And, as we already know, networks have numerous vulnerabilities. Most obviously, networks and the devices on them can be penetrated and hacked. We saw this in 2016, when the Mirai botnet co-opted hundreds of thousands of internet-connected home devices in order to cripple Dyn’s servers via a distributed denial-of-service attack. We’ve also seen that Tesla cars, for all of their top-of-the-line technology and security, could be remotely hacked.

It’s true that neural implants, if networked, would almost certainly be protected by never-before-seen, state-of-the-art security protocols. But it’s common sense to assume that lots of individuals with plenty of technical skill and few moral scruples would work very hard to bypass that security. The digital-security arms race that we see playing out every day would simply transfer to a much higher-stakes domain: access to (or control of) human cognition. It’s one thing if hundreds of thousands of home thermostats get hacked for the purposes of a DDoS attack. It’s quite another if hundreds of thousands of human brains receive the same treatment.
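To make the mechanics of that 2016 attack concrete: Mirai succeeded largely because enormous numbers of consumer devices shipped with factory-default passwords, so compromising them required little more than a dictionary lookup. Here is a minimal, hypothetical audit sketch in Python (all device names and credentials are invented for illustration):

```python
# A deliberately simple, hypothetical sketch of why a Mirai-style attack
# works: devices still running factory-default credentials are trivially
# identifiable. All names and passwords below are invented.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "12345"), ("user", "user")}

device_inventory = [
    {"host": "thermostat-01", "username": "admin", "password": "admin"},
    {"host": "camera-07", "username": "admin", "password": "Xk2#9vQ!"},
    {"host": "router-03", "username": "root", "password": "12345"},
]

for device in device_inventory:
    credentials = (device["username"], device["password"])
    if credentials in DEFAULT_CREDENTIALS:
        print(f"{device['host']}: VULNERABLE (factory-default credentials)")
    else:
        print(f"{device['host']}: ok")
```

If keeping a fleet of thermostats ahead of a password dictionary remains an unsolved problem at scale, the burden of proof for securing a networked brain implant should be considerably higher.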

Perhaps in five or ten years some major and unforeseeable breakthroughs in cybersecurity will be made — breakthroughs that render completely obsolete the concerns I’ve voiced above. But even then, neural enhancement is still absolutely saturated with moral and ethical problems.

Consider, for one, the privacy and data-gathering concerns we’ll have to deal with. The modus operandi for new hardware and software out of Silicon Valley is to gather as much data as possible by default. Beyond that, we know that the United States government is collecting tons of data on its citizens, as well. Just so we’re clear here — this isn’t tin-foil-hat conspiracy stuff. This has been covered extensively in the mainstream media.

It’s reasonable, given what we’ve seen from Silicon Valley thus far, to assume that neural implants would track and record tons of intensely personal cognitive data. But are any of the power players in Silicon Valley leading the charge in having a conversation about how potentially problematic this is? Not exactly. Take, for example, this excerpt from an interview between Tim Urban, of waitbutwhy.com, and Elon Musk about his Neuralink venture. Here, Urban brings up the potential for privacy invasion via neural enhancement and writes this summary of the exchange:

‘So, um, will everyone be able to know what I’m thinking?’
He [Musk] assured me they would not. ‘People won’t be able to read your thoughts — you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.’ Phew.

Never mind that the technology Musk is describing doesn’t exist yet (and therefore any potential safeguards it might have don’t exist yet either) — one assurance from Musk that everything will be hunky-dory in the future is seemingly sufficient for Urban. Of course, perhaps it’s unrealistic of me to think that a person who penned an article titled “Elon Musk: The World’s Raddest Man” would press him on ethical issues.
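To see how thin that assurance is, consider what it would have to mean in implementation terms. The following is a deliberately naive, entirely hypothetical sketch (every name here is invented; no such system exists) of the “you would have to will it” gate Musk is describing:

```python
# A hypothetical sketch of a volitional-consent gate between thought and
# transmission. Nothing like this exists yet; all names are invented.
from dataclasses import dataclass


@dataclass
class ThoughtPacket:
    content: str
    willed: bool  # did the user deliberately trigger sharing?


def transmit(packet: ThoughtPacket) -> str:
    # Musk's entire assurance reduces to this one check being impossible
    # to forge, spoof, or flip remotely, which is precisely the part
    # that does not exist yet.
    if not packet.willed:
        return "blocked: no volitional trigger"
    return f"transmitted: {packet.content}"


print(transmit(ThoughtPacket("lunch idea", willed=True)))
print(transmit(ThoughtPacket("private worry", willed=False)))
```

The guarantee, in other words, amounts to trusting that the willed flag can never be forged or flipped remotely, which is exactly the class of vulnerability discussed in the previous section.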

A neural implant is, at the end of the day, a piece of digital hardware. And all digital hardware is prone to failure and obsolescence. Neural implants will glitch out, fail, and need to be replaced. What will it look like when a neural implant becomes dated and gradually decays? What will it look like if and when a neural implant simply fails? None of these questions have answers yet. The soft reassurances of Elon Musk and his ilk are sadly insufficient, given the awesome potential of this technology.

Solutions

Hopefully by now I’ve convinced you that we need to be thinking much more critically about what the best way forward is for neural enhancement. To be clear, I’m not a Luddite: I don’t think it’s remotely realistic or practical to suggest that we should ban this emerging technology. The direction in which things are trending is readily apparent. However, I do think it behooves us to develop and agree upon a set of ethical guidelines for neural implants such that if and when they are introduced, they don’t do devastating damage to both the haves and the have-nots of the world.

What will this set of guidelines look like? For starters, the obvious concerns about data and information gathering need to be addressed. Granted, many people have tried to have this conversation with Silicon Valley and the government and have not gotten terribly far. Unless something fundamental shifts in how we handle the tech sector, it’s fair to assume that the very troubling data-collection trends we’ve already seen will continue.

Another consideration — and, for my money, a much more intriguing one — would be imposing legal limits on the amount of neural enhancement afforded by these implants. You can think of these as “cognitive speed limits,” in a way, and they could be defined relatively easily through empirical research. Psychology and cognitive science have already given us troves of information about how people generally fare on various cognitive tests. We can and should apply that science to develop a set of cognitive performance benchmarks and then design neural enhancements around them. If we fail to do so, we could witness some extreme power shifts and consolidations in the twenty-first century.
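To sketch what such a limit could look like in practice, regulators might measure unenhanced performance on a standardized test and cap enhanced performance at a fixed number of standard deviations above the baseline mean. A minimal illustration in Python (all scores and the three-sigma cutoff are invented for the sake of the example):

```python
# A hypothetical "cognitive speed limit" derived from baseline test data.
# All numbers here, including the three-sigma cap, are invented.
import statistics

baseline_scores = [94, 101, 97, 110, 88, 103, 99, 106, 92, 100]

mean = statistics.mean(baseline_scores)
stdev = statistics.stdev(baseline_scores)

Z_CAP = 3.0  # hypothetical policy choice: no implant may push a user past +3 sigma

ceiling = mean + Z_CAP * stdev
print(f"Unenhanced mean: {mean:.1f}, standard deviation: {stdev:.1f}")
print(f"Legal enhancement ceiling on this test: {ceiling:.1f}")
```

The hard part, of course, is not the arithmetic but the enforcement; still, a benchmark-derived ceiling at least gives regulators something concrete to enforce.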

Finally, if our technological overlords must decree that we’ve no choice but to enter this brave new world of neural enhancement, then we should establish spaces, communities, or even intervals of time in which neural enhancement simply cannot be used. We’ve been successful in setting up protected physical spaces in order to retain and shepherd the natural beauty and resources of Earth, and perhaps it’s time to start thinking about protected cognitive spaces, as well — lest we lose touch with that which makes us fundamentally human.

If Silicon Valley and its elites like Musk and Johnson are guilty of one thing above all else, it’s failing to understand and cherish the precious, unquantifiable humanness that we collectively share. Those of us who believe in humanness and the humanities have a responsibility to hold the tech elites to our moral and ethical standards, rather than letting them hold us to their cold, numerical ones.

Understand that it’s not a question of if we are going to see neural enhancement, but when. So let’s be prepared for it. How much of neural enhancement’s damage is mitigated in the future is for us right here in the present to determine. Let’s make sure we’re holding Silicon Valley to a higher standard.