A new wave of transhuman technology is emerging that promises to bring humans and computers ever closer together. But are we ready for it? Is it ethical to give others access to our minds? How would it affect our evolution as a species? Is this the future we want?
Transhumanism, the philosophy of utilising technology to enhance human intellectual, physical, and psychological capacities, promises a future where the lines between human and machine are blurred; where humans become more than human: 'post-humans'.
Sarwant Singh predicted in 2017 that "the coming years will usher in a number of body augmentation capabilities that will enable humans to be smarter, stronger, and more capable than we are today." Elon Musk's recent unveiling of his company Neuralink's wireless implantable device has taken this to a new level. 'Threads' smaller than a human hair are implanted into the brain by a robot and can then detect the activity of neurons. In the immediate term this has great medical promise, and Neuralink is seeking FDA approval to begin testing on humans in 2020. In the longer term, however, Musk envisages "symbiosis with artificial intelligence" as the only way for humans to compete with AI and ensure our future survival.
Not to be left behind, Facebook has also jumped on the 'mind-reading' bandwagon. It recently announced its plans to "build a non-invasive, wearable device that lets people type by simply imagining themselves talking." Initially this would be promoted as a way for people with paralysis to 'speak' their thoughts, but in the long term Facebook wants everyone to be able to control their electronic devices using their brain signals. Other companies such as Kernel, Emotiv, and Neurosky are also developing brain-machine interfaces, and it is only a matter of time before more come on board. So how should we feel about this technology in general, and more importantly, in the hands of private companies?
The medical implications are not to be underestimated, promising doctors a much better way to interact with the brains of patients with paralysis or Parkinson's, for example. Musk claims that the Neuralink device has 1,000 times more electrodes interacting with the brain than the current leading FDA-approved device for patients with Parkinson's. And those who already view their phone or tablet as an extension of themselves might well celebrate taking this relationship to the next level. But there are some uses of the technology that may not sit so comfortably.
Take BrainCo, for example, which recently caused a social media storm with pictures of Chinese students wearing 'Focus headbands'. These devices measured the students' brain activity and lit up in different colours to show each child's level of concentration. BrainCo's CEO Bicheng Han claims the product is designed to improve concentration and is very much centred on raising academic achievement. However, BrainCo's stated aim to "build the world's largest brainwave database" has done little to dampen fears of potential misuse, with no clear statement or policy from BrainCo about what will be done with student data long term, or about how a parent can request that their child's data be deleted.
China is at the forefront of mining data directly from people's brains, using such brain surveillance devices to monitor staff in factories, on public transport, in state-owned companies and in the military. The questions this throws up are intimidating, to say the least.
In a reality of augmented humans, what are the ramifications for people who choose not to engage with such technology? How will they fare against employees who are happy to wear brain-reading caps? What will the schools of the future make of parents who decline to allow such technology to be used on their children? And how will people who readily sign up to such wearables protect themselves against the unpermitted use of their data, or worse still, hacking?
Globally, countries are starting to wake up to the threat of brain data breaches, under the banner of what has been dubbed 'neurorights'. Some governments are already trying to make brain data protection a human right. In the EU, the 1995 Data Protection Directive (95/46/EC) developed the right to privacy in this area, aiming specifically to protect individuals with regard to the processing and transfer of personal data, and the EU is planning to adapt its data protection rules to cover the new digital environment. But will lawmakers be able to keep pace with the technology?
A major advocate of such human rights is neuroethicist and researcher Marcello Ienca, whose 2017 paper outlined four specific rights for the neurotechnology age which he believes should be enshrined in law: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
This view is shared by Rafael Yuste, a professor of neuroscience at Columbia University in New York. He leads the Morningside Group, a collective of twenty-five scientists, ethicists and engineers, which has released a report also arguing for "neurorights" to protect citizens. Yuste has been working with the Chilean government to amend the country's constitution to enshrine the protection of brain data as a human right; the amendment is due to be voted on in parliament later this year.
Imagining a world in which these rights are not protected can read like a science-fiction horror film: a world where soldiers can be programmed to be less empathetic and more likely to blindly follow orders, where suspects' thoughts can be read without their consent during police interrogations, and where political dissent can be quashed by governments. These are extreme scenarios, but they are not beyond the realms of possibility unless laws are put in place to protect us.
In May 2017 The Economist ran the headline, "The world's most valuable resource is no longer oil, but data". We know that the major players in the digital world trade in data harvested from their users. Now they have an opportunity to literally get inside our heads. Is this a commodity we really want to trade commercially?
Elon Musk has argued that AI "is the single biggest existential crisis that we face". Perhaps a more imminent danger is the breakdown of the last wall of privacy many of us have: the innermost thoughts within our own brains. As George Orwell wrote in Nineteen Eighty-Four, "Reality exists in the human mind, and nowhere else." We should be very careful about who we allow to access this core part of ourselves.
The potential of a brain-machine interface is vast and seemingly limitless, yet the parameters of its use and the guidelines it will operate within are still being developed. We will be watching this space, not without some trepidation.