Neuroprivacy: learning from past privacy failures to protect the future

Published in SciTech Forefront · Jul 21, 2022

Juhi Farooqui and Samantha Pettersen

Neurotechnologies for a range of medical and consumer applications have developed rapidly over the last decade. While breakthroughs in assistive neurotechnology show great promise for the future of prosthetic devices, the recent surge of corporate entry into this landscape raises privacy concerns. The vulnerability of individuals’ neural data to exploitation for profit, discrimination, or prosecution creates an urgent need for proactive protection of neuroprivacy rights.

Neuroprivacy: What is it and why does it matter?

Neuroprivacy refers to the right of individuals to keep their neural data private. Neural signals can encode sensitive information, ranging from health status to preferences to concealed knowledge. The full extent of information that can potentially be extracted from neural data is still unknown, and will likely change as methods of data collection and analysis advance. These features make it imperative that we take proactive steps to ensure the privacy, safety, and security of our brain data.

Neurotechnology extracts information (e.g. intended direction of movement) from neural data to control an end product (e.g. a prosthetic limb). However, loss of ownership over one’s own neural data can lead to major privacy risks.
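To make that decoding step concrete, the sketch below shows one simple way such a pipeline can work: a linear decoder, fit by least squares, maps neural firing rates to an intended two-dimensional movement direction. This is a hypothetical illustration using simulated data, not the method of any particular device; real neuroprosthetic decoders are substantially more sophisticated.

    # Hypothetical sketch (simulated data, illustrative only): decoding an
    # intended 2-D movement direction from neural firing rates with a simple
    # linear, population-vector-style decoder.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_samples = 50, 500

    # Give each simulated neuron a random "preferred direction" (unit vector).
    preferred = rng.normal(size=(n_neurons, 2))
    preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

    # Simulate intended 2-D velocities and noisy, linearly tuned firing rates.
    velocity = rng.normal(size=(n_samples, 2))
    rates = velocity @ preferred.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

    # Fit a least-squares linear decoder: firing rates -> intended velocity.
    decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

    # Decode a new burst of activity into a movement command for the device.
    new_rates = velocity[:1] @ preferred.T  # noiseless test sample
    print("decoded movement command:", new_rates @ decoder)

The structural point matters for privacy: whoever holds the raw firing rates can, with a model like this, recover the user’s intentions, which is exactly why ownership of the raw data is so consequential.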

As companies like Neuralink and Facebook/Meta begin to promise neurotechnology to consumers, it becomes ever more critical to ensure that individuals’ neural data are protected. Unlike medical data, which are regulated under HIPAA and by the FDA, consumer neural data fall under no comprehensive policy framework. As the neurotechnology landscape continues to expand, so does the urgency of safeguarding neuroprivacy.

Risks: Lessons from Neurotechnology and Genetics

Many of the risks posed by neural data mirror those posed by another form of sensitive data over the last few decades: genetic data. The case of genetic data privacy offers unique insight into the potential risks that neural data may present, and possible approaches to mitigating them.

  • Loss of ownership over one’s own neural data

The proliferation of consumer genetic testing in the United States has created an ongoing conflict over data ownership, with large corporate stakeholders lobbying to retain control over data collected from individuals. In one chilling example, Myriad Genetics patented the BRCA1 and BRCA2 genes, whose mutations are linked to hereditary breast cancer, thereby holding a monopoly over breast cancer screening for 15 years. This is a dangerous precedent for the neural data landscape, where the risks of losing ownership over one’s own data are already becoming clear. For example, recent studies have shown that sensitive details such as financial information or personal preferences can be extracted, or “hacked,” from neural data. Neural data are of great interest to marketers for the same reason, creating incentives for this highly personal form of data to be sold or shared by corporate holders.

  • Discrimination on the basis of neural data

Because of the sensitive nature of the information that can be extracted from neural data, the danger of discrimination is of deep concern. Researchers have pointed out the potential for information extracted from brain data, such as health status or attention levels, to be used in discriminatory ways by employers or insurers. Similar concerns about genetic data led to the passage of the Genetic Information Nondiscrimination Act (GINA) in 2008, 13 years after it was first introduced. That long path to passage highlights the need for proactive action: safeguards should be in place before such discrimination becomes possible on a large scale.

  • Privacy risks of law enforcement usage of neural data

Forensic genealogy, the practice of using genetic data from direct-to-consumer genetic testing companies for law enforcement purposes such as identifying suspects, has raised major privacy concerns, with critics likening it to unreasonable search and seizure. These concerns led the Department of Justice to establish an interim policy limiting the practice, and Maryland and Montana to restrict it by statute. As with genetic data, policing technologies leveraging neural data could be a powerful tool for law enforcement, but they carry great privacy risks because of the personal information that neural data can reveal. Moreover, these technologies are less precise than they are perceived to be, opening the door to potentially catastrophic errors.

Recommendations

We recommend the federal adoption of a comprehensive regulatory framework for the protection of consumers’ neural data as collected by corporations via commercial devices. Concerned members of the public can write to their representatives, particularly senators who sit on the United States Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, to advocate for the following legislative actions:

Classify brain data as sensitive data. Given the unique risks posed by corporate misuse of neural data, brain data should be explicitly designated a form of sensitive data, as modeled by the European Union’s General Data Protection Regulation (GDPR). Such a designation would safeguard its use for research advancement while underscoring the need for protection from misuse.

Assert consumers’ ownership of their neural data and their right to choose how those data are used and shared. An individual’s ownership of their own brain data must be closely protected, and the sale, sharing, or use of neural data must be subject to the informed, voluntary consent of the person from whom they are collected. There is substantial precedent for this approach in genetic data law, and recent legislation in Chile offers an early model for applying it to neural data.

Prohibit the collection and use of neural data for coercive or discriminatory purposes. Employers should be barred from gathering neural data for purposes such as productivity monitoring or collecting information that could serve as grounds for termination of employment. Likewise, insurance providers must be restricted from discriminating in coverage on the basis of information gleaned from neural data. Such protections form the basis of the Genetic Information Nondiscrimination Act and are recommended by neural data privacy experts.

Create clear guidelines and limitations for law enforcement access to neural data. To secure Fourth Amendment privacy rights, any neural data collected by a private entity must be protected from unauthorized use by, or disclosure to, law enforcement. The use of neural data in law enforcement investigations should be subject to standardized guidelines governing when and how these data can be used, as modeled by genetic privacy legislation in Maryland and Montana.

Juhi Farooqui, BA

PhD Student in Neural Computation at Carnegie Mellon University, researching somatosensory neuroprostheses at the Rehab Neural Engineering Labs at the University of Pittsburgh. BA in Cell Biology and Neuroscience with minors in Computer Science and Cognitive Science from Rutgers University. Formerly a post-baccalaureate research fellow at the Center for Neurotechnology at the University of Washington and a Technology and Liberty Project intern with the American Civil Liberties Union of Washington.

Samantha Pettersen, MPH

A recent graduate of the University of Pittsburgh with a degree in Public Health Genetics. She also has a B.S. from Michigan State University in Genomics and Molecular Genetics. Samantha is now a Public Policy Fellow at the Association for Molecular Pathology (AMP). The views expressed in this brief are the author’s own and do not necessarily reflect positions taken by AMP.
