Tech-Policy | Policy Risk Assessment: Neuralink & Brain-Computer Interfaces
Recently, Elon Musk live-streamed a product presentation updating the public on the progress of his Brain-Computer Interface (BCI) device, Neuralink. BCIs like Neuralink employ invasive electrodes implanted in the brain and claim to alter cognitive processes in order to enhance human capabilities. While experts dismissed the event as another of Musk’s ingenious ploys to garner hype, such a potential transfiguration of human beings raises interesting questions from a policy perspective. The claimed therapeutic applications of such BCIs for brain disorders require an evaluative policy risk assessment exercise to ensure appropriate regulatory oversight. Here, I attempt a brief policy risk assessment of BCIs like Neuralink, focusing only on the quintessential steps involved in such a process and thereby building a bare skeleton for a detailed policy research study.
Normative Basis: Virtue-guided Approach
Novel technologies like Neuralink raise an overarching tension between uncertainty and need: the need for and the uncertainty surrounding these technologies exist in mutual tension, a tension aggravated by unnecessary hype. Additionally, the complex network of technologies, therapeutic applications, risks and benefits in the field of novel BCIs poses considerable implementation challenges, and it is therefore insufficient to lay out a set of principles and interests and expect their practical application to follow. In light of these considerations, any meaningful policy framework must be reinforced with principles of virtue. Virtue ethics is an approach to questions about how we should live and conduct ourselves that places particular emphasis on moral character, and it is most closely associated with Aristotle’s ethics. A virtue-guided approach to policy-making is appropriate in this context for several reasons.
First, a virtue-guided approach accommodates the flexibility, and the balance between need and uncertainty, that a policy framework requires.
Second, virtue ethics itself implies the necessity of exercising practical judgement to craft a response that is appropriate and proportionate to the particular circumstances at hand.
Third, virtue ethics enables us to attend to potential recipients of BCI interventions not merely as neo-liberal owners of their brains but as whole individuals with particular values, plans and relationships.
Last, and most importantly, virtue ethics ensures sensitivity to particular circumstances.
Model Framework for Policy Risk Assessment
Such BCIs require the development of a policy framework that is sensitive to the needs and uncertainties of therapeutic applications of neurological interventions. Accordingly, I employ Prof. Weigel’s model framework for policy risk assessment to carry out a brief policy risk assessment exercise concerning BCI devices.
POLICY GOAL — Safety
FRAMING OF PROBLEM — Unintended effects of the novel BCI include their potentially harmful impacts on patients’ health and brain functions.
POLICY RECOMMENDATIONS — (i) Responsibility on those developing the technologies to provide accessible evidence that any potential risks are not disproportionate to the benefits. (ii) Developers should compare the therapeutic benefits against the risk/benefit ratio of other treatment options. (iii) Onus on regulators to consolidate and publish a body of accessible and assessable evidence.
POLICY GOAL — Autonomy
FRAMING OF PROBLEM — (i) Philosophical understanding of our individual autonomy from an existential point of view. (ii) Disruption of individual autonomy on account of brain disease or injury.
POLICY RECOMMENDATIONS — (i) Informed consent (ii) Responsible consent in cases where the patient does not possess the mental capacity to decide due to serious mental or neurological health disorders.
POLICY GOAL — Privacy
FRAMING OF PROBLEM — (i) Information collected from BCI can be used for the identification of the person undergoing treatment. (ii) Information collected from BCI can be used for discriminatory purposes. (iii) Illegitimate interception of information collected by BCI devices.
POLICY RECOMMENDATION — (i) Developers should form associations to formulate best-practice standards for the collection and dissemination of data that should adhere to the governing data-protection standards.
POLICY GOAL — Equity
FRAMING OF PROBLEM — (i) Accessibility of treatments available only to the citizens of wealthy states. (ii) Treatment can be a cause of discrimination against those who live with neurological and mental health issues for as long as these interventions remain novel.
POLICY RECOMMENDATION — (i) Funding models that enable close relationships between science, industry and non-governmental organizations. (ii) Relevant stakeholders should work together at a societal level to combat social stigma and discrimination against individuals with brain-related disorders.
POLICY GOAL — Trust
FRAMING OF PROBLEM — (i) Hype that leads to a loss of trust and confidence in these technologies if the promises are not fulfilled. (ii) Challenges for developers and researchers in conveying the limits of current technology to potential users, so as to secure informed consent against a backdrop of hyped expectations.
POLICY RECOMMENDATION — (i) Developers should cultivate responsible communication practices concerning the limits of the realistic capabilities of BCIs. (ii) Academics and universities promoting commercial BCI products via their research findings should reflect on their own responsibilities when publishing them.
While technologies like Neuralink claim to offer potential therapeutic benefits, they frequently represent one of the few, or the only, treatment options currently available to individuals living with serious neurological or mental health disorders. There is, therefore, considerable value in inventive and reflective research and innovation practices in this field. However, this is also an area marked by uncertainty, vulnerability and hype. The virtues of responsibility and humility, currently in short supply, require that decisions — taken by professionals and patients alike — about undertaking interventions using novel BCI neurotechnologies be based on the best available evidence of their benefits and risks. Where interventions using novel BCI neurotechnologies have been demonstrated to be safe and effective, and provided they are subject to appropriate regulatory oversight, I would hope that the exercise of inventiveness would also make them cheaper, easier to use, and more widely and equitably available.