Tech-Policy | Policy Risk Assessment: Neuralink & Brain-Computer Interfaces

The Blurring Boundaries between Man & Machine

Introduction

Recently, Elon Musk live-streamed a product presentation to update the public on the progress of his Brain-Computer Interface (BCI) device, ‘Neuralink’. BCIs like Neuralink employ invasive electrodes implanted inside the brain and claim to alter cognitive processes in order to enhance human beings. While experts dismissed the event as another of Musk’s ingenious ploys to garner hype, such a potential transfiguration of human beings raises interesting questions from a policy perspective. The claimed therapeutic applications of such BCIs for brain disorders call for an evaluative policy risk assessment exercise to ensure appropriate regulatory oversight. In this piece, I undertake a brief policy risk assessment of BCIs like Neuralink, focusing only on the quintessential steps involved in such a process and thereby building a bare skeleton for a detailed policy research study.

Normative Basis: A Virtue-guided Approach

Novel technologies like Neuralink raise an overarching tension between uncertainty and need. Additionally, the complex network of technologies, therapeutic applications, risks and benefits in the field of novel BCIs poses considerable implementation challenges; it is therefore insufficient to lay out a set of principles and interests and expect their practical application to follow automatically. The need for these technologies and the uncertainty about their development exist in mutual tension, which is aggravated by unnecessary hype. In light of these considerations, any meaningful policy framework must be reinforced with principles of virtue. Virtue ethics is an approach to questions about how we should live and conduct ourselves that places particular emphasis on moral character; it is most closely associated with Aristotle’s ethics. A virtue-guided approach to policy-making is appropriate in this context for several reasons.

First, a virtue-guided approach provides the flexibility a policy framework requires to balance need against uncertainty.

Second, the idea of virtue ethics itself implies the necessity of using practical judgement to craft a response that is appropriate and proportionate to the particular circumstances at hand.

Third, virtue ethics enables us to attend to potential recipients of BCI interventions not merely as neo-liberal owners of their brains but as whole individuals with particular values, plans and relationships.

Last, and most importantly, virtue ethics ensures sensitivity to particular circumstances.

Model Framework for Policy Risk Assessment

BCIs of this kind require the development of a policy framework that is sensitive to the needs and uncertainties of therapeutic applications of neurological interventions. In light of the same, I employ Prof. Weigel’s model framework for policy risk assessment to carry out a brief assessment exercise concerning BCI devices.

Prof. Weigel’s Model Framework for Policy Risk Assessment

POLICY GOAL I — SAFETY

FRAMING OF PROBLEM — Novel BCIs may have unintended effects, including potentially harmful impacts on patients’ health and brain functions.

POLICY RECOMMENDATIONS — (i) Responsibility on those developing the technologies to provide accessible evidence that any potential risks are not disproportionate to benefits. (ii) Developers should carry out comparison of therapeutic benefits with the risk/benefit ratio of other treatment options. (iii) Onus on regulators to consolidate & publish a body of accessible and assessable evidence.

POLICY GOAL II — AUTONOMY

FRAMING OF PROBLEM — (i) Philosophical understanding of our individual autonomy from an existential point of view. (ii) Disruption of individual autonomy on account of brain disease or injury.

POLICY RECOMMENDATIONS — (i) Informed consent (ii) Responsible consent in cases where the patient does not possess the mental capacity to decide due to serious mental or neurological health disorders.

POLICY GOAL III — PRIVACY

FRAMING OF PROBLEM — (i) Information collected from BCI can be used for the identification of the person undergoing treatment. (ii) Information collected from BCI can be used for discriminatory purposes. (iii) Illegitimate interception of information collected by BCI devices.

POLICY RECOMMENDATION — (i) Developers should form associations to formulate best-practice standards for the collection and dissemination of data, in adherence to the governing data-protection standards.

POLICY GOAL IV — EQUITY

FRAMING OF PROBLEM — (i) Accessibility of treatments available only to the citizens of wealthy states. (ii) Treatment can be a cause of discrimination for those who live with neurological & mental health issues as long as these interventions remain novel.

POLICY RECOMMENDATION — (i) Funding models that enable close relationships between science, industry & non-governmental organizations. (ii) Relevant stakeholders should work together at a societal level to combat social stigma & discrimination against individuals with brain-related disorders.

POLICY GOAL V — TRUST

FRAMING OF PROBLEM — (i) Hype that leads to loss of trust & confidence in these technologies if the promises are not fulfilled. (ii) The challenge for developers and researchers of conveying the limits of current technology to potential users, in order to secure informed consent against a backdrop of hyped expectations.

POLICY RECOMMENDATIONS — (i) Developers should cultivate responsible communication practices concerning the realistic capabilities and limits of BCIs. (ii) Academics & universities promoting commercial BCI products via their research findings should reflect on their own responsibilities when publishing them.

Conclusion

While technologies like Neuralink only claim to offer potential therapeutic benefits, they frequently represent one of the few, or only, treatment options currently available to individuals living with serious neurological or mental health disorders. There is, therefore, considerable value in inventive and reflective research and innovation practices in this field. However, this is also an area marked by uncertainty, vulnerability and hype. The virtues of responsibility and humility, currently in short supply, require that decisions taken by professionals and patients about undertaking interventions using novel BCI neurotechnologies be based on the best available evidence of their benefits and risks. Where such interventions have been demonstrated to be safe and effective, and provided they are subject to appropriate regulatory oversight, I would hope that the exercise of inventiveness would also make them cheaper, easier to use, and more widely and equitably available.
