AI in the Doctor-Patient Relationship

Identifying Some Legal Questions We Should Be Asking

Claudia Haupt
Data & Society: Points
5 min read · Jun 19, 2018


Photo credit: Bojan Bjelic

Commentators seem confident that artificial intelligence (AI) will transform healthcare. Health apps prompt healthcare providers to explore incorporating “mHealth” into medical care. Meanwhile, technology companies connect existing technologies like Amazon’s Alexa to diagnostic AI, creating new avenues of medical advice-giving. As we may be moving from computer-aided diagnosis to algorithm-generated advice, new technical, medical, and legal questions emerge.

Technological innovation in medicine occurs in a densely regulated space dominated by asymmetries of knowledge and social relationships based on trust. These relationships are governed by a legal framework of professional advice-giving. Traditionally, this framework assumed interactions between human actors such as patients and doctors. Introducing AI challenges this assumption, though there is disagreement whether human doctors will work with AI or will be replaced by it. I contend that AI will not entirely replace human doctors (for now) due to unresolved issues in transposing diagnostics to a non-human context, including both limits on the technical capability of existing AI and open questions regarding legal frameworks such as professional duty and informed consent.

These are the key legal features defining the regulatory space in which medical advice is dispensed:

Professional licensing establishes a minimum educational basis for admission into the medical profession. Sometimes criticized for its economic objective of limiting access to the profession, it does serve the traditional purpose of ensuring the health and safety of patients. Licensing regimes are state laws enacted under the police powers of the states. Once professionals are licensed, professional discipline seeks to ensure that they uphold the standards set by the profession.

Professional speech doctrine is still in flux. Several federal appellate courts have recognized First Amendment protection for professional speech against state interference. In Wollschlaeger v. Florida (2017), for example, the U.S. Court of Appeals for the Eleventh Circuit struck down a Florida statute that prohibited doctors from asking their patients about gun ownership as a matter of course, holding that the law violated the First Amendment.

Fiduciary duties address the knowledge asymmetries between doctor and patient, creating duties of loyalty and care. The patient entrusts the doctor with providing guidance regarding their health decisions. In return, the doctor owes the patient a duty to act in the patient’s best interests, according to the knowledge of the profession.

Informed consent also responds to knowledge asymmetries between doctor and patient, ensuring that the interest in patient autonomy is protected. In order to make informed choices, the patient — with whom the ultimate decision rests — must be aware of the range of options. To set the baseline, jurisdictions currently take one of two competing approaches: a reasonable patient standard or a reasonable physician standard, emphasizing the patient’s or the physician’s perspective on informed consent, respectively.

Malpractice liability rests on the premise that only good professional advice is protected. Bad professional advice is subject to tort liability, and the First Amendment provides no defense against malpractice claims. But there is usually more than one answer that counts as good professional advice. Tort law takes this into account through its “two schools of thought” or “respectable minority” doctrines, allowing diverse views to count as defensible medical knowledge. Grey areas exist between bad professional advice and defensible minority interpretations. These grey areas often come into relief when professionals have religious, political, or philosophical objections to the professional consensus: say, your doctor considers abortion a grave moral wrong and refuses to advise on the full range of reproductive healthcare, or your therapist believes in conversion therapy, a practice the profession rejects. Resolving these conflicts presents a legal and ethical challenge.

Photo credit: Paul Goeltz

What might change or be disrupted by introducing AI into this regulatory space? It is useful to consider the introduction of AI into medicine in a larger societal context.

Legal scholar Jack Balkin suggests that “we are rapidly moving from the age of the Internet to the Algorithmic Society.” He defines the Algorithmic Society as “a society organized around social and economic decision making by algorithms, robots, and AI agents who not only make the decisions but also, in some cases, carry them out.” In this emerging society, we need “not laws of robotics, but laws of robot operators,” and “the central problem of regulation is not the algorithms but the human beings who use them, and who allow themselves to be governed by them. Algorithmic governance is the governance of humans by humans using a particular technology of analysis and decision-making.” We should likewise begin to identify questions about forms of algorithmic governance in the medical advice-giving context. At each regulatory access point, the guiding questions ought to be what the incorporation of AI changes, and how this change should best be addressed.

Licensing might prompt the question whether the AI itself ought to be subject to a licensing requirement. Professional discipline will have to confront questions such as whether a certain level of technological knowledge will be required of licensed professionals using different forms of AI in their practice. The same question may arise in professional malpractice, if incorrect use or non-use of an available technology results in bad advice. Liability more broadly will be complicated by the introduction of AI: who may be held liable, as a matter of professional malpractice or product liability, when algorithmically generated advice causes harm? In terms of informed consent, we might ask whether the introduction of AI will require separate consent. Since fiduciary duties are based on a relationship of loyalty and trust, the law will have to address what loyalty and care mean when advice is generated by AI. Finally, the question of professional speech is complicated significantly when a black box generates advice without explanation, and the rise of precision medicine will make this issue particularly salient.

Rather than assessing each app, algorithm, or AI individually, focusing on the regulatory framework allows us to ask the relevant legal questions independent of the specific technological instantiation. In addition to acknowledging that the doctor-patient relationship is densely regulated, it is worth remembering that this regulation exists for a reason: to protect the values underlying this special relationship built on knowledge and trust. The introduction of AI into this relationship should proceed with these values in mind.

Claudia E. Haupt, Ph.D., J.S.D., is a Fellow at Data & Society Research Institute, Resident Fellow at the Information Society Project at Yale Law School, and Research Fellow at the Solomon Center for Health Law & Policy at Yale Law School. Haupt was recently named one of ASLME’s 2018 Health Law Scholars.
