Distributed cognition: designing for the medical expert and the machine

Dr Cosima Gretton
Published in Design No Harm
Jun 11, 2017 · 4 min read

In 2013 the Journal of Patient Safety published a report estimating that as many as 440,000 Americans die each year from preventable medical error. In the years since, new technologies to prevent such errors have been developed, if not necessarily implemented.

But how far can throwing technology at the problem get us?

In a four-year study at a tertiary hospital in Hong Kong, a team analyzed medication errors related to technology. Of these, just 1.9% were due to the technology itself. The remainder, 98.1%, were due to ‘socio-technological’ factors: errors originating in the ‘knowledge coupling’ between the expert and the machine.

One example of such errors comes from my short time in anesthetics. The oxygen saturation probe shows a pulse wave and a number. If it falls off the patient’s finger or can’t get a good reading, the pulse wave goes flat or wildly erratic and a question mark is displayed instead of the number. But there is a grey zone in its design. On occasion, when the device cannot detect a clear signal, the amplitude of the wave is only very slightly attenuated and the trace continues to appear reliable. And instead of showing a question mark, the device continues to display the last recorded number. One day, in the middle of an operation, my consultant looked at the patient and the display and suddenly repositioned the saturation probe. Years of experience had reduced his trust in the machine. The digit that initially read 100% now refreshed to show the patient had de-saturated (lost oxygenation) to 80%. The tube had slipped down into one of the main bronchi; the problem was resolved by simply adjusting its position.
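To make that grey zone concrete, here is a minimal sketch, written in Python rather than any real monitor’s firmware, of how a display routine that silently holds the last valid number differs from one that surfaces the uncertainty. The thresholds and function names are invented for illustration:

```python
# Hypothetical sketch of a pulse oximeter display routine.
# Not real device firmware; the thresholds are invented to
# illustrate the 'stale value' grey zone described above.

def display_spo2_naive(signal_quality: float, reading: float, last_valid: float) -> str:
    """Holds the last valid number when the signal is degraded.
    The clinician sees a plausible digit (e.g. '100') long after
    the probe stopped reading reliably."""
    if signal_quality < 0.2:   # clearly lost: show a question mark
        return "?"
    if signal_quality < 0.6:   # degraded: silently keep the old number
        return f"{last_valid:.0f}"
    return f"{reading:.0f}"

def display_spo2_safer(signal_quality: float, reading: float, last_valid: float) -> str:
    """Surfaces the uncertainty instead of masking it."""
    if signal_quality < 0.6:
        # Flag the value as stale so the expert's attention is drawn to it.
        return f"{last_valid:.0f}? (check probe)"
    return f"{reading:.0f}"

# On a degraded signal the naive display shows a confident-looking number:
print(display_spo2_naive(0.4, 80.0, 100.0))  # -> 100
print(display_spo2_safer(0.4, 80.0, 100.0))  # -> 100? (check probe)
```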

Well-designed, integrated and intelligent software may go a long way towards reducing deaths due to medical error, but new kinds of errors can and will arise. Experts using technology are subject to all sorts of cognitive and decision-making biases that also need to be taken into account.

Some examples are shown below:

[Table: examples of technology-related cognitive errors. Based on Coiera, E. (2015). Technology, cognition and error. BMJ Quality & Safety, 24(7), 417–422.]

Dr Itiel Dror, a cognitive scientist at University College London, describes the use of technology in expert domains as a form of ‘distributed cognition.’ There is, he argues, a spectrum from human-only cognition (for example, observing and diagnosing a skin infection) to machine-only cognition. The recent emergence of machine learning approaches to perceptual diagnosis is an example of the latter, including Stanford’s computational pathologist, Enlitic’s deep learning for radiology and Watson’s oncology recommendations.

[Figure: the spectrum of distributed cognition, from human-only to machine-only. Based on Dror, I. E. & Harnad, S. (eds.) (2008). Cognition Distributed: How Cognitive Technology Extends Our Minds. John Benjamins, Amsterdam.]

As technology progresses towards the machine end of the spectrum, automating a greater proportion of a doctor’s cognitive tasks, we will redefine the role of the clinician. New kinds of cognitive error may arise, different in nature from those we face now.

Among the most concerning to me at the moment, given the emergence of artificial intelligence, are errors of omission and commission.

An error of omission occurs when the clinician doesn’t do something because the system didn’t tell her to. An error of commission occurs when she does something she might otherwise not have done, just because the machine said so.

A recent article in FastCoDesign highlighted that one of the biggest problems with AI is not some Terminator-style apocalypse (though Nick Bostrom would disagree), but the gradual attrition of our ability to make decisions.

We’ve already seen this in the attrition of pilots’ skills, which contributed to the Air France crash in 2009. France’s Bureau of Investigations and Analysis (BEA) has called for improved pilot training even in the context of highly automated flying.

How might we design systems to fully benefit from the combined cognitive efforts of experts and their machines, and reduce the errors that arise from their interaction?

One answer perhaps lies in a new approach called Deep Design, coined by Sheldon Pacotti at Frog Design in a recent article. He argues that until now, design in technology has focussed on shutting the user out: when something goes wrong and crashes, we don’t want to know why, we just want it fixed.

With intelligent algorithms, however, we need to create a conversation. We, as consumers as well as experts, want to know why the system came to a particular decision or recommendation. For experts, that transparency will be essential to maintaining some level of decision-making ability.

But the machines, at least for now, also need us. An algorithm must be trained, and for an expert to do this effectively, transparency into its process is needed.
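As a toy illustration of what that transparency might look like, here is a hedged sketch in plain Python. The features, weights and threshold are invented, not drawn from any real clinical model; the point is that a recommendation is returned together with the contributions that drove it, so the expert can interrogate the ‘why’ and not just the ‘what’:

```python
# Toy, invented example: a linear risk score that reports per-feature
# contributions alongside its recommendation, so the clinician can see
# *why* the machine said so, not just *what* it said.

FEATURES = {"age": 0.04, "systolic_bp": 0.02, "on_anticoagulant": 0.9}  # invented weights

def explain_recommendation(patient: dict) -> dict:
    # Contribution of each input = weight * observed value.
    contributions = {name: weight * patient[name] for name, weight in FEATURES.items()}
    score = sum(contributions.values())
    return {
        "recommendation": "flag for review" if score > 5.0 else "no action",
        "score": round(score, 2),
        # Ranked explanation: which inputs pushed the score up most.
        "drivers": sorted(contributions.items(), key=lambda kv: -kv[1]),
    }

print(explain_recommendation({"age": 72, "systolic_bp": 150, "on_anticoagulant": 1}))
# -> {'recommendation': 'flag for review', 'score': 6.78,
#     'drivers': [('systolic_bp', 3.0), ('age', 2.88), ('on_anticoagulant', 0.9)]}
```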

I’m not a believer in the robot doctor concept. Healthcare is too human ever to be fully automated. A cancer diagnosis, for example, is best relayed by another human, who can at least try to empathize and may themselves one day experience the same. The most important question, as medicine becomes digitized and automated, is: how does the human fit into all of this?
