A black box for patients and doctors
Does this headline sound like a good idea? It all started with a tweet from my friend Jon, a junior doctor in Reading and a volunteer at UK Health Camp:
The way Jon intends it, this would be a widget attached to each bed or patient that records vital signs, drugs administered, and what everyone is saying to the patient. Shouldn’t we have something like this?
In a previous life, I worked for a software company that developed systems to manage the entire patient monitoring life cycle: from generating referrals for GPs to admissions in hospitals, from printing labels for blood samples to running the machines that analyse samples and return results. The systems we sold and managed generated millions of records per day and provided a central repository for patient data.
Technically speaking, it would be relatively easy to develop the sort of system Jon talks about. In fact, I remember this was an ongoing discussion when I was working in the sector (around 2007, just before Apple launched the first iPhone).
As care.data has shown, there is an important ethical component when we talk about collecting patient data: notwithstanding the technical aspects, is data collection of some kind justified? I believe that, in order to approach this problem, whether as technologists or medical professionals, we always need to ask ourselves three key questions:
- what can we learn from the data we collect?
- how can we change the way we operate based on what we’ve learned?
- what are the possible unintended consequences?
Data collection should never be for data collection’s sake. One of the most interesting projects we worked on was a monitoring system for healthcare-associated infections. This system brought together data about patients, their conditions, and the antibiotics with which they were being treated. Through a network of hubs, data coming from different regions could be collected, collated and compared, allowing medics to be alerted promptly to the spread of certain infections, particularly antibiotic-resistant ones.
Knowledge, however, should always serve a purpose: reviewing a procedure, confirming that processes are rightly in place, changing structures. One example I often use comes from outside the healthcare world: it’s what data has done to sports like rugby. The ability to collect data from the players on the pitch, as they play, tire, and pick up injuries, has changed the way coaches, physiotherapists and physiologists put together their lineups. Flight recorders, both flight data recorders (FDR) and cockpit voice recorders (CVR), exist first and foremost to ensure that, in the event of an incident, lessons can be learned to prevent it happening again.
So why not extend the idea of a healthcare flight recorder to the medics themselves, equipping them with body-worn cameras like those used by police officers? If we are seeking maximum transparency and the ability to learn, surely we should have them?

This is where we start having to consider the unintended consequences. We live in a society that encourages litigation. Medicine is a practice subject to bias and error (there is a masterly discussion of this on Nautilus). I fear that the mere existence of this type of data would simply increase the level of litigation, with dramatic consequences for the quality of healthcare, for doctors’ insurance premiums, and for the cost to the taxpayer. As much as I want society to move on from such levels of litigation, I cannot see it happening soon. On the other hand, the ability to collect and analyse such data could have a massive impact on the way we approach the clinical professions.
One of the big questions for anyone running data-intensive programmes is a synthesis of these three points: how can we collect data that is useful, and that can positively change the way we deliver services — sometimes life-saving services — while minimising the unintended negative consequences?