29 Thoughts on the Future of Digital Healthcare

Jessica Rose Morley
19 min read · Mar 23, 2019


It’s my birthday today, so here is one thought per year on the future of digital healthcare.

  1. Data does indeed save lives, but if we really want to capitalise on the opportunity for it to do so, we have got to move away from the belief that this is only the case when the data-driven solution directly affects a patient outcome, e.g. diagnosis or prediction. Data can save more lives if we also get excited about unglamorous, non-patient-facing datasets: performance management of contracts, effective matching of staff supply to point-of-care need, A/B testing of clinical pathways in ‘model hospitals’ to improve efficiency, real-time monitoring of GP and hospital running costs, and so on. These are all things we currently either don’t do, or don’t do well, and that ‘AI’ could do far better than we can. If we focused investment on these less glamorous areas of the healthcare system (standardising datasets and the like), we could release staff capacity and funding for face-to-face patient care and thus save lives indirectly. It would also give us an opportunity to ‘test’ the governance requirements of algorithmic systems in a way that poses minimal risk to lives. For example, we know that we will need to put a ‘shelf-life’ on algorithmic systems that keep refining themselves, because at some point after going live they will cease to be the ‘product’ that received regulatory approval. Working out how to effectively monitor the ‘turning off’ of intelligent systems that need to be re-approved, or how to notify regulators when an intelligent system needs to be reinspected, will take time (a rough, purely illustrative sketch of such a monitoring mechanism appears after this list). If we can pilot this on systems that present no risk to people, we can ensure we design the mechanism in the safest way possible.
  2. Whether or not the GDPR genuinely contains a right to explanation is debatable in and of itself, but assuming it does, it is necessary but not sufficient for medical situations. This is because the ‘right’ only kicks into effect if an individual feels as though they have a reason to contest a decision that was made about them. That is not enough in the context of medicine; we should be expecting and demanding a lot more.
  3. Greater attention needs to be paid to the Human-Computer Interaction (HCI) element of digital healthcare, particularly algorithmically driven healthcare systems. Focusing on explicability is important, as argued here by David Watson and others of the Turing and the Digital Ethics Lab, but when that explicability function needs to be either clinician-facing or patient-facing in real time, how the explanation is presented becomes less of a technical question and more of an interpretation question. Automation bias is a real issue that we have to tackle, and there is a risk that if we make the ‘black box’ seem transparent without considering how the explanation is interpreted, we will only heighten its potentially negative ramifications. If all that is holding a clinician back from immediately taking the recommendation of a decision-support tool at face value is a nagging concern that they don’t understand how the recommendation was made, and we take this doubt away, we heighten the chances of the clinician accepting a recommendation they would otherwise recognise as inappropriate, simply because it came from a machine. This risk is even higher at the patient-facing level. If the way that AI-driven symptom checkers are made ‘explainable’ is by getting them to state “There is an X% chance of you having Y condition because this was the condition that Z% of patients like you were diagnosed with”, we are assuming that this is meaningful to the individual reading the message, which means we are assuming an awful lot about their eHealth literacy and their ability to judge the credibility of information. If not handled carefully from an HCI perspective, such patient-facing intelligent systems could increase anxiety and damage people’s psychological integrity. This would not be because the individual in question was a ‘bad user’; it would be because the product was badly designed. We need to focus on front-end explainability as much as back-end.
  4. Building on this, we also need to recognise that, as Wachter and Mittelstadt have argued here, what is really needed is a right to reasonable inferences. They argue: “protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.” This is as true in the domain of digital or algorithmically enhanced healthcare as it is in other sectors or industries. Even if we told people how a clinical decision was made, and designed the explanation so that it made sense to the individual, this doesn’t really protect them from harm unless it is clear how that information is used. As this article in the BMJ highlights, medical apps often share data with third and fourth parties in order to deliver their benefits, but it is not always clear to individuals that this is happening or what the other parties are inferring from the data. Data is contextual; if it is taken out of context, the inference could be inaccurate, and this could be very damaging. This means we will need to think more broadly, and more flexibly, about what data protection means.
  5. Currently “direct care”, as used in the NHS, is defined as the care of an identified patient by an identified clinical professional. It is a key phrase when talking about the sharing of confidential identifiable information. However, its current definition lacks clarity on what happens in a heterogeneous system where care is being coordinated by a range of human and non-human agents dispersed in both time and space. It is time to face up to the fact that sometimes people are being cared for in a preventative way, or outside of traditional settings, and sometimes the advice being provided to an individual has been derived from a ‘conversation’ between two artificial agents. If we don’t face this reality head-on and start tackling long-standing assumptions, we create too much fear in the system and risk missing key opportunities to improve people’s lives, because we make individuals too scared of the potential ramifications for them. Direct care will mean different things to different people in different contexts; gaining clarity about what it means in the different scenarios being brought about by new places of care and new means of care is essential.
  6. Following on from point 5, we also need to be realistic that if we are talking about a move towards ‘p4’ medicine, one of those ‘p’s is personalised. This means we cannot assume that we will always be dealing with anonymised data. Saying that data will be anonymised has become a comfort blanket, because we are scared to have the conversation about what safe, ethical and legal use of identifiable data looks like. Right now this largely works, because data-driven solutions almost always operate at the group, and therefore anonymised, level, with the learnings applied at an individual level; however, this is likely to change in the not-too-distant future. Projecting out to what is likely to happen in the next 5–10 years gives us the time and the headspace to plan proactively, to build in civic engagement from the very beginning, and to get the framework in place before it is necessary, not after it is already happening.
  7. Standardisation is going to be vital if we are going to build an ecosystem, enveloping digital health technologies of all types, that people trust. People largely trust the use of pharmaceuticals in medicine because there are standards in place for every aspect of the production pipeline. We’re going to need this for digital tools too. This means we need standards for: evidence of effectiveness; testing, validation and evaluation procedures (how is this done, by whom, and on what datasets?); publication of results, covering both claims of the form ‘algorithm X performs better than a human’ (which raise questions about the context of the claim) and documentation published for transparency purposes (why was it developed, by whom, what decisions were made, and who does the tool not work for, and in what circumstances?); and, finally, standards for safe use that can easily be translated into inspection criteria. All of these standards need to be developed in partnership with all those affected by the digitisation of healthcare, and the development process needs to be agile and responsive enough to cope with the pace at which technology changes and develops. The clichéd excuse that regulation is slow and innovation is fast should not be accepted.
  8. There’s a need to recognise that when we are talking about creating the ethical, legal and regulatory framework for the safe development, deployment and use of digital health technologies, including but not limited to intelligent systems, we are actually talking about creating a seamless service pathway that is effectively circular. All digital tools continue to be developed and deployed whilst people are using them, and most of them perform multiple functions, which makes them harder to categorise in an overly dichotomous fashion. This is why we need to treat the regulation of these digital products as a service that is almost always in use and that loops between development and use all the time, depending on certain thresholds of change.
  9. The work that NHS Digital and NHS England have been doing recently to make sure the language used in NHS online content matches the language that people would use in real life is a fantastic illustration of the fact that problems created by the use of technology, e.g. chatbots, do not always need very ‘technical’ solutions. This is the value of having multi-disciplinary teams designing healthtech solutions: content designers and social scientists can point out things that developers might not, and vice versa.
  10. The definition of ‘healthy’ is not the same for everyone. mHealth tools (Fitbit, Garmin etc.) have done an excellent job of marketing one definition of ‘healthy’, and it is a very W.E.I.R.D. (western, educated, industrialised, rich and democratic) definition. This definition is further promoted through social media influencers who capitalise on trends around #wellness, #healthyeating etc. without realising that they are projecting a very narrow view of what health looks like. At face value this might seem inconsequential, but when you consider that most mHealth tools are designed to ‘nudge’ you into behaving a particular way, you can start to see the more sinister implications, especially if we start enabling the data from these tools to be integrated with ‘medical’ data. Saying that somebody is unhealthy if they only took 5,000 steps a day or ate more than 1,800–2,000 calories takes no account of that person’s current circumstances or the cultural meaning of food. It’s not up to us to ‘police’ health; it’s up to us to ensure everyone is able to achieve the best health outcomes for them. These outcomes need to be contextually specific and then written into code; code shouldn’t be dictating the outcomes in a top-down, inherently paternalistic fashion.
  11. In order to go forward, we should perhaps go back. εὐδαιμονία (eudaimonia) is a central concept in Aristotelian ethics. Broadly translated as ‘good spirit’ and often rendered as human flourishing, it was seen by Aristotle as the highest human good, achieved through a combination of internal goods (virtuous behaviour) and external goods (e.g. wealth). Digital health technologies could and should be used to bring back this more balanced recognition of a ‘good and healthy life.’ Too often we penalise individuals for, say, eating fast food rather than a homemade meal made entirely from organic produce, without acknowledging that their circumstances might be such that this is their only option. This leaves people feeling judged and can act as a barrier to seeking health advice, limiting the opportunities for productive and beneficial conversations between healthcare providers and individuals. If designed properly, digital technologies can act as an intermediary: ensuring all the relevant details of an individual’s life are passed on to the doctor (within appropriate consent models etc.), enabling the doctor to take circumstances into account, and translating the advice back in a way that is meaningful and implementable for the individual. Importantly, this must be done in a way that doesn’t subject every single aspect of a person’s life to the ‘medical gaze.’
  12. A related, but simpler and shorter point is that we need to stop assuming that information = action. Just because digital means provide us with an opportunity to present people with more information about themselves doesn’t mean we always should nor that it will automatically result in them taking action based on that information. This ignores the complexity of human reality.
  13. Essentially we should acknowledge that there is a difference between external and internal motivation, and when it comes to taking action to improve one’s health, this difference can really matter. The patient engagement capacity model developed and outlined by the authors here makes this very clear. As they explain, “the model draws upon social cognitive theory, developed in the 1960s and ’70s by Canadian psychologist Albert Bandura to explain the various ways that people acquire behaviors. This theoretical framework is widely used specifically to study how people acquire their health habits. The theory includes the concept of “reciprocal determinism”: the idea that there is a dynamic relationship between the person, their environment, and their behaviors, in which they continually influence each other and are influenced by each other. For example, when a person learns a new behavior, their confidence in performing that behavior in the future increases.” The authors argue that “this concept is particularly relevant in understanding capacity for patient engagement, because changes in one element (person, environment, or behavior) can result in changes in another. Identifying elements within these interconnected domains may improve our ability to help patients engage.” Interacting with digital tools is a category of patient engagement, and thus the factors affecting capacity outlined in the model will also affect whether or not a digital health tool has an impact.
  14. The way that we deliver healthcare is changing. This is an unavoidable fact. However, it does not automatically follow that what it means to deliver care, to care for someone, or to be cared for has to change as well. These are higher-order concepts, based on values and long-established ethical principles, whether they are the values embedded in the NHS constitution or the bioethics principles of beneficence, non-maleficence, autonomy and justice (or both). We should have frank conversations about whether these are the principles we 100% want to commit to, and then we should treat them like Wittgenstein’s hinge, i.e. the key part in the system that doesn’t move or change; its design stays static so that everything else works. We can change the design of the door as much as we like; the hinge stays immovable. Once we’ve done this we can come up with methods of translating these values into products and into policy.
  15. As Indra and I have argued here we need to create a system where there is distributed responsibility across all parts of the system. Crucially this doesn’t mean that responsibility is diluted so that the individual parties can rest easy thinking that ‘someone else has got this.’ It means the exact opposite. Everyone must feel equally responsible for the outcome so that nothing falls through the gaps, if someone falls asleep for a second, there is someone else to catch the ball. Additionally, it must be possible to trace back the cause of a result in order for it to be corrected or recreated (depending on whether the result was bad or good). For this process to be possible, responsibility (in the etiological sense of being causally accountable) has to be attached to each relevant node in the system so that it is possible to identify the causal chain (Floridi 2016b) (i.e. C was caused by B which was caused by A). To put that more simply, it must be possible to look back and see what caused (in a chain reaction sense) the good or bad outcome in order to learn how to repeat it or avoid it in the future.
  16. Wrestling with the need to find a balance between personal responsibility and state responsibility is not a new experience for the NHS. Indeed, it is probably at its very core, and debates have swung from one extreme to the other for decades. This debate is being aggravated in the digital age for a number of reasons. First, there’s a risk that in giving people more tools to look after themselves, and not supporting them to do this, we are shifting all the responsibility onto the individual. Secondly, questions about the ‘right’ level of governance abound and open up a can of worms in this space. Misinformation spread online, or on social media, is a good example here. Traditionally the healthcare system has been able to ‘gatekeep’ people’s access to information and thus ensure it was valid and credible. In the age of social media and apps, anyone can provide medical advice via a post on Instagram or via an app. This puts all the responsibility on the individual to determine what is a credible source of information, which has pushed some to argue that the State should be taking a harder line, dictating what counts as valid advice and hiding or removing access to whatever is not deemed official. But it’s not as simple as that. People have autonomy and they have the right to determine for themselves what works for them, which means they have the right to experiment with alternative medicines, try new diets or use mindfulness apps without feeling as though they are doing something disapproved of. Yet they also have a right to be kept safe. This is why we need to start having realistic conversations about proportionality, risk, and respect for people’s dignity and autonomy, so that we can find an approach that is flexible enough to allow people to try what works for them, creates spaces for diverse voices in healthcare, and yet doesn’t put people in danger. We need to collectively decide how we define this boundary and then take solid actions to protect and enforce it.
  17. Let’s stop assuming that being able to code somehow turns us into Albus Dumbledore, able to complete feats of rational impossibility with the magic of technology. Assuming that we can just throw ‘AI’ or ‘digital’ at healthcare and it will naturally find somewhere to land in a way that delivers on all the many opportunities we are aware of is unlikely to be a successful strategy. The Technology Acceptance Model tells us this. Instead we should have open conversations about the outcomes we want to achieve for all the different ‘users’ of the healthcare system (as defined by them, not in a paternalistic, top-down way) to give us the ‘demand’ for healthtech interventions, and then assess the ‘supply’ against this. That way we can see how far off we are from delivering truly meaningful change and focus investment where it is going to matter most. This might mean accepting that digital transformation is not going to make the biggest difference in areas, hospitals, GP practices etc. that are already high performing, and instead looking first to areas that are underperforming and not delivering the standard of care we would expect, and investing there. Otherwise cumulative advantage will mean that health inequalities widen, and fast.
  18. Looking at the future of digital healthcare, it is clear that what we are designing is a cybernetic healthcare system in which a person is treated as a ‘machine’ that can be kept in equilibrium by real-time monitoring of how alterations in inputs change outputs. If the system is truly cybernetic, these adjustments will be made automatically, with almost no input from the individual themselves; it will be a completely frictionless system. This could be hugely beneficial for people who are currently expected to absorb and act on overwhelming amounts of information; however, it could also grossly undermine people’s autonomy. We should start planning how to avoid this now.
  19. Beneficence is as important as non-maleficence from a governance perspective. As argued by Floridi here, Governance in the broad sense should be seen as being composed of three elements: regulation, ethics and governance (policy and standards). Regulation, policy and standards should be used to ensure non-maleficence, but they should be combined with ethics to ensure that the aim is to improve people’s lives. We should be committed to this: digital technologies should not be introduced just for the sake of it, or merely because we are sure that people will not be hurt by them. We should only introduce technologies where we are sure they will make things better; otherwise we might take away from things that matter, like face-to-face interaction and human empathy.
  20. In the world of ethical algorithms, we have got to move towards ethics in socio and not just in silico. This means that we need to encourage fairness, accountability and transparency at a system level and not just at an algorithm level. We could design 20 different healthcare algorithms that each perform perfectly ‘ethically’ and ‘responsibly’ when operating on their own, and yet find that, when they are joined together in an aggregated, heterogeneous system, the collective output is morally reprehensible. For example, we could inadvertently create a system that makes whole groups of people ‘invisible’, so that healthcare services are not designed to meet their specific needs. If we’re alert to this fact from the beginning, we can avoid this; if we’re not, we won’t.
  21. More generally, embedding ethics, and particularly bioethical principles, into algorithmic healthcare solutions needs to be done very carefully so that we are not overly reductionist or rigid to the point of being harmful.
  22. Healthcare data in general, and especially that owned by the NHS, is phenomenally valuable, but propagating the myth that it is almost untouchable in its value might ironically mean that we don’t capitalise on it. We should be realistic about its quality and subject it to rigorous and transparent evaluation, so that we can identify where we can make improvements through data standardisation, better collection practices, techniques such as de-biasing, and the use of alternatives such as synthetic data. In short, we should identify when existing healthcare data genuinely is best in class, so that we don’t end up with sub-optimal combinations.
  23. Conversations about ‘the robot will see you now’, or about what the healthcare system will look like in ten years as if that were an organic, inevitable result, are unhelpful and potentially distracting. We won’t automatically land on Mars; we could land on Jupiter or Saturn. If we don’t want people to be seen by robots, then we should decide this now and put the appropriate protections in place already, so that we don’t have too many repeats of the situation where a doctor delivered end-of-life news to a patient via a video-link robot; that is damaging to the overall endeavour and doesn’t demonstrate respect for persons, their dignity or their human rights. We should instead be aiming to create a healthcare system that is as human as possible: one where people, their needs, and those of their family are genuinely at the centre, where they feel they are listened to and heard, and given the time to express their wants and fears for their health. We can do this by looking at all of the ways technology can remove friction and ‘pain points’ from the current healthcare system, be that in booking appointments or in waiting for test results, and we should be technologically agnostic about how we achieve this.
  24. As Natalie Banner has argued elsewhere, far more eloquently than I could, we need to treat people and patients as part of the solution in designing data-driven healthcare systems rather than a problem that we need to overcome. For me this means we must keep society in the loop, not on the loop. In other words we must give people the opportunity to be involved in decision making throughout the design process, not view them as somehow separate, an entity that is looking over us waiting to tell us off if something goes wrong, an entity that is seen as a hurdle that shouldn’t be necessary to interact with if everything goes ‘right.’ This misses the point that the only way things will go ‘right’ is if people, their preferences and their opinions are central to what we deem acceptable or not. Yes, we need to be careful that we don’t fall foul of the Ford problem of people wanting faster horses rather than cars, but we should use that need for caution as a means of motivating us to plan engagement carefully, rather than as an excuse not to try altogether. There’s a reason Floridi and Taddeo state that social preferability must be at the core of data ethics. Let’s be bold and not ignore it.
  25. There is a need to create safe spaces for experimentation where we can look at whether a new digital solution that exists outside existing governance structures provides enough benefit to patients to warrant a rethink of the governance, rather than of the technological solution. These spaces should not just be technical sandboxes, but also safe ‘cultural’ spaces where an open-minded mindset is fostered, encouraged and welcomed in staff. This will lead to more healthcare-provider-led innovation, which will ultimately lead to better innovations.
  26. We need a rational framework to work out when a specific ‘task’ is amenable to being done by an ‘AI’, and when the level of associated risk is too high. Dr Silvia Milano of the DELab has outlined how to do this by considering the risks of AI from the perspective of decision theory and rationality (this is much more about the risk of using AI for a specific problem than about the risks associated with bias, lack of transparency etc.). Silvia’s argument is based on Russell & Norvig’s definition of AI as a system that acts rationally, on an instrumental definition of rationality (which states that an action is rational if it can be clearly linked to the desired outcome, even if the desired outcome itself might not be considered ‘rational’), and on a focus on individual-level decisions. This leads to the conclusion that a rationally acting system will compare all the options and choose the one that has the best possible outcome for it: the expected utility of an action A is the sum of the utilities of its consequences (outcomes) under all the possible states of the world, weighted by the probability of each outcome (this calculation is written out after the list). If we can frame a problem so that it can be solved rationally in this manner, then the risk of using an algorithm to solve it is relatively low. However, for problems that we cannot frame like this (for example, problems where the states and actions cannot be kept independent) the risk of using an algorithm to solve them is much higher. (I’ve explained this in more detail here.)
  27. The digital divide is about more than age. People don’t suddenly cease to be able to interact with the NHS via digital means when they hit a certain age; equally, people under that fictional age threshold do not always want to interact digitally. Instead, whether or not a digital option is appropriate will be entirely contextually dependent, and therefore highly variable across groups of people and within individuals as their situations change. Assuming that it’s as black and white as a number on a piece of paper blinds us to the need to think in a more nuanced way.
  28. We need to start thinking about the interaction between individual privacy and group-level privacy, particularly as we make greater use of genomic information. Profiling is not an activity confined solely to the advertising world. When an algorithm decides that person A has disease-incidence risk B based on an analysis of their data, it is not just assigning that risk to the person in question but to all others who would also fit this profile, even if they had no say in whether or not the analysis was done. If this information is published, the profile can be used to make inferences about anyone who matches it in a whole host of scenarios, without us (‘the system’) or the individual knowing about it. In other words, algorithms can introduce predictive privacy harms in the healthcare space at a group level. We should be cognisant of this so we can protect against it.
  29. To end on a high note: let’s be mindful of the challenges that digitising healthcare presents, but let’s not let awareness of the risks suffocate our endeavours to design a healthcare system that is fit for purpose for today’s ‘onlife’ (Floridi 2014) experience and delivers world-class outcomes for all who interact with it. Let’s decide what this looks like, then cut through the hype to work out which technologies will get us there now, in five years, in ten, and further into the future, and start working now to ensure that the governance framework allows swift implementation, so that we can start capitalising on the benefits as soon as possible.
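
Thought 1 mentions working out how to monitor the ‘turning off’ and re-approval of intelligent systems that keep refining themselves once live. Below is a minimal, purely illustrative sketch of what such a shelf-life check might look like; every name and threshold in it (ApprovedModelRecord, check_shelf_life, the AUC bound, the 365-day review window) is a hypothetical assumption rather than anything drawn from a real NHS or regulatory process.

```python
# Hypothetical sketch only: none of these names or thresholds come from an
# existing NHS or regulatory API; they illustrate the kind of automated check
# that could flag a live, self-refining system for re-inspection.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import List


@dataclass
class ApprovedModelRecord:
    """What the regulator originally signed off: a model version, an approval
    date, the performance envelope the approval relied on, and an agreed
    shelf-life before mandatory review."""
    model_version: str
    approval_date: date
    approved_auc_lower_bound: float  # minimum acceptable discrimination
    max_days_before_review: int      # the agreed 'shelf-life' in days


def check_shelf_life(record: ApprovedModelRecord,
                     current_version: str,
                     current_auc: float,
                     today: date) -> List[str]:
    """Return the reasons (if any) why the live system should be switched off
    and sent back to the regulator for re-approval."""
    reasons = []
    if current_version != record.model_version:
        reasons.append("model has been retrained/refined since approval")
    if current_auc < record.approved_auc_lower_bound:
        reasons.append("performance has drifted below the approved envelope")
    if today - record.approval_date > timedelta(days=record.max_days_before_review):
        reasons.append("the agreed shelf-life of the approval has expired")
    return reasons


if __name__ == "__main__":
    record = ApprovedModelRecord("v1.0", date(2019, 1, 1), 0.80, 365)
    flags = check_shelf_life(record, current_version="v1.3",
                             current_auc=0.78, today=date(2019, 3, 23))
    if flags:
        print("Notify regulator / switch off:", "; ".join(flags))
```

Which triggers should count (retraining, performance drift, elapsed time), who gets notified, and how the switch-off is actioned would themselves need to be agreed as part of the standards discussed in thought 7.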
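Thought 26 describes the expected-utility calculation in prose. Written out explicitly, using the standard decision-theoretic notation rather than notation taken from Silvia Milano’s own work, it is:

$$\mathrm{EU}(A) \;=\; \sum_{s \in S} P(s)\, U\big(\mathrm{outcome}(A, s)\big), \qquad A^{*} \;=\; \arg\max_{A}\ \mathrm{EU}(A)$$

where S is the set of possible states of the world, P(s) is the probability of state s, and U is the utility of the outcome that action A produces in that state; a rationally acting system chooses the action A* with the highest expected utility. The framing question in thought 26 is whether the states in S can genuinely be kept independent of the action chosen; where they cannot, this calculation is no longer well defined and the risk of delegating the decision to an algorithm rises accordingly.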

Jessica Rose Morley is AI Lead for DHSC, an MSc student at the OII, a Tech for Good enthusiast, and a volunteer for One HealthTech.