How we built a digital ethics framework for health charities

Developing a sector-specific ethics framework without being overwhelmed by the AI ethics explosion

DataKind UK · Aug 29, 2019


The digital ethics framework, created by DataKind UK for the Association of Medical Research Charities

By Christine Henry and Tracey Gyateng, DataKind UK

The past year or so has seen an explosion of principles, codes of conduct, checklists, tools and more around AI and data ethics. The sheer abundance made it difficult to identify common principles and themes, and to work out how they might align with an organisation's existing values.

The Association of Medical Research Charities (AMRC) wanted to help their members navigate this messy ethics landscape. They asked DataKind UK to develop a digital health ethics framework for, and with, their members — using existing principles if possible.

The resulting framework and report can be found here, along with a list of questions for charities to ask themselves and their tech partners when undertaking digital health projects, so that the principles can be readily applied.

This post discusses how we did it. We focus on the ethics framework methodology: the details of how we worked with general artificial intelligence (AI) and data ethics principles and adapted them to the needs of medical research charities.

Ethics is only one aspect of working in this space; law and regulation also have a large part to play. It is nonetheless important for these health research charities to define their ethical aims in digital health, to help guide their work in the early stages of projects and where the law may be unclear.

1. Don’t reinvent the wheel

AMRC believed that one important — and indeed ethical — consideration was not to simply add to the list of ethical codes, but to assess what was out there already. In the absence of a perfect fit with anything published, we aimed to link their guidance for members to existing high-level principles, and to the long tradition of bioethics in medicine. This meant identifying what was distinctive about their context to find relevant principles, while also connecting their principles to best practice and existing requirements.

2. Understand how it’s going to be used

Is your wheel for a car? A bike? A wheelchair? Just as identifying the context in which an algorithm will be deployed is a key part of ethical assessments in AI, defining context is important when identifying ethical values and building your framework for assessment.

For AMRC members, we set the context as: Data and Digital Health Research, By Charities And Partners, In the UK System. It’s a bit less catchy than just saying “digital health” but it gives us a better handle on where other ethical work has been done and what might be different this time. To break that down a bit:

  • Health research. AMRC members’ work is on the research side, and with innovative projects, the risks and benefits can be uncertain. Charities’ research also draws on rich knowledge of their users, and on co-development with users as patients and individuals.
  • Digital and AI. How does the use of technology surface different ethical considerations from “traditional” biological interventions? For example, how is a wearable tracker different from a drug?
  • The charity sector. Charities are held to different ethical standards than corporations, and may have different values and aims — so codes of ethics devised broadly may need changes or additions to be fit for use in the non-profit sector.
  • Public health and data. The public health field has a rich history of ethical research, because it acknowledges the tension between the individual and the group — for example, quarantine of infectious disease patients limits their individual freedom for the good of the public. As “big data” and “deep data” come into play in digital health, the chances of impacts beyond the individual patient become greater.
  • Partnering, within the UK health system. Charities may partner with large tech companies or start-ups to accomplish projects in digital health. The charity will have access to users and sometimes data, but may have limited ability to ensure its values are carried through all steps of the project. This multi-party style of work also leads to ethical considerations that neither sector would face individually. This partnering takes place within the UK health system — an important consideration for issues around scalability and sustainability of projects.
Digital Health ethics for Medical Research Charities — clearly defining your context can help to identify which principles you may be expected to use, as well as where there will be useful and relevant material.

3. Start with a general plan

Hub in the middle, rim on the outside, circular. That kind of thing.

In this case, we took as a basic starting point the ethical aims of the health research space. The Georgetown Principles are four common high-level principles from bioethics:

  • Beneficence: Do work that is to the benefit, not the detriment, of individuals and society. The benefits of the work should outweigh the potential risks, and there should be an unmet need.
  • Non-Maleficence: Aim to avoid harm, including harm from malicious or unexpected uses. This is closely related to beneficence.
  • Autonomy: Enable people to make choices. This requires people to have sufficient knowledge and understanding to make a decision. “Informed consent” is the term commonly heard in the context of health.
  • Justice: Be fair — the benefits and risks should be distributed fairly and not worsen existing inequities. In digital health projects such as machine-learning tools, unfairness can arise from biases in input data and in how the model or algorithm is constructed and applied.

A fifth addition

During our research phase for this project, the “AI4People” principles developed by Floridi et al. were published. They have since informed the European Commission High Level Expert Group’s Ethics Guidelines for Trustworthy AI. The authors looked at several of the main principle sets for AI, and concluded that while the common ethics principles for AI mainly aligned with the four bioethics principles above, a fifth major principle was also needed.

This fifth principle is explicability. This covers explainability of AI decisions and things like auditability, transparency of model choices, and accountability — knowing who is responsible for an outcome.

We introduced this principle to AMRC members by contrasting an AI or algorithmic tool with a traditional medical intervention such as a drug: if a patient is given a drug and it doesn’t work for them, that’s seen as ethically OK (barring negligence), because biology is complex.

If, on the other hand, an algorithm recommends that the patient not be given that drug in the first place because it’s predicted not to work for them, we are in the realm of looking for an explanation, because this is now something based, at least to some extent, on human choices. There should be some explanation, and some responsible person or organisation. (It’s likely that what is required will change as society and medical professionals become more familiar with algorithmic decisions — ethics and AI are not static fields!)

So, thanks to this rich academic work on AI, we already have a successful merger of two of the contexts for our charities (health research and AI/tech), and we chose to use the five-principle framework as a solid starting point. Any charity project in digital health should, in general, be held to these five principles.

The next step was to look at ethical norms and common principles in the other related areas, to see if anything was not covered, or even overturned any of the existing five principles — and ideally, to work out why.

4. Add in the other contexts

My wheel analogy is in serious danger of wobbling right now, but the other contexts are whatever makes the basic wheel™ fit for these users’ purpose: reinforcing spokes, grippy tyres, that goo stuff that goes inside tyres when your context is that you’re a serious cyclist, etc.

A. The Charity Sector

We found several ethical codes or principles across charity actors, and for our key comparator with the AI4People principles we went with the Digital Development Principles. Developed by a consortium of international development actors, this is probably not the first place you’d look for inspiration around digital health. But there are a lot of similarities: they’re from charities, working with vulnerable user groups, using tech in often innovative ways, and partnering with tech companies.

We did a cross-comparison of principles and found that while some align more or less neatly with the five core principles of AI4People, some points seem to need something new. This is not to say that the AI4People principles are incomplete, but that they are designed to be completely general; here we are narrowing the context and applying some context-specific values.

An example of comparing the 5 AI4People principles and the 9 Digital Development Principles — even when many aims align, different groups split them out in different ways and it can be complex to work this out!
Many of the Digital Development Principles — designed for charities working with vulnerable people — suggest ideas around collaboration, openness, and context, which are not necessarily required in general AI ethics.
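To make the cross-comparison concrete, here is a minimal sketch in Python of the kind of mapping involved. The alignment tags are our own rough, illustrative reading of the nine Digital Development Principles against the five AI4People principles, not AMRC’s formal analysis; in practice each judgement came from discussion, not code.

```python
# Illustrative cross-comparison of two principle sets. The alignment
# tags below are a rough, hypothetical reading, not AMRC's official mapping.

AI4PEOPLE = {"beneficence", "non-maleficence", "autonomy", "justice", "explicability"}

# Each Digital Development Principle, tagged with the AI4People
# principle(s) it most plausibly falls under. An empty set means we
# couldn't find a clean home for it in the general framework.
DDP_ALIGNMENT = {
    "design with the user": {"autonomy", "beneficence"},
    "understand the existing ecosystem": set(),
    "design for scale": {"justice"},
    "build for sustainability": set(),
    "be data driven": {"beneficence"},
    "use open standards, open data, open source": set(),
    "reuse and improve": set(),
    "address privacy and security": {"non-maleficence"},
    "be collaborative": set(),
}

# Sanity-check that every tag really is an AI4People principle.
assert all(tags <= AI4PEOPLE for tags in DDP_ALIGNMENT.values())

# Principles with no clean home are candidates for new, context-specific
# principles -- here they cluster around openness, collaboration and context.
gaps = sorted(p for p, tags in DDP_ALIGNMENT.items() if not tags)
print("Candidate context-specific principles:", gaps)
```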

The Digital Development Principles have several references to being collaborative, being open, and reuse and improvement. AMRC members discussed this and decided on two high-level principles: Community-Mindedness and Open Research. These are important ethical considerations especially in the civil society sector.

To illustrate community-mindedness, consider a private-sector platform barring some third-party health apps when the company decides to take that functionality in-house. For private companies this may be inconvenient for users, but it is not always seen as unethical. Charities, on the other hand, are expected by users and the public to put users first.

This particularly resonated with AMRC member charities representing people with chronic conditions: if you as a user have both asthma and diabetes, say, you will benefit from a medication adherence app or platform with the same user experience for both, and you could experience harm if you have to master two different systems. Even if this adds development or negotiation burden between tech partners, charity projects in particular need to be mindful of this sort of context.

Similarly, open research benefits patients by allowing others to learn from your mistakes and findings, and making data available (suitably anonymised) can lead to further findings. Publicly communicating these principles to tech company partners from the start helps to ground any discussion about data sharing or platform openness.

B. Public Health

While Autonomy is one of the standard principles of bioethics research, Public Health is in many ways about the limits of individual autonomy, and about working across populations. Some AMRC charities have already been involved in public health projects, such as vaccination or anti-smoking campaigns, but the field comes into digital health because of the potential for using immense amounts of data, and for linking data in ways that can have impacts beyond an individual patient and their condition. For example, genetic information could also pose risks to relatives, if it changes their insurability.

We took as an example the UK’s Pandemic Flu preparedness planning, conducting a similar analysis to the charity cross-comparison above. Together with the AMRC members we were working with, we concluded that we should draw out the principle of proportionality. This has already been used by the UK Information Commissioner’s Office (ICO) in a ruling about the Royal Free’s partnership with Google DeepMind. The use of all 1.6 million patient records from the Trust was found to be disproportionate to the benefit gained.

We suggest that proportionality as an ethical requirement may be defined even more broadly than as a purely regulatory requirement, so that it applies to ethical considerations even at an individual level. For example, it may be considered disproportionate (in the ethical if not the regulatory sense) for a project to link a patient’s full medical record with their location history from activity tracking and their social media data. Even if there is some benefit from this (for example, better exercise recommendations), the risks of re-identification, loss of privacy and negative outcomes may be so high that it could be seen as disproportionate.
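One crude way to begin making proportionality operational is a data-minimisation check: list the data that a project’s stated purpose actually justifies, and flag anything requested beyond that. The sketch below is hypothetical — the purpose and field names are invented, and a real assessment is a human judgement, not a set difference.

```python
# Hypothetical sketch of a data-minimisation check as a first-pass
# proportionality screen. Field names and purposes are invented examples.

JUSTIFIED_FIELDS = {
    # Stated purpose -> data fields that purpose plausibly justifies.
    "exercise recommendations": {"age", "activity_history", "mobility_limits"},
}

def proportionality_flags(purpose: str, requested: set) -> set:
    """Return the requested fields that the stated purpose does not justify."""
    return requested - JUSTIFIED_FIELDS.get(purpose, set())

# Linking a full medical record and social media data to the modest aim
# of exercise recommendations should raise flags for human review.
flags = proportionality_flags(
    "exercise recommendations",
    {"age", "activity_history", "full_medical_record", "social_media_posts"},
)
print("Disproportionate to the stated purpose:", sorted(flags))
```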

C. Partnering within the UK system

For many projects, health research charities will need to partner with tech companies to accomplish innovative digital research. This is no bad thing! Yet it also limits the charity’s control: it may not have the in-house expertise to assess whether, say, a deep-learning algorithm meets all the other ethical goals.

In looking at work already done on the topic of partnering, as well as, for example, the new NHS Code of Conduct for Data-driven Care and the Digital Development Principles, we identified that sustainability is a context-specific ethical value.

A project that is not sustainable (whether for financial or operational reasons) can cause harm to patients or medical professionals who have grown to rely on it; for example, if an app is shut down after two years of use, or is left unmaintained so that it stops sending patients reminders.

Sustainability is linked with the principle of justice, especially around ideas of equity and accessibility for all patients. A project with no sustainable plan to reach the whole patient population in need is likely to omit precisely those who are most vulnerable. For example, a pilot research project may run only in urban research hospitals, or only on iPhones, missing those who are geographically isolated or on lower incomes.

Again, we can usefully compare with the five AI4People principles intended for all comers, to show why partnering across non-profit and for-profit sectors adds new ethical requirements.

A start-up on its own will aim to reach financial sustainability with a product, but if it does not, this is rarely seen as an ethical failing so much as an accepted risk. Likewise, a charity may in theory choose to fund a particularly valuable project for patients from grant funding, or to just break even, without ever reaching revenue generation.

But because of the partnering context, there is what we can think of as a distinct ethical obligation to achieve financial sustainability, in order to meet the needs of both the charity and the corporate partner. And because this is in the UK system, the path to financial sustainability will often be to create evidence of the value of a digital health project that justifies its cost for NHS Trusts or Local Authorities (this is not to rule out private purchasing as an option in some cases).

Operational sustainability may mean that the tech partner has at least thought about hiring expert staff around the UK, training requirements for medical professionals, and having a maintenance plan.

5. Road-test!

At our roundtable event with AMRC staff and members, as well as coming to agreement on the principles above, we discussed some other high-level principles.

For example, is there a need for a principle of being evidence-based? While this did come up in some of the sources (for example the Digital Development Principles), it was ultimately seen as an underlying process requirement across everything: you cannot tell whether you are achieving any of the other goals around beneficence, justice, proportionality, and so on, without collecting evidence. However, the initial framework is of course subject to revision and testing on real partnerships, and this point is definitely one to revisit. If focusing on this principle aids discussions between charities and their tech partners, use it!

Similarly, some principles and codes explicitly list being innovative as a value or ethical goal. Round table participants agreed that innovation is part of their work — but not as an end in itself, only as a means to address people’s unmet needs. Therefore, this is captured under the existing principle of beneficence, and the consensus was that for this group of organisations, being innovative does not add much to the high-level ethical framework.

The ethics framework

At the end of this process, we ended up with nine principles for the first iteration of the AMRC ethics framework for digital health.

The AMRC Ethical Framework for Digital Health

Where do we go from here?

We present this as one of the first adaptations of the AI4People five-principle framework for a specific organisation/sector. It is an example of navigating through the many, many, many ethics documents that now exist, especially in the data and AI space.

To generalise our approach, take a general set of principles for AI and add in any new requirements for your sector, domain, and user needs (a code sketch of this recipe follows the list):

  • Choose your starting point (ideally a fairly general and well-accepted set of principles)
  • Identify the context(s) in which you are working with AI and data
  • Find principles made for or applied to that context, compare them with the principles you already have, and identify what’s different
  • Why is it different? Is this just a difference in wording or emphasis, or is there something new here? If it’s new, can we explain why the ethical requirements in our context are a special case? For instance, charities are socially expected to meet different ethical standards from corporations, so a framework designed to apply to corporations as well needs refining
  • Finally, are there tactical or strategic reasons for making a separate and more visible ethical requirement? For example, proportionality could be seen as inherent in the balance of beneficence and non-maleficence, but, as in the AMRC case, practitioners may find added value in bringing it out as a separate principle. In this case, to emphasise the idea that sometimes in the digital health field individual benefits must be balanced against public harms, and vice versa.
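The recipe can be summarised as a small function — a sketch only, since “alignment” is a workshop judgement rather than anything computable. The context sets and mappings below are shorthand for the AMRC analysis in this post, reduced to illustrative labels.

```python
# Sketch of the generalisation recipe. Each context maps its principles
# to the base principles they align with; anything unaligned becomes a
# candidate addition, pending the "why is it different?" discussion above.

BASE = {"beneficence", "non-maleficence", "autonomy", "justice", "explicability"}

CONTEXTS = {
    # Context -> {context principle: base principles it aligns with}
    "charity sector": {
        "community-mindedness": set(),
        "open research": set(),
        "design with the user": {"autonomy", "beneficence"},
    },
    "public health": {
        "proportionality": set(),  # arguably implicit in balancing beneficence
                                   # and non-maleficence, but worth surfacing
    },
    "partnering in the UK system": {
        "sustainability": set(),
    },
}

def candidate_additions(base, contexts):
    """Collect context principles with no home among the base principles."""
    return {
        context: sorted(p for p, aligned in mapping.items() if not aligned & base)
        for context, mapping in contexts.items()
    }

# The candidates that emerge match the four principles added to the
# AI4People five in this post, giving nine in total.
print(candidate_additions(BASE, CONTEXTS))
```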

Creating a set of ethics principles is a starting point. It can help an organisation define its values, for itself and for users, funders, partners, and the public. This is a multi-stakeholder process: it is as much about generating discussion within your organisation around the values and principles as about close reading of language to work out whether two things align. At that point, you’ve hopefully got not just a wheel but a mode of transport — something that can help you move forward, without causing any harm.

Making it real

To work, the principles need to be “operationalised”. In this case, we helped charities begin that journey by listing questions to ask when starting a new digital health project. We have written a shorter blog touching on that process. Beyond that, there is the application of those questions in the face of uncertainty, and the important process of making and recording decisions around the ethics and risk-mitigation of a given project or application — in digital health or elsewhere.
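As a hypothetical sketch of what operationalising might look like, a simple structure can tie each principle to questions asked at project kick-off, with answers recorded so that ethical decisions leave a trail. The questions below are paraphrased from the spirit of this post, not quoted from the AMRC list.

```python
# Hypothetical sketch of an operationalised checklist: each principle
# carries questions to ask at project start, and answers are recorded.
# The questions are illustrative only, not AMRC's published list.

CHECKLIST = {
    "explicability": [
        "Can we explain the tool's recommendations to a patient?",
        "Who is accountable if the tool gets it wrong?",
    ],
    "proportionality": [
        "Is the data we collect proportionate to the expected benefit?",
    ],
    "sustainability": [
        "What happens to users if the project loses funding or support?",
    ],
}

def record_review(project: str, answers: dict) -> dict:
    """Pair each question with its recorded answer, flagging any gaps."""
    review = {"project": project, "responses": []}
    for principle, questions in CHECKLIST.items():
        for q in questions:
            review["responses"].append({
                "principle": principle,
                "question": q,
                "answer": answers.get(q, "UNANSWERED - revisit"),
            })
    return review

review = record_review(
    "medication reminder app",
    {"Who is accountable if the tool gets it wrong?": "Charity + vendor, per contract"},
)
print(review)
```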

And all of this is a living process, to be repeated as the project (inevitably) shifts between conception and delivery, as societal ethical norms change, and as the political and tech landscape shifts the calculus of risks and benefits.

DataKind UK are working to apply our framework-in-context approach to other charities. If you are interested in processes for working ethically in AI and data — or if you have any comments or feedback — we’d love to hear from you.

DataKind UK is a charity that provides free data science support to social change organisations.

Christine Henry is Ethics Committee lead for DataKind UK. She is a freelance data ethics consultant, analyst and data product manager.

Tracey Gyateng is Data Science Manager at DataKind UK and works with social change organisations to use data (both qualitative and quantitative) for decision making.
