[Figure: Applied AI ethics typology, with illustrative examples of where different tools and methods are plotted, and a colour map indicating the distribution of effort]

Mind the Gap: from AI ethics principles to practices

This post introduces an initial attempt to visualise the applied AI ethics field as a whole, in order to spark discussion and debate and to highlight potential new research questions and challenges. Our research paper, ‘From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices’, is available as a preprint on arXiv [1].

--

In response to ethical concerns, a number of entities (more than 70 at the last count) have created codes for the responsible development and application of AI.

However, there is a gap between aspiration and viability, and between principle and practice. Into this gap, we see an increasing number of methodologies, techniques and processes (‘tools’) being developed that seek to operationalise and automate adherence to, and monitoring of, good ethical practices when developing and deploying AI-driven products and services.

When should they be used, and what are the gaps?

We found that there is an uneven distribution of effort in the applied AI ethics space, and that the stage of maturity (readiness for widespread use) of the tools is usually low.

Methodology

“Tech workers want more time and resources to think about the impacts of their products. Nearly two-thirds (63%) would like more opportunity to do so and three-quarters (78%) would like practical resources to help them.”

People, Power and Technology: The Tech Workers’ View, DotEveryone, May 2019

Digital Catapult (Anat Elhalal and I) collaborated with Jessica Morley and Luciano Floridi from the Oxford Internet Institute, University of Oxford, to research the state of applied AI ethics tools and to contribute to closing the gap between principles and practices.

To do this, we constructed a typology that may help practically-minded developers ‘apply ethics’ at each stage of the pipeline, and that signals to researchers where further work is needed. The focus was on machine learning (ML), but we hope the results will be readily applicable to other branches of AI.

Our methodology was as follows (more details in [1]):

  1. We designed a typology of applied AI ethics tools, constructed as a grid with ‘ethical principles’ [2] on one axis and the stages of the ‘AI application lifecycle’ [3] on the other, to encourage practitioners to move regularly between design decisions and ethical principles.
  2. We conducted a literature review, which returned more than 1,000 results, and refined this list (articles, blogs, reports, websites, online resources and conference papers were checked for relevance, actionability by ML developers, and generalisability across industry sectors) to sources that make a practical or theoretical contribution to answering the question: ‘how to develop an ethical algorithmic system?’
  3. We classified each of these tools within the typology (some appear in more than one location); a minimal sketch of the resulting structure follows this list.
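To make the grid concrete, here is a minimal Python sketch of the typology as a data structure. The five principles follow Floridi and Cowls [2]; the lifecycle stages beyond those named in this post (‘building’, ‘deployment’ and ‘monitoring’) are assumptions based on the ICO lifecycle [3], and the tool entries are hypothetical placeholders, not items from the actual typology.

```python
# A minimal sketch (our illustration, not code from the paper) of the
# typology as a grid of (principle, lifecycle stage) cells, each holding
# the tools classified there.

# The five principles follow Floridi & Cowls [2].
PRINCIPLES = [
    "beneficence",
    "non-maleficence",
    "autonomy",
    "justice",
    "explicability",
]

# Stages named in this post, plus assumed later stages from the ICO
# lifecycle [3] ('building', 'deployment', 'monitoring' are assumptions).
LIFECYCLE_STAGES = [
    "business and use-case development",
    "design phase",
    "training and test data procurement",
    "building",
    "testing",
    "deployment",
    "monitoring",
]

# One cell per (principle, stage) pair; a tool may appear in many cells.
typology = {(p, s): [] for p in PRINCIPLES for s in LIFECYCLE_STAGES}

def classify(tool_name, cells):
    """Place a tool in every (principle, stage) cell it is relevant to."""
    for cell in cells:
        typology[cell].append(tool_name)

# Hypothetical example entries, for illustration only.
classify("bias-audit checklist",
         [("justice", "design phase"), ("justice", "testing")])
classify("model documentation template",
         [("explicability", "deployment")])

# The 'colour map' of effort is simply a count of tools per cell.
effort = {cell: len(tools) for cell, tools in typology.items()}
```

Representing the typology this way also makes the ‘colour map’ in the figure above trivial to compute: it is just the count of tools per cell.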

What we found

There is an uneven distribution of effort across the typology: many boxes are very sparsely populated, and only a couple contain many different tools. Currently, for all of the ethical principles, most attention is focused on interventions at the early input stages (business and use-case development, the design phase, and training and test data procurement) or at the model testing phase.

We do not claim to have captured every relevant initiative in our search, but this observation tallies with our experience as practitioners too. We discuss in the paper what the reasons might be for the uneven distribution of effort, including the perceived tractability of the problem (e.g. there is a lack of clarity around key terms such as ‘fairness’), and the role that regulation (e.g. GDPR) has played in stimulating innovation in some areas compared to others.

Those tools that do exist are usually immature and/or inaccessible, for a number of possible reasons. They may still be at the research stage; they may be available as open-source or proprietary implementations whose scope of use is ill-defined; or they may be difficult to use or to incorporate into the workflow. Defining the scope of use is particularly difficult, not because the tools themselves have ambiguous outputs, but because the attributes they seek to measure or assure are values (think ‘fairness’ or ‘explainability’) that are inherently subjective and context-dependent.
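As a toy illustration of that context-dependence (the data and metric choices here are ours, not from the paper), two common formalisations of ‘fairness’ can give opposite verdicts on the same model outputs:

```python
import numpy as np

# Synthetic toy data: outcomes, model decisions and a sensitive attribute.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])    # actual outcomes
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])    # model decisions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def selection_rate(mask):
    """Share of the group that receives a positive decision."""
    return y_pred[mask].mean()

def true_positive_rate(mask):
    """Share of the group's actual positives that the model accepts."""
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

a, b = group == "a", group == "b"

# Demographic parity: both groups are selected at the same rate (gap 0.0).
print(abs(selection_rate(a) - selection_rate(b)))          # 0.0

# Equal opportunity: actual positives in group b fare worse (gap 0.5).
print(abs(true_positive_rate(a) - true_positive_rate(b)))  # 0.5
```

Neither metric is ‘the’ definition of fairness; which one matters depends on the deployment context, which is exactly why the scope of use of such tools is hard to pin down.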

You can search the full typology here. To propose new entries or edits to entries that have already been categorised, please email appliedAIethics@digicatapult.org.uk.

Our recommendations

Our research showed an uneven distribution of effort across the ‘Applied AI Ethics’ typology, combined with low maturity of existing tools. This suggests areas where more research and development of tools and methods is required, and highlights some of the challenges involved.

Sustainable and responsible AI adoption will require more coordinated and sophisticated approaches to translating ethical principles into design protocols. We know that ML practitioners want these resources, and that widespread adoption requires them to be practical (accessible and easy to use), especially while strong evidence for the competitive advantage of more ethically aligned AI is not yet available.

Multi-stakeholder collaboration is essential to define and address practical challenges in applied AI ethics, and to ensure that any resulting tools are trustworthy and impactful.

This research was a companion piece to a series of structured networking activities coordinated by Digital Catapult and the National Research Council of Canada in Q1 2019. These brought together technologists, industry, lawyers, standards bodies, academics, civil society groups and other interested parties to examine how to support practitioners in translating ethical aspirations into values-aligned AI, building technologies with positive effects and avoiding negative consequences from the technology they develop.

[1] Morley, J., Floridi, L., Kinsey, L. and Elhalal, A. (2019). ‘From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices’. arXiv preprint: https://arxiv.org/abs/1905.06876 (the full reading and resource list is also available at https://medium.com/@jessicamorley/applied-ai-ethics-reading-resource-list-ed9312499c0a)

[2] Floridi, L. and Cowls, J. (2019). ‘A Unified Framework of Five Principles for AI in Society’. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1

[3] ICO’s AI Auditing Framework: https://ai-auditingframework.blogspot.com/2019/03/an-overview-of-auditing-framework-for_26.html (see the ‘AI lifecycle’ figure)

Written by Libby Kinsey, Digital Catapult