The road to AI is paved with good intentions

Libby Kinsey
Published in Digital Catapult
Jan 22, 2019 · 6 min read

How do we bridge the gap between the ‘what’ of responsible AI and the ‘how’?

There are a lot of AI ethics guidelines, but most lack comprehensive tools or processes for implementation, and the resources that do exist may still be at a research stage, or difficult to identify or access.

At Digital Catapult, we have a hypothesis that an independent ‘Responsible AI Tech Testbed’ could help by highlighting what resources are available, testing research in the real world, and co-developing resources that address industry requirements. This article identifies key areas that need to be discussed and defined if AI practitioners are to start resolving some of the thorny ethical issues that AI presents.

AI practitioners feel ill-equipped to tackle questions of ethics and social responsibility

“Our responsibility is not over when we have done with optimising the objective function. Whether or not you want to take responsibility, you’re going to be held accountable.”

Suchi Saria, Johns Hopkins University, NeurIPS 2018 Critiquing and Correcting Trends in Machine Learning workshop

The social implications of deploying AI-driven technologies have been in the news a lot lately and the dominant narrative is a negative one [1]. The result, rightly, is that working in machine intelligence today necessarily means grappling with questions of ethics and social responsibility.

This is a task that practitioners typically feel ill-equipped to tackle. Why? Because ethical considerations are not part of the ‘curriculum’. Because technology may not be the solution, and technologists may not have all the levers. Because whilst much has been said about what is desirable from an ethical perspective, less is available about how to achieve this in practice.

Moreover, it shouldn’t need saying, but most practitioners are well-intentioned. They want to avoid negative consequences from the technology they develop, and/or to work on technologies with positive effects. It’s frustrating that there is so little guidance and best practice on how to do this.

So, what can practitioners do?

Practitioners need to accept that there is ‘no neutral position’

There’s no neutral position. We have to make trade-offs and make a stand. Is AI a forcing function that means corporations have to be more explicit about where they stand?

Facebook employee working on ethics, commenting at the NeurIPS 2018 Workshop on Ethical, Social and Governance Issues in AI

The uncomfortable truth about grappling with ethics is that there is no universal ‘right answer’ and that individuals and companies will have to define (and defend) their choices. They will have to make trade-offs and communicate them (for example, that one can mitigate bias, not eradicate it; or that one can ensure data privacy with a high degree of likelihood, not guarantee it).

This is not easy, but there are a variety of AI ethics principles and pledges (including Digital Catapult’s Ethics Framework [2]) that can help practitioners to think about values, benefits and risks in relation to the technologies and businesses that they are developing. If these are too abstract, there are some really good books [3] that illuminate some of the real-world harms that have resulted from technology decisions, and could help to focus the mind.

How can we ensure that values are implemented and monitored?

The role of good engineering

The dream is that, given a set of values, responsible AI development can be incorporated into good engineering practice. That is, that we can operationalise and automate much of it. But even to the extent that this is possible, we are still very far from achieving it.

What is the engineering objective?

To get to an engineering solution, we first need tools, techniques and processes that address questions of fairness, explainability, interpretability, transparency, robustness, diversity, inclusion, safety, privacy, accountability…

Herein lies the first problem: these terms are rich concepts. They’re ill-defined and over-loaded. They’re inter-related. They mean different things to different communities.

Immature solutions

Industry is adopting tools whilst the research community is still far from coming to a consensus — this might lead to a false sense of security that these things are solved.

Roel Dobbe, AI Now, NeurIPS 2018 Workshop on Ethical, Social and Governance Issues in AI

Where tools, techniques or processes relating to responsible development of AI exist, they might still be at a research stage, be contested, offer only partial solutions, or be inappropriate for real-world use. For example:

  • A number of methods [4] have been proposed for black-box model interpretability (such as Local Interpretable Model-agnostic Explanations (LIME), Shapley values, and saliency methods). What are they telling us? Which should be used? The academic literature is rich, but best-practice real-world application is lacking (see the sketch after this list).
  • Some proposed privacy-preserving training algorithms [e.g. 5,6] need a relevant public dataset to be available, which is unlikely in some major domains in which privacy is paramount.
  • Methods for mitigating bias might rely on access to protected characteristics that privacy-protection methods, or regulations such as GDPR, require to be concealed.
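
To make the first point concrete, here is a minimal sketch (not a recommendation) of applying one post-hoc interpretability method. It assumes the open-source shap package and a scikit-learn gradient-boosted classifier; even with tooling like this readily available, the practitioner is still left to decide which method to trust and how to act on its output.

```python
# Minimal sketch: Shapley-value attributions for a "black-box" model,
# assuming the open-source `shap` package and scikit-learn are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# Train a simple tree-ensemble classifier to act as the "black box".
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes (approximate) Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Per-feature attributions for the first prediction: sign and magnitude
# indicate how much each feature pushed the model's output up or down.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Note that a different method (LIME, say, or a gradient-based saliency technique) can attribute the same prediction quite differently, which is exactly the ‘which should be used?’ problem noted above.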

Problems without solutions

Proposed solutions may not exist at all. A recent survey [7] of 25 machine learning product teams across 10 major technology companies (focused on industry practice around fairness) found that ‘while the existing fair ML literature has overwhelmingly focused on algorithmic “de-biasing,” future research should support practitioners in collecting and/or curating representative datasets in the first place’.

Digital Catapult’s Machine Intelligence Garage programme currently supports startups to help translate responsible AI theory into practice [8]. This work has highlighted the dearth of resources and advice for well-intentioned founders who want to build ethical considerations into their businesses from day one, and crucially to do so in an efficient way.

Responsible AI is a process, not a thing

Building responsible AI-driven products and services is an ongoing commitment, not a tick-box exercise. Like machine learning itself, it will be iterative and evolve with feedback. Any engineering solutions need to reflect this.

Where we could start: an independent testbed

For practitioners, the day-to-day pressure is to ship product. The longer-term benefits promised by shipping products where ethical implications have been carefully considered and monitored are, frankly, difficult to measure. We urgently need to find out how to make these considerations easier to fulfil and begin to build evidence that the long-term benefits exist.

Wouldn’t the following be useful?

  • A directory of resources that are already available, along with an assessment of the state of their maturity, scope of application and limitations.
  • A means for all stakeholders — subjects, practitioners, researchers, policy-makers — to collaborate to solve the types of problems that are encountered in reality. This could mean testing and refining existing resources or prototyping new ones.
  • An evidence base (such as case studies and longer-term research) for the ROI of responsible AI, so that practitioners can better make the case for the required investment of time and resources.
  • A means to develop best practice guidance and disseminate it.

This could be an independent ‘Responsible AI Tech Testbed’, an entity that can highlight what resources are available, test research in the real world, and co-develop resources that address industry requirements.

Digital Catapult (along with the National Research Council of Canada) will be conducting a consultation in Q1 2019 to test and evaluate the demand, supply, and participants needed to build a Responsible AI Tech Testbed. As part of this, there will be two events, one in Ottawa on the 21st and 22nd of February and one in London on the 11th and 12th of March, to explore the creation of such a testbed. Details of both events will be available shortly.

If you’re interested in discussing this idea further, please drop me a line @libbykinsey.

Thanks to @Floridi, @SamCatBrown, @anat_elhalal, @balabanovic for their input.
