Trustworthy AI? Yes, by design

Daniela Ivanova
Published in Design Voices
Nov 8, 2017


AI must move beyond demanding blind faith and start demonstrating its trustworthiness

The promise of AI is that its perceived intelligence will magically produce solutions to problems we’ve long grappled with. However, we now find ourselves in the midst of a paradox: we are learning to use automation to simplify and enhance our lives by concealing complexity, yet at the same time we are evoking mistrust by ‘hiding the machine.’ When even today’s early, narrow-AI algorithms are black boxes, how can we trust the decisions they make, let alone those of the smarter algorithms we expect in the future?

Operational explainability

At the World Summit AI in October, Gary Marcus, professor of psychology at New York University, explained machine learning with a cartoon that compared the technology to pouring data into a big pile of linear algebra and collecting answers on the other side. What if the answers are wrong? “You just stir the pile until it starts looking right.” The truth about decision making in most of what we call AI can be discouraging. The need for explainable AI has made headlines and sparked heated discussions. From startups offering explainable credit underwriting and Netflix’s recommendation system showing us which movies we might enjoy, to NASA’s spacecraft helping people understand why an unmanned rover has taken a specific action, we’ve quickly come to realise that explainability of AI is a real need.
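To make this concrete, here is a minimal sketch of what the simplest kind of explainable decision might look like: a linear credit-scoring model whose verdict can be decomposed into per-feature contributions. The feature names and weights are invented purely for illustration, not taken from any real underwriting system.

```python
# A toy illustration of operational explainability: a linear scoring model
# whose decision can be broken down into per-feature contributions.
# All feature names and weights here are hypothetical.

FEATURES = {"income": 0.4, "debt_ratio": -0.6, "years_at_job": 0.2}

def explain_decision(applicant: dict, threshold: float = 0.0) -> None:
    # Each contribution is simply weight * input, so the score is fully
    # decomposable: the kind of answer a black box cannot give directly.
    contributions = {name: w * applicant[name] for name, w in FEATURES.items()}
    score = sum(contributions.values())
    verdict = "approve" if score > threshold else "decline"
    print(f"Decision: {verdict} (score={score:+.2f})")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>12}: {value:+.2f}")

explain_decision({"income": 1.2, "debt_ratio": 0.9, "years_at_job": 0.5})
```

Real models are rarely this transparent, which is exactly why techniques that approximate such decompositions have become a research field of their own.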

Designers are masters of explainability because it is an underpinning design principle. We’ve spent the recent decades of the Information Age making complexity human, simple and trustworthy. AI researchers and neuroscientists are already calling for a multi-disciplinary approach to artificial intelligence: the implications of AI are too big for us not to be building cross-disciplinary bridges first.

“Provably beneficial AI”

Operational explainability will presumably give us the logical path by which a machine arrived at a particular decision, but will it tell us why this decision is better for us than the alternatives?

Stuart Russell, a renowned AI researcher at UC Berkeley, introduced the term “provably beneficial AI” to stress the need for AI to be aligned with human values. He argues that if a machine’s only objective is to realise human values, if it is initially uncertain about what those values are, and if it learns about them from observing human behaviour, then we’ve got the right recipe for provably beneficial AI.
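In the spirit of that recipe, here is a drastically simplified sketch (not Russell’s actual framework, which is built on cooperative inverse reinforcement learning): a machine holds a probability over candidate objectives, starts out uncertain, and updates its belief from observed human choices. The candidate objectives, utilities and choice model below are all invented for illustration.

```python
# Toy sketch of value alignment under uncertainty: the machine does not
# presume to know the human's objective; it infers it from behaviour.
# All objectives, utilities and the choice model are hypothetical.

objectives = {
    "values_speed":  {"fast": 1.0, "careful": 0.2},
    "values_safety": {"fast": 0.1, "careful": 1.0},
}

# Uniform prior: initial uncertainty about which objective the human holds.
belief = {name: 0.5 for name in objectives}

def observe(human_choice: str) -> None:
    """Bayesian update: objectives under which the observed choice
    is more attractive gain probability mass."""
    for name, utils in objectives.items():
        likelihood = utils[human_choice] / sum(utils.values())
        belief[name] *= likelihood
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total

for choice in ("careful", "careful", "fast"):
    observe(choice)
    print(choice, {name: round(p, 3) for name, p in belief.items()})
```

Even this toy version exposes the design question hiding inside the maths: which behaviours count as evidence of our values, and who decides?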

This will be as much a technological challenge as a design problem. Reasoning machines will need to do so much of what people take for granted: evaluating contradictory factors, weighing up effort versus benefit, and compromising between outcomes for the different actors in a decision-making ecosystem. We don’t want to fall into the trap of playing God with AI; the risks are simply too high. We will need to adopt a stance where human-centred design helps ensure that AI is needed, relevant and assisting rather than overbearing. This will be a key ingredient in the recipe for creating algorithms that engender trust.

Playing catch-up

There will be times when transparency and clear benefits simply will not be enough. In our work, we’re seeing examples where people are naturally reluctant to give up control, regardless of how big the promise of automation is. It is as if our hearts are still catching up with new ways of doing things: we’re timid and mistrustful. It is only human. Trust takes time to build.

As the speed with which new algorithms pervade our lives increases, designing for AI will be more of a balancing act. When creating these new, powerful possibilities, we will need to balance our drive and desire for a new world against people’s vulnerability and comfort levels. Designing for trustworthiness will mean constantly playing a sort of tug of war: pulling the former back and pushing the latter forward.

AI still has a strong efficiency angle to it. Tech companies are pushing boundaries, and startups are increasingly coming up with new, more niche applications of machine learning. Efficiency and optimisation aside, designing for AI agents will benefit us in at least one very human way: it will expose our biases, cause us to rethink our values, and ultimately lead us to better understand ourselves.
