Digital Council’s Marianne Elliott and Colin Gavaghan presenting to New Zealand Government Ministers, Hon. James Shaw (Statistics) & Hon. Kris Faafoi (Digital Public Services).

Automated Decision-Making: Early Insights from our Research Project

--

By Marianne Elliott and Colin Gavaghan, Digital Council research leads.

On behalf of the Digital Council and our research partners, the Brainbox Institute and Massey University’s Toi Āria: Design for Public Good, we’re excited to share early insights arising from this year’s major research project into automated decision-making.

What is automated decision-making?

In May, we announced this year’s research topic would look at automated decision-making as a case study in trust.

Broadly speaking, automated decision-making refers to decision-making processes where some aspects are carried out by computer programmes.

It often has a data analysis component.

When people talk about using algorithms, machine learning, personal data collection and use, data science, or predictive modelling, they are referring to processes that include automated decision-making to some extent.

Automated decision-making has been around for decades and is part of many of our day-to-day activities.

You probably encountered automated decision-making today without even realising it.

As more data is collected and stored about people and the wider world, and computing power and storage increases, so do the potential applications and impacts of automated decision-making.

Some applications of automated decision-making have been controversial because of potential bias or other negative effects on people.

An example is facial recognition technology, some uses of which can exhibit systemic racial and gender bias; the technology has consequently been banned in some American cities.

Where and how is automated decision-making used?

There is a broad range of applications for automated decision-making.

We might interact with some of these every day, while others are fairly new and are still being developed.

Examples include:

  • music and video streaming services that give recommendations about what to watch or listen to next
  • online pricing systems that determine how much you are charged for a flight or hotel room
  • customer service chatbots which answer questions about a business at any time of the day or night
  • smart home speakers and devices which carry out voice-based instructions
  • medical diagnosis systems that identify potential medical conditions from scans or X-rays.
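To make the idea concrete, here is a purely illustrative toy example of our own (not drawn from the research): even a short, rule-based program that sets prices without a human in the loop, like a simplified version of the online pricing systems above, counts as automated decision-making. The function name, rules and thresholds are invented for illustration.

```python
# Toy illustration: a simple rule-based pricing decision.
# The rules and numbers here are invented, not from any real booking system.

def ticket_price(base_price: float, seats_left: int, days_to_departure: int) -> float:
    """Return a price adjusted by simple demand rules."""
    price = base_price
    if seats_left < 10:          # scarcity raises the price
        price *= 1.5
    if days_to_departure < 7:    # last-minute bookings cost more
        price *= 1.2
    return round(price, 2)

print(ticket_price(100.0, 5, 3))    # a scarce, last-minute seat: 180.0
print(ticket_price(100.0, 50, 30))  # an early booking keeps the base price: 100.0
```

Real systems are usually far more complex, often using statistical models rather than hand-written rules, but the decision is automated in the same basic sense: a program, not a person, determines the outcome.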

Key insights from our research so far

Here are a few of the things we have found really interesting from our research so far.

We will test some of these insights further in the next part of our research and outline our findings in more detail in the final December report.

(1) Automated decision-making is not one concrete thing or idea

Automated decision-making is an umbrella term that refers to a wide range of systems and processes.

Each application will have differing levels of complexity, types of data inputs, and impacts on people and society.

This diversity means there is not one set of risks and benefits that cover all automated decision-making applications.

Risks, benefits and trust levels are context-specific, and will likely vary with the particular application of automated decision-making, its potential impacts, the types of data and technology used, and the organisations implementing the systems.

(2) Humans play an important part in automated decision-making systems

The idea of “automation” brings to mind computers doing their own thing with little human intervention and perhaps even a level of agency.

However, throughout an automated decision-making process, there is always some level of human interaction.

For example, all automated systems are designed and built by people, and the processes surrounding automated decision-making are developed, implemented and communicated by humans.

To ensure trust in automated decision-making systems, we propose that there must also be trust in the human elements of a system.
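One common way of keeping humans in an automated decision-making process is to let the system decide only when it is confident, and otherwise route the case to a human reviewer. The sketch below is our own hypothetical illustration of that pattern; the function names and threshold are assumptions, not anything described in the research.

```python
# Hypothetical human-in-the-loop sketch: the threshold and routing rules
# are invented for illustration only.

REVIEW_THRESHOLD = 0.9  # assumed confidence cutoff

def decide(score: float) -> str:
    """Route a case based on a model's confidence score (0.0 to 1.0)."""
    if score >= REVIEW_THRESHOLD:
        return "approve automatically"
    if score <= 1 - REVIEW_THRESHOLD:
        return "decline automatically"
    return "refer to human reviewer"

print(decide(0.95))  # confident: approve automatically
print(decide(0.50))  # uncertain: refer to human reviewer
```

Even in this pattern, humans remain involved at several points: someone chose the threshold, someone reviews the uncertain cases, and someone is accountable for the rules themselves.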

(3) The relationship between trust and automated decision-making is complex

Part of the aim of our literature review was to unpick some of the ideas around trust, especially in relation to automated decision-making.

As expected, we found that it is a very complex area.

Trust is hard to define, and like many terms, people use it to mean different things.

We are using the working definition that trust is about accepting vulnerability to another party that has power over you, on the basis that they will act in ways that benefit you.

We will refine and test this definition further in the coming months.

It is important that our research recommendations do not move us towards a system that is trusted, without being trustworthy.

Trust can be built in ways that do not require the system itself to be trustworthy, for example through engaging branding and marketing.

Aspects that could make a system trustworthy include whether it is reliable and accurate, mitigates harmful bias, is transparent or explainable, and is safe and effective.

We will explore this area further, and we expect that desired levels of trustworthiness might vary depending on the situation in which automated decision-making is being used.

Topics to consider as we complete our research

During the literature review and initial stakeholder interviews, a few key topics arose that will be useful to keep in mind when carrying out the next stages of research:

  1. Automated decision-making involves data, and frequently personal and environmental data. Because of this, we cannot think about automated decision-making and trust in an Aotearoa New Zealand context without thinking about Te Tiriti o Waitangi, the specific qualities of Māori data, and concepts like Māori data sovereignty. There is an opportunity to work with experts in these areas, and engage further with Māori, to ensure that any recommendations and next steps from this research properly reflect the needs and aspirations of Māori.
  2. There are lots of high-level principles already in place around aspects of automated decision-making, both globally and in an Aotearoa New Zealand context. There is an opportunity to think about practical steps and guidance for implementation of systems rather than adding a new set of high-level principles.
  3. Much of our focus to date has been around ensuring trust from the people who are subject to automated decision-making systems. There is also an opportunity to further investigate the levels of trust needed by people implementing automated decision-making systems. For example, what needs to be in place for senior decision-makers to be comfortable with implementing automated decision-making systems to solve problems or grasp opportunities in their organisation?

What is next?

In December, we will produce a report to Ministers with a set of recommendations.

We will also explore other information or resources that we could produce to support the report and make the biggest possible positive impact for people using and adopting digital and data-driven technologies in Aotearoa New Zealand.

Find out more about this work

Read the Digital Council interim report on trust and automated decision-making online.

Watch the New Zealand Sign Language (NZSL) report summary on trust and automated decision-making online.

A final thanks

We’d like to take this opportunity to thank our research partners who’ve worked alongside our team to date. We appreciate your work and look forward to collaborating further over the coming months.

About the authors: Marianne Elliott & Colin Gavaghan

Marianne Elliott is a researcher, writer and consultant who advocates for evidence-based solutions to the big challenges facing human rights and democracy. She was co-founder of ActionStation and is currently Co-Director of The Workshop. Marianne is a trained human rights lawyer, with experience in building online communities, social entrepreneurship and storytelling.

Marianne Elliott, Digital Council research lead.

Colin Gavaghan is the inaugural New Zealand Law Foundation Chair in Law and Emerging Technologies at the Faculty of Law, University of Otago, where he also researches and lectures in medical and criminal law. Colin is the principal investigator in a multi-disciplinary research project examining the legal and social implications of artificial intelligence and algorithmic decision-making for New Zealand.

Colin Gavaghan, Digital Council research lead.

--

Digital Council for Aotearoa New Zealand

Seven diverse voices on the big issues affecting New Zealand’s digital future. Find out more at www.digital.govt.nz. Join the conversation @digitalcouncil_