
Recourse, accountability and feedback loops

Aleksandr
Mar 29 · 2 min read

Recourse and accountability

Algorithms are used in decision-making systems across many areas of human life, and they can create unfair and discriminatory outcomes. The need to hold algorithmic systems accountable therefore cannot be overstated.

In a complex system, it is easy for no one person to feel responsible for outcomes. While this is understandable, it does not lead to good results.

As machine learning practitioners, we do not always think of it as our responsibility to understand how our algorithms end up being implemented in practice. But we need to.

Accountability promotes trust. It provides a path to justice: a way to identify and remedy unfair or discriminatory outcomes.

Accountability may be achieved through human audits, impact assessments, or governance via policy and regulation. Tech companies generally prefer self-regulation, but even they now recognise the need for external intervention. For example, certain decisions identified as high-risk may require review by a human.

But what should be done once damage has already occurred? So far there seems to be no clear process for remediation. Investigative journalists and certain research groups are doing their best to identify systems that are unfair or discriminatory, and thus to push for accountability and action. In some cases such systems have been withdrawn or modified. Many other situations, however, are less encouraging.

Feedback loops

A feedback loop is a process in which the outputs of a system are circled back and used as inputs. In data science, algorithms applied to human social and economic behavior create feedback loops that reinforce their own justifications for use.

Algorithms do not only model; they also create. For researchers grappling with the ethics of data analytics, these feedback loops pose the greatest challenge to our familiar tools for science and technology ethics.

The problem with algorithms based on machine learning is that if these automated systems are fed examples of biased decisions, they will end up perpetuating those same biases.

Examples include predictive policing, where patrols are directed to areas flagged by past arrest records, and recommender systems, where some online content becomes popular at the expense of other viewpoints. The sketch below illustrates the second case.
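To make the mechanism concrete, here is a minimal, hypothetical simulation of a recommender-style feedback loop. It is only a sketch under invented assumptions, not a model of any real platform: two pieces of content are equally appealing, but the item with a one-click head start is ranked first, receives most of the exposure, collects more clicks, and so keeps its top position indefinitely. All item names and numbers are made up for illustration.

```python
# A minimal, hypothetical sketch of a recommender-system feedback loop.
# Two pieces of content have identical true appeal; item_A merely starts
# with one extra historical click. All numbers are invented for illustration.
import random

random.seed(42)

TRUE_APPEAL = 0.10            # click probability when shown (identical for both items)
IMPRESSIONS_PER_DAY = 1000
TOP_SLOT_SHARE = 0.8          # the higher-ranked item receives most of the exposure

clicks = {"item_A": 6, "item_B": 5}   # a tiny historical imbalance

for day in range(1, 11):
    # The "model": rank items by past clicks and give the winner the top slot,
    # so yesterday's output (clicks) decides today's input (exposure).
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    impressions = {
        ranked[0]: round(IMPRESSIONS_PER_DAY * TOP_SLOT_SHARE),
        ranked[1]: round(IMPRESSIONS_PER_DAY * (1 - TOP_SLOT_SHARE)),
    }
    for item in clicks:
        clicks[item] += sum(random.random() < TRUE_APPEAL
                            for _ in range(impressions[item]))
    share_A = clicks["item_A"] / sum(clicks.values())
    print(f"day {day:2d}: item_A share of all clicks = {share_A:.1%}")
```

Running this, item_A's share of clicks climbs toward the exposure split (around 80%) within a few days, even though both items are equally appealing. Breaking such a loop usually means adding something the system would not do on its own, for example exploration (showing lower-ranked items some of the time) or regularly auditing exposure shares against a measure of quality that does not come from the loop itself.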

It is extremely important to keep in mind that this kind of behavior can happen, and to either anticipate a feedback loop or take positive action to break it when you see the first signs of it in your own projects.

It is also necessary to open a debate about which decisions should be derived from data at all, in a way that takes fundamental human rights and freedoms into account.

unpackAI

unpackAI is a nonprofit organization that makes AI and Deep Learning education as accessible as possible by offering free virtual bootcamps with a community-driven learning experience and the guidance of professional mentors. Follow us: https://www.linkedin.com/company/14590931/

Written by Aleksandr, unpackAI student.