Illustration by Dominik Heilig

On Algorithmic Transparency: Why is it important and how can we shed light on AI black-boxes?

Lucidminds AI
Jun 17, 2019

Algorithmic Transparency is one of the three core values of Lucidminds AI. We believe that (non-technical) end-users of algorithms have not only a right to explanation (as stated in the GDPR, though not yet legally binding), but also a right to control and clarity over key parameters, input-output relations, and the tailored recommendations they receive.

Algorithms are widely used for decision making, as automated systems with direct and significant impacts on our lives, in fields such as housing, trading, employment, commerce and much more. Because these are complex systems, their models and algorithms are technically very challenging, and their explanations may offer little meaningful information to their users. On the other hand, algorithm authors (corporations and companies, in this case) may be reluctant to share the details of their algorithms, because doing so could reveal proprietary and key business strategies, or allow others to manipulate and exploit their systems. When all these points are weighed during development, algorithms naturally turn into AI black boxes.

Nevertheless, at Lucidminds AI, we see a large practical space where Algorithmic Transparency brings a triple win for businesses, users, and society.

The Bias Factor

Hiring is a hard and very important job for almost every company. Finding new colleagues who share the values of the organizational culture is a big challenge, and it requires a lot of time, knowledge and resources. To address parts of this problem, an algorithm within recruitment software can be very useful. While designing such algorithm(s), designers have to make difficult decisions that directly affect many people’s lives.

If a candidate’s application gets rejected by an algorithm, a technical explanation won’t be enough for any of the end-users to understand the reason for the rejection. In non-technical recruitment scenarios, recruitment managers are obliged to explain the reasoning behind a rejection, but algorithms are not yet under any such obligation. Transparency therefore becomes a key factor. (But transparency to whom, and for what purpose?)

A candidate should have the right to know why they were rejected, and end-users would like to know how the algorithm considers:

  • The ethnicity of the candidates
  • Gender pay gap
  • Age and experience
  • Skills matching
  • Values matching
  • Personal, physical attributes
  • Educational background

If such algorithms are designed around the criteria of a single demographic and only work for those specific attributes, they can cause disastrous results, especially in the wrong hands.
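To make this concrete, here is a minimal sketch of the kind of audit that could surface such skews. Everything in it is an assumption for illustration: the screen_candidate rule stands in for a real model and the data is made up; the point is only that per-group acceptance rates are something an algorithm’s users could be shown.

```python
# Minimal bias-audit sketch. `screen_candidate` and the data below are
# hypothetical stand-ins, not a real recruitment model.
from collections import defaultdict

def acceptance_rates(candidates, decide):
    """Compare the share of positive decisions per demographic group."""
    accepted, total = defaultdict(int), defaultdict(int)
    for c in candidates:
        group = c["gender"]            # sensitive attribute under audit
        total[group] += 1
        accepted[group] += decide(c)
    return {g: accepted[g] / total[g] for g in total}

def screen_candidate(c):
    # Toy decision rule standing in for the real algorithm.
    return 1 if c["skills_match"] >= 0.7 else 0

candidates = [
    {"gender": "f", "skills_match": 0.9},
    {"gender": "f", "skills_match": 0.6},
    {"gender": "m", "skills_match": 0.8},
    {"gender": "m", "skills_match": 0.75},
]

print(acceptance_rates(candidates, screen_candidate))
# {'f': 0.5, 'm': 1.0} -> a gap that deserves an explanation
```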

Control over parameters, input-output relations

Just as candidates should have the right to know the reasoning behind their application outcomes, recruitment managers should have the right to know the parameters of the algorithm and how changes in the inputs affect its output. That way, they can steer the algorithm to fit the circumstances as they see fit, rather than leaving full control to the algorithm itself.
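One way to make input-output relations tangible is a simple sensitivity report: nudge each input and show how much the output moves. The sketch below assumes a hypothetical linear scoring model; a real system would be more complex, but the interface idea is the same.

```python
# Sensitivity sketch: how much does each input move the output?
# `score` is a hypothetical linear model, not a real recruitment system.
def score(candidate, weights):
    return sum(weights[k] * candidate[k] for k in weights)

def sensitivity(candidate, weights, delta=0.1):
    """Report the output change when each input is nudged by `delta`."""
    base = score(candidate, weights)
    return {
        k: score({**candidate, k: candidate[k] + delta}, weights) - base
        for k in weights
    }

weights = {"skills_match": 0.6, "values_match": 0.3, "experience": 0.1}
candidate = {"skills_match": 0.8, "values_match": 0.5, "experience": 0.4}

print(sensitivity(candidate, weights))
# roughly {'skills_match': 0.06, 'values_match': 0.03, 'experience': 0.01}
```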

Video streaming companies such as Netflix and YouTube provide recommendations to their users based on the content they consume on the platform. But what if users would like recommendations that differ by, say, 70% from what they usually watch? Do they have the right to choose that? If so, how?
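As a sketch of what such a choice could look like, imagine a user-facing “novelty” knob that controls the share of recommendations drawn from outside the user’s usual taste. The item pools and the 70/30 split below are hypothetical.

```python
# Novelty-knob sketch: with novelty=0.7, roughly 70% of recommendations
# come from outside the user's usual taste. All data is hypothetical.
import random

def recommend(familiar, exploratory, n=10, novelty=0.7, seed=42):
    """Mix familiar and exploratory items in a user-chosen ratio."""
    rng = random.Random(seed)
    n_new = round(n * novelty)
    picks = rng.sample(exploratory, n_new) + rng.sample(familiar, n - n_new)
    rng.shuffle(picks)
    return picks

familiar = [f"drama_{i}" for i in range(20)]
exploratory = [f"documentary_{i}" for i in range(20)]
print(recommend(familiar, exploratory))  # 7 documentaries, 3 dramas
```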

Summary: Counterfactual Explanations

In some cases, opening AI black boxes may be very difficult. What can be done then? Articles by Sandra Wachter, Brent Mittelstadt and Chris Russell show that providing Counterfactual Explanations, which inform users about what changes would lead to a desired outcome, is useful here.

For example, if a candidate’s job application is rejected, the explanation of why they were rejected can be accompanied by what they could change to reverse that decision.

“In the existing literature explanation typically refers to an attempt to convey the internal state or logic of an algorithm that leads to a decision. In contrast, counterfactuals describe the external facts that led to that decision,” says Wachter.
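In code, a counterfactual explanation can be as simple as searching for the smallest change that flips the decision. The sketch below assumes a hypothetical threshold model; its output can then be phrased as “had your skills match been 0.75 instead of 0.5, you would have been accepted.”

```python
# Counterfactual sketch in the spirit of Wachter et al. The decision
# rule is a hypothetical stand-in for a real model.
def accepted(c):
    return 0.6 * c["skills_match"] + 0.4 * c["experience"] >= 0.6

def counterfactual(c, feature, step=0.05, max_value=1.0):
    """Increase one feature until the decision flips, if it ever does."""
    value = c[feature]
    while value <= max_value:
        if accepted({**c, feature: value}):
            return {feature: round(value, 2)}
        value += step
    return None  # no counterfactual along this feature

candidate = {"skills_match": 0.5, "experience": 0.4}
print(accepted(candidate))                      # False -> rejected
print(counterfactual(candidate, "skills_match"))
# {'skills_match': 0.75} -> the change that would reverse the decision
```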

At Lucidminds, we ensure that users of our algorithms have control and clarity over key parameters, input-output relations, and the tailored recommendations they receive.

Open Questions

1. How can Algorithmic Transparency become part of the design process?

2. Can privacy be built into the nature of algorithms?

3. What are the long-term effects of algorithmic transparency on free speech?

4. Who should be the referee in this complex field?

Oguzhan Yayla, Co-Founder & CTO, Lucidminds AI

Acknowledgments: We’d like to thank Bülent Özel, Dominik Heilig, and Hamza Zeytinoglu for their comments and contributions to the article.
