Towards AI Transparency: Can a Participatory Approach Work?

Reshaping Work
7 min read · Oct 21, 2021


By Ivana Bartoletti, Global Chief Privacy Officer at Wipro and Visiting Policy Fellow at Oxford Internet Institute

*This blog is based on the lecture delivered within Reshaping Work’s Multistakeholder Dialogue Project.*

We are seeing an increased demand for accountability as well as for algorithmic transparency. This demand is coming in particular from regulators, policy makers and the general public, as awareness of how inequality and discrimination can be hardwired into AI systems has become mainstream.

However, meaningful accountability is far from simple, and a reductionist approach may fail exactly those who need that accountability the most. That is why transparency and openness need to be discussed in a much wider context: institutional, technical, and political.

There are three key points when discussing transparency around AI systems:

1) Trade secrets, including the intersection between trade secret legislation and privacy law. It is very difficult for companies, whose IP often constitutes their business, to openly share their algorithms in any form. That is why one of the key areas of research in the wider legal community at the moment is exactly how copyright law can be combined with trade law, trade secrets, data privacy legislation, and anti-discrimination legislation.

2) Scale of numbers and data entries: algorithmic data is very difficult to understand in an employment tribunal context, for instance. Even where there is disclosure, it is hard to comprehend the scale of the numbers and the sheer size of the data inputs.

3) Complexity of algorithms for lay people: one of the most important questions is how we can create a bridge between different professions. It is essential to train a new generation of legal professionals able to understand how algorithms work and to speak the same language as data scientists and engineers, because that is the only way to really dissect the “machine”. If trade unions had these skills in house, for example, they could perform this role. The wider issue of complexity concerns not just the experts who will examine the data and the workings of a machine, but also people in general, as these systems are very difficult to present to the public.

When it comes to transparency itself, one issue we face is what transparency actually means. This is part of a more complex discussion about transparency’s intersection with two other notions: lawfulness and fairness. Transparency is closely related to both concepts; we always ask the questions “Is it lawful?” and “Is it fair?”. A machine which is perceived as not transparent is also often perceived as unfair.

Another point to understand is that transparency and explainability are two rather different concepts. Transparency has to be interpreted in a much wider context, which may incorporate some element of explainability. It also relates to tracing where data comes from and where it goes, similar to what supermarkets do with products: they can trace the production supply line and see where the ingredients come from. Transparency is very much contextual and depends heavily on the audience to which the organization is trying to explain the algorithm.

The Italian Corte di Cassazione has recently stated very clearly that consent is not valid if the algorithm is not transparent. As you can imagine, this ruling is rife with complexity, because consent, in my view, is a very complex thing in itself. Too often it puts the onus onto the user, whereas what we need is a clear shift in the burden of responsibility.

In any case, consent is hardly a viable legal basis in an employment context, where a great deal of algorithmic management of hiring and performance is happening.

Another very interesting case in this regard relates to Deliveroo, where the Italian data protection authority concluded that the company had not performed a data protection impact assessment, which is essential for understanding and identifying whether there is a risk of unfair outcomes arising from the data and the system that uses it. The failure to perform the data protection impact assessment was in itself a breach of the General Data Protection Regulation (GDPR).

From a GDPR standpoint, there is a requirement to embed what I call “structural compliance mechanisms” within the algorithm: data protection impact assessments (Article 35), codes of conduct (Article 40) and, in certain situations, certification (Article 42). Talking about the European AI Act, we should note that it is not “the law for AI”, but rather a law that aims to regulate high-risk artificial intelligence systems. That does not mean that regular privacy legislation, non-discrimination law and all other laws no longer apply to these systems; they still do.

The other big concept is the limit of human oversight. Human oversight is the notion that humans need to remain at the “steering wheel” of AI in order to provide accountability and transparency. Relying too much on human oversight is a fallacy, and risky, because discrimination is essentially human hate. How can we trust humans to steer us away from the hardwiring, the coding and the perpetuating of discrimination that is all of human creation? We should also consider automation bias: research has shown that those who rely on a machine tend to trust that machine. In that case, will humans be able to effectively scrutinise the output of the machine? We have to be really careful not to buy into the concept of human oversight as a silver bullet, because it is a very complex and a very dangerous one.

What does transparency mean, and how can we navigate all these considerations in a participatory way? Looking at the system for data protection through deliberation and negotiation, we should focus on transparency in different contexts. In the context of employment, software is used to determine promotion, performance review, and so on. Questions that need to be addressed in this sort of context include: What tool will the employer bring in, and how will its usage be announced? What do the employers, and what do the employees, want in terms of transparency? What are the elements of contestability, meaning how can the decision of the tool or machine be contested? Can only the final outcome be contested, or is contestability built into the process? And, finally, how will it be communicated how the algorithm embeds the choices made as it constantly evolves through the inputs of participation?

The decision on what to explain, whether complete transparency or a selection of the information that is most important or useful to understand, must be made in a participatory manner. All this further depends on the product domain, the user groups involved, and the context in which the algorithm is used. If there is a trade-off between transparency and accuracy, how is that trade-off going to be navigated? Some users may be more inclined towards accuracy, others will want more transparency. The workforce will need access to the system in order to decide. In a real-world scenario, transparency really requires individual solutions based on participatory design.

Once it is decided what to explain, we need to agree on how to do it and how to support the continual learning process. I am an advocate of an internal transparency team that is involved in defining which key components of the algorithm should be made transparent in the user interface. It would navigate issues such as higher transparency coming at the cost of visual clutter and cognitive load. The team would also elicit ideas and beliefs about what should be communicated from participants across a representative segment of the workforce. The internal transparency team would then work on how the defined operating model can be realised through user interface design. For instance, it may build several prototypes to elicit conversations, ask comprehension questions to assess understanding, and use visualisations and on-demand explanations.

Last but not least, there are limits to the participatory approach to transparency and to the definition of transparency itself. One of these limits is “participation washing”: how do we make sure that participation is effective and committed, and that we really make it worthwhile? The reason we do not yet have best practices, or documented failures, of participatory design is that we are so new to this field; we do not have an analysis of the social and cultural elements that may have led, or could lead, to the failure of a participatory design approach.

Bibliography

  • Lee, M.K., Kusbit, D., Kahng, A., Kim, J.T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas, A., Procaccia, A.D.: WeBuildAI: Participatory Framework for Algorithmic Governance. Proceedings of the ACM on Human-Computer Interaction 3(CSCW), 1–35 (Nov 2019)
  • Ananny, M., Crawford, K.: Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3), 973–989 (Mar 2018)
  • Sloane, M.: Participation-washing could be the next dangerous fad in machine learning. MIT Technology Review

Learn more about Ivana Bartoletti’s work: www.ivanabartoletti.co.uk

Learn more about Reshaping Work’s Multistakeholder Dialogue Project: www.dialogue.reshapingwork.net

