Algorithmic Transparency or Glass Cage?
--
By Tiago Vieira, European University Institute
The European Commission’s (EC) recently released proposal for a Directive on Platform Work represents a meaningful opportunity for a debate on working conditions, algorithmic management and workplace democracy, not only within platforms but in the labour market as a whole. Although algorithmic transparency is only briefly touched upon in the EC proposal, its importance merits further deliberation, especially regarding how it interacts with worker representation and participation.
Although AI-powered algorithms are relatively new, these artefacts are often, if not always, marked by impenetrable layers of information that render them opaque, which in turn prevents accountability for their usage (Aneesh, 2009). The fact that the EC acknowledges this is in itself a victory for all those arguing that the confidentiality imperatives of market competition cannot supersede fundamental human rights. In particular, Article 6 of the proposed Directive embodies this victory by requiring platforms to provide workers with information about the important dimensions of their labour process that are subject to algorithmic decision-making.
Several works to date (e.g., Burrell, 2016; Kaminski & Urban, 2021; Kitchin, 2017) have questioned the extent to which the code of algorithms is indeed replicable and subject to human scrutiny. Unless there is a clear provision forcing platforms to write their algorithms’ code in ways that humans can interpret and assess, and preventing the code from developing autonomously through self-learning mechanisms, this article, and all remaining provisions related to automated decision-making, may prove too hard to implement.
Let us nonetheless entertain the possibility that labour platforms start ensuring their code is designed in ways that effectively allow human interpretation. After all, doing so is in their own interest if they want to live up to the duties inscribed in Article 8, namely providing, upon workers’ request, clear written justifications for decisions made by automated means. However, the proposed Directive omits some important challenges:
- Not all platforms resort to full-fledged algorithmic opacity. A case in point is the food-delivery company Glovo. The information offered by Glovo is, in many ways, transparent. In essence, the platform organizes the most relevant dimensions of the labour process around a highly gamified set-up, in which workers earn points for their commitment to the company’s goals, good reviews, efficiency and seniority. The pursuit of points is paramount in workers’ lives, because points are the way to secure access to working slots and, hence, potential earnings (Vieira, 2020); a stylised sketch of such a scoring scheme follows this list. So, the problem is not that workers don’t know the key aspects that shape the algorithm. The real problems faced by Glovo couriers are twofold. First, earning points requires not only completing dozens of successful deliveries every month, but also ensuring that as many as possible receive good reviews from both restaurants and clients, which entails a heavy physical and psychological commitment. Second, the way the company unilaterally handles the criteria underlying each dimension of the overall score often creates a sense of arbitrariness, of discretionary changes to the rules of the game. This brings me to the second issue, which I elaborate on below.
- State agencies should play a role in ensuring algorithmic fairness; however, Article 7 encourages member states not to monitor the implementation of the preceding article themselves, but rather to “ensure” that each platform, through dedicated staff, assesses the impact of the technological devices it introduces into a labour process of its own design. Echoing the critiques made of the EU AI Act (Ponce, 2021), I see here another case of asking the fox to guard the henhouse. Remarkably, for all platforms that (now or in the future) operate like Glovo, Articles 6, 7 and 8 (related to the transparency and human monitoring of algorithms) are practically meaningless. This conclusion holds even for platforms that fall under the 500+ workers provision (Article 9(3)), which obliges employers to pay for an independent expert to examine the algorithm on behalf of workers.
- Consultation might not be enough to ensure workers’ protection. The challenges raised above could be much less prominent, were it not for the formulation of Article 9, which postulates the right to information and consultation as per Directive 2002/14/EC. Notably, in this instance, consultation means workers have the possibility to formulate a formal, written opinion on any “substantial changes” in the labour process, that is, a process confined to the “exchange of views and establishment of dialogue” (Article 2, paragraph g). Crucially, however, workers’ opinions are not binding. Going back to the example of Glovo, it is not hard to foresee how platforms will navigate this provision: inform, listen, and move on as intended from the beginning.
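To make the Glovo example concrete, below is a minimal sketch, in Python, of what a transparent yet unilaterally controlled scoring scheme might look like. All field names, weights and thresholds are assumptions chosen purely for illustration; they are not Glovo’s actual formula.

```python
# Hypothetical sketch of a gamified courier score. Every weight, field name
# and threshold below is an illustrative assumption, not Glovo's real system.
from dataclasses import dataclass


@dataclass
class CourierMonth:
    deliveries_completed: int  # successful orders this month
    positive_reviews: int      # good ratings from clients and restaurants
    total_reviews: int         # all ratings received
    peak_hours_worked: float   # hours logged in high-demand slots
    months_active: int         # seniority on the platform


def excellence_score(c: CourierMonth) -> float:
    """Combine commitment, reviews, efficiency and seniority into one score.

    The formula is fully legible, yet the platform alone sets the weights.
    """
    review_rate = c.positive_reviews / c.total_reviews if c.total_reviews else 0.0
    return (
        0.35 * min(c.deliveries_completed / 100, 1.0)  # volume of work
        + 0.35 * review_rate                           # client/restaurant ratings
        + 0.20 * min(c.peak_hours_worked / 40, 1.0)    # availability at peak demand
        + 0.10 * min(c.months_active / 24, 1.0)        # seniority
    )


def slot_priority(couriers: list[CourierMonth]) -> list[CourierMonth]:
    # Higher-scoring couriers choose working slots first in this sketch.
    return sorted(couriers, key=excellence_score, reverse=True)
```

Note that every component of this sketch is visible to the courier, yet the platform can re-weight or redefine any of them at will, changing who gets the best working slots without any loss of formal “transparency”.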
Summing up, the EC is, in general, moving in the right direction with this proposal. However, should it fail to recognize that the devil hides in the details (in this and other matters), its present efforts may be rapidly undermined by platforms’ well-known ability to circumvent the law, making this a wasted opportunity for truly meaningful change.
References:
Aneesh, A. (2009). Global labor: Algocratic modes of organization. Sociological Theory, 27(4), 347–370.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Kaminski, M. E., & Urban, J. M. (2021). The right to contest AI. Columbia Law Review, 121(7), 1957–2048.
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29.
Ponce, A. (2021). The AI Regulation: Entering an AI regulatory winter? Why an ad hoc directive on AI in employment is required. ETUI Research Paper, Policy Brief.
Vieira, T. (2020). Self-exploitation among platform delivery workers: The case study of Glovo. MA Dissertation, UPF Barcelona. Available at: https://repositori.upf.edu/handle/10230/45168
The opinions and views expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of Reshaping Work.