Lost in translation? Invitation to address the challenges of interdisciplinary cooperation in the FAT community

Danny Lämmerhirt
Dec 11, 2019

This blogpost was written by Aviva de Groot, Danny Lämmerhirt, Evelyn Wan, Goda Klumbyte, Mara Paun, Phillip Lücking, and Shazade Jameson.

Introduction: The short of it

The rapid deployment of complex computational, data-intensive infrastructures profoundly influences our human environments: private and public, social, commercial, and institutional, on a global scale. Adverse effects have been more or less predictable, and challenging to reveal. Calls for a more responsible practice are heard from many sides. Accountability, fairness, and transparency are much-used terms, and the FAT conference series has aimed precisely at unpacking what they mean for AI development.

This has not been easy, nor can it be. The highly interdisciplinary effort that is required poses specific challenges. There are gaps in understanding between those who design AI/ML systems and those who critique them, and among the critics themselves. These gaps can be defined in multiple ways: methodological, epistemological, linguistic, and cultural. How can we hack our systematized research and design patterns towards new, communal methodologies?

This workshop, which will take place in Barcelona on the 29th of January as part of FAT* 2020, responds to these challenges. In a 3-hour effort, we will translate and thereby ‘explode’ common workflow patterns in AI design into a multidisciplinary setting. By bridging the gaps between and within existing criticisms of machine learning and the practice of design principles and mechanisms, we aim to build a common ground where computer scientists, practitioners, and researchers from the social sciences and humanities can work together. The method we use aims to identify actionable points for our respective work, and to foster a more fundamental appreciation of what it means to combine our methods.

More concretely

The fact that knowledge of AI/ML is concentrated in the hands of too few is an injustice much addressed within the broader FAT community. A lack of diversity in the workforce, and a one-dimensional technical perspective whose design logics revolve around terms like “optimization”, “efficiency”, “fixing”, and performance scores, shape how ML technology is perceived, designed, and deployed (in short, crafted), and both erase the potential of crafting otherwise. Opening up the ML community, and embedding its research in more diverse, multi- and interdisciplinary settings, is loudly called for.

Crafting otherwise requires us to examine both the contents and the methods of the work involved in research on these techniques and their deployment. We see such work as ‘epistemic practices’ that are inevitably value-laden, tied up as they are with the histories, challenges, and traditions of all the connected disciplinary fields. Terms like fairness, optimal, or causal have different meanings in different fields, but there are tougher challenges. Our methods for knowledge production differ greatly between technical and non-technical, quantitative and qualitative perspectives. All of these compete for a voice in the public discourse, that place where many hope to see ‘informed, democratic debate’ take place. We propose the term ‘epistemic justice’ in order to probe reflexively into the basis from which we operate, whether as critical scholars or as designers. Opening up our methods for others to understand means translating honestly, and that entails some soul searching. What informs our methods of working? What assumptions do we take on? Whose voices do we afford authority, and why? What do our different disciplines identify as key principles or procedures that need a fundamental place in the design process? When do we call for NOT designing anything? How do we see the various disciplines coming together to articulate a larger shared vision? In short, how do we do our “epistemic best” in a multidisciplinary setting?

We are a group of scholars with a background in law, science and technology studies, media studies, computer science, and gender studies. Our shared fascination and puzzlement with these questions prompted us to organize a workshop during the ACM FAT* conference as part of the call for sessions to critique and rethink accountability, fairness and transparency.

Our workshop is part of the ongoing effort to cultivate more reflexive epistemic practices in the interdisciplinary research setting of FAT*.

This 3-hour workshop will be structured as follows:

  1. Introduction/ short presentations by facilitators
  2. Charting methodological workflows: Participants will document, share, and discuss their usual workflows to analyse algorithmic systems. During that exercise, participants will compare their workflows across disciplines, and compare their experiences with a prototypical AI design workflow.
    Our goal for this exercise is to make different disciplinary workflows visible and to develop a critical design workflow for AI that enables epistemic justice based on different disciplinary experiences. This will be accompanied by brief presentations of these new critical workflows, explaining their logic, their advantages, the types of questions or tasks for which they would be most useful, and where their limitations lie.
  3. Translation cartographies: Groups will be invited to reflect on their own interdisciplinary process of critiquing algorithmic models. In this exercise we will surface and discuss the terms and concepts that could assist or inhibit collaborative AI design. As disciplines draw attention to different problems and questions, and frame their entry point to AI in different terms, we will map both the necessary terms as well as contested terms that are important for collaboration. Finally, participants will develop a glossary to accompany the new hybrid workflow.
  4. In a closing plenary, we will reflect on what we have learned in the workshop, the thoughts our session has provoked, and how we imagine putting the ideas from the workshop to use.

How can you get involved?

Are you interested in multi-disciplinary work around the design and use of AI systems and planning to attend ACM FAT*? Then join us at the workshop during the ACM FAT* conference in Barcelona on January 29 (note: the workshop will be limited to 30 participants)!

More details can be found on the FAT* 2020 website as well as the University of Kassel website. Please note that we will provide more background in the session itself. The handout documents we are currently preparing will also include a glossary of key concepts and further reading.

In order to help this workshop be fruitful to all, we encourage you to share with us your experiences in advance on this pad.

  • Tell us how you would describe a standard workflow in your discipline. In other words, what are the standard steps that one takes in your discipline to approach and perform a research or design task?
  • Do you have experience with design processes and formats that facilitate interdisciplinarity around ML? Or would you like to share your experiences of how interdisciplinarity has worked for you?

You will soon be able to access the abstract of our paper in the conference proceedings at this DOI. We have also prepared a reading list for anyone interested in the topic (here) and for those unable to attend the conference. We would like to build a collective resource that many people can draw from. If you know of relevant reading lists or literature, feel free to suggest these in the document.

Authors’ note:

Aviva de Groot is a PhD candidate at the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University. Her thesis, ‘“Care to Explain?” Articulating legal demands to explain AI-infused decisions, responsibly’, addresses explainability concerns through the lens of epistemic justice: how can modern-day decision makers in and on the loop of these processes maintain a responsible relation with decision subjects?

Danny Lämmerhirt is a PhD candidate at the Locating Media Graduate School at the University of Siegen. His dissertation project draws on STS, economic sociology, and technography to explore the role of devices in organising bottom-up health data cooperatives and their collective data practices to valorize data.

Goda Klumbyte is a PhD candidate and research associate at the Gender/Diversity in Informatics Systems group at the University of Kassel. Her dissertation focuses on knowledge production in and through machine learning systems from feminist and post/de-colonial perspectives.

Phillip Lücking is a PhD candidate and research associate at the Gender/Diversity in Informatics Systems group at the University of Kassel. His research interests encompass contemporary topics in computer science, such as machine learning and robotics, in relation to their societal impacts, with a special interest in how modern digital technology can be utilized for social good.

Dr. Evelyn Wan is a postdoctoral researcher at the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University, and an affiliated researcher at the Institute for Cultural Inquiry at Utrecht University. Her work on the politics of digital culture and algorithmic governance straddles media and performance studies, gender and postcolonial theory, and legal and policy research.

Mara Paun is a PhD candidate at the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University, working in the ERC-funded project “Understanding information for legal protection of people against information-induced harms”.

Shazade Jameson is a PhD candidate at the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University, working on the ERC-funded “Global Data Justice” project.