Next steps in data-intensive research at the intersections of technology, law and society
CIM Blog, Sept 2019
Noortje Marres (University of Warwick), Matt Spencer (University of Warwick) and Gloria González Fuster (VUB)
“Are you the gentlemen coming for the algorithms?” asked a nursery manager of CIM PhD student Loup Cellard and his informant during fieldwork on the French government’s algorithmic accountability initiative.
Changing infrastructures of knowledge transform relations between citizens, governments and researchers. In Loup’s case, designers and an ethnographer were brought into the world of public administration and policy, tasked with making algorithms ‘transparent’ for citizens.
New data infrastructures and new data regimes — open data, algorithmic accountability, the GDPR — create methodological possibilities as well as new objects of study. The “Rights as Methodology?” workshop, at which Loup relayed this encounter, brought together scholars from legal studies, science and technology studies, philosophy and social research, and was designed to explore possibilities for collaboration between the Centre for Interdisciplinary Methodologies (University of Warwick) and the Research Group on Law, Science, Technology and Society (Vrije Universiteit Brussel).
Intersecting research interests
The workshop was informed by the following research interests.
First, for those coming from the methodological side, that of designing and doing data-intensive research in digital society and cultures, there is increasing interest in and concern with how regimes for data governance shape contemporary research agendas. With recent changes in regulatory frameworks, such as the General Data Protection Regulation (GDPR), and shifts in social norms about what constitutes legitimate data use, the mutual influence of methodological and legal frameworks for “data in society” is clearer than ever before.
This can be contrasted with the older concerns that animated debates about data-intensive social research: since the 2010s, much work in this area, especially public interest-oriented research, relied on the Application Programming Interfaces (APIs) of online platforms, such as Twitter and Facebook, for data access. A primary concern about this research was that, by adopting online platforms’ data structures, it risked uncritically replicating the ontological assumptions designed into them, such as the notion that sociality can be equated with “networking”. However, in the wake of recent societal scandals about data collection by social media platforms, and the resulting changes in platform settings and permissions, researchers are increasingly looking for alternative ways of accessing platform data. Prominent among these are approaches that rely on individual platform users’ rights to access information, such as data subject access requests. While rights-based research methodologies predate APIs, their increased prominence in platform research has made computational researchers much more aware of the ways methodology intersects with rights frameworks. This raises a different set of methodological questions, about the interactions not just between social science and digital technology, but between law, social research and computation.
Second, coming from the side of a social, legal, ethical and philosophical engagement with regulatory frameworks for data governance, what is striking about our contemporary moment is a possible change in our understanding of how these frameworks can be deployed in relation to research, and of whether their role is constraining or enabling. Privacy and data protection law have traditionally been portrayed as an obstacle to, or at least a limitation on, the access and further processing of data in this context. There are, however, indications that thinking about the role of data protection rights is evolving and, to some extent, being reversed: (legal) tools are not merely or exclusively directed towards the defence of an individual’s rights and interests, but can be regarded as targeting, more generally, the pursuit of common goals, such as enhancing the transparency of contemporary data practices, or fighting data-based bias and discrimination. In this context, the use of data subject rights, such as the right of access to personal data, for knowledge purposes opens up a different, constructive approach to a possible exchange of properties and features between methodological and regulatory frameworks.
It was against this backdrop that the workshop organisers asked: under what conditions are “rights” being deployed, and under what conditions can they be deployed, “as methodology” to conduct social and cultural research in data-intensive societies?
The day started with a presentation by René Mahieu (VUB) entitled “The right of access to personal data as a methodology.” Adopting a practice-based perspective on who uses access rights, Mahieu’s research deploys data access requests as a methodology to investigate organisational processes of transparency: his study asked 100 participants to send access requests to diverse organisations in the Netherlands, from a plastic surgery clinic to a B2B data broker, and found great variance in the information organisations provided in response to these requests. The ensuing presentation, by Loup Cellard (Warwick), was based on fieldwork in Etalab, the French government’s open data task force, and similarly analysed information requests to the French administrative data agency (CADA). Cellard’s study focused specifically on algorithmic transparency: the commitment of the French government to make public the algorithms that underpin government services in areas ranging from tax to the allocation of nursery places. He discussed a performative effect of these transparency initiatives: as a consequence of the public policy commitment to explain algorithms, government services are increasingly defined in algorithmic terms.
The follow-up discussion offered interesting reflections on the methodological conditions of studying organisations through transparency requests: both studies bring us face to face with the relative opacity of organisational processes as a condition for the study of transparency regimes. Both studies ended up adopting a pragmatist approach: they establish the properties of organisational transparency regimes by studying the effects of the implementation of these regimes (which types of request are honoured, by whom, and how?). Such an approach faces challenges of validity and reliability, as Dr. Rosamunde van Brakel (VUB) pointed out. But it can also be understood as extending the empiricist impulse of big data methodology by different, experimental means: big data analysis, like investigations based on data access requests, operates under conditions of relative opacity about data’s origins, and a relative lack of knowledge about a baseline population. Cagatay Turkay (Warwick) noted that computational methods can also be used to break open the opacity of algorithms. Taking advantage of this requires more reflection on, and a practical reworking of, explainability as not only a legal principle and a social scientific project, but an interdisciplinary methodology.
Laura Drechsler (VUB) offered lucid reflections on the research exception contained in the GDPR, and on the new kinds of tests emerging as a consequence, which attempt to evaluate whether research would be rendered impossible by the absence of data. Ine van Zeeland and Jonas Breuer (VUB) followed this with a report on their current research into the intersection of rights, design and technology in the tricky case of consent under CCTV surveillance, and the potential of smart wristbands in museum settings. In his response, Professor Joris van Hoboken (VUB) reflected on the wider possible ramifications of the GDPR for academic research, focusing on the question: who gets defined as the data controller of personal data collected and analysed for research purposes, the higher education institution or the researcher? This makes a potentially huge difference to how, and by whom, the relation between the researcher and research subjects is constructed. Matt Spencer (Warwick) arrived at a similar question, asking under what increasingly widespread conditions research gets formatted as a legal process. Considering the consequences of the leakage of this formatting into moments and occasions previously defined in methodological, not legal, terms, Spencer asked whether law is likely to emerge as a new master discipline in the datafied society.
The next session, on Data Rights as Feminist Methodology, was kicked off by Helena Suarez (Warwick), who presented her doctoral project on the digital mapping of feminicide. Helena drew attention to the role of data visualisation in the mediation of this issue in feminist activism, noting that it inevitably mediates not merely the fact but the attribution of feminicide, and thus has methodologies for the identification of victims built into it, methodologies which require explication and critical reflection. In her commentary, Naomi Waltham-Smith (Warwick) raised the question of whether and how constructivist epistemologies are capable of broaching the question of violence, as a phenomenon or event that takes us beyond questions of definition, labelling, shaping and discursive contestation. At the same time, addressing feminicide at “the interface”, in this case of digital mapping, requires de-individuating the question of violence, and not assuming “women” as already constituted subjects of the recording of deaths.
Alessia Tanas (VUB) mapped privacy design approaches for the group, laying out the contrasting visions of Privacy Enhancing Technologies, Privacy by Design, Data Protection by Design, and Privacy Engineering. Richard Terry (Warwick) walked the group through the methodological possibilities of reverse engineering user data schemas through interface analysis. Later in the day Simone Casiraghi (VUB) gave his presentation “Opening the black box of identity in biometrics practices”. Investigating what concept of identity is presupposed in biometrics, Casiraghi argued that the technology enforces a reductionist approach to identity, and flagged the gap between technological and social scientific understandings of identity: between identity defined in terms of a correspondence among unique markers, and identity as enacted, projected and fundamentally creative. The ensuing discussion opened up the question of what happens to this opposition if we adopt a practice-based perspective on the implementation of biometrics: as biometrics is implemented in social environments like airports, isn’t it likely that we will be able to observe the enactment of social identity with biometrics (the “Privium member”), and indeed the cobbling together of the bio- and socio- processes of identification?
The final session consisted of commentary by Dr. Michael Castelle (Warwick) and Professor Mireille Hildebrandt (VUB). Castelle outlined his conceptual framework of en- and de-textualization, the ways in which phenomena are loaded into, and out of, text as part of processes of datafication, arguing that research based on data access requests will have to operate under this constraint. He noted that there is still a real clash between accountability initiatives and the dominant logic of information system production. Professor Hildebrandt began by pointing out that there are many different types of law that must be considered in relation to machine learning, including the Machinery Directive, which regulates safety, and the GDPR, which has as one of its purposes to make consent meaningful again. She foregrounded the importance of the designed environment in the structuration of data subject-researcher relations: if we are to deploy data rights in a pragmatist fashion, as discussed above, then we must take into account the choice architectures scripted into the back-end of digital architectures in society.
While there was no time left to reflect on next steps at the end of a full day, conversations during the day suggested broad agreement that emerging transparency and data protection regimes present both an important topic and a resource for interdisciplinary research on data-intensive societies and cultures. The deployment of data access requests for social research purposes presents an especially fruitful focus in this regard. This still experimental approach points towards a broader, innovative methodological strategy: that of deploying practice-based, empiricist strategies for researching data-intensive social formations. At the same time, it must be recognised that this is a methodologically challenging undertaking, as the “objects of research” surfaced by these means are marked by ambiguity: when conducting social research by means of data access requests, are we studying organisational practices, or data infrastructures? However, we do well to remember that this is a long-standing issue in digital social research, one that is unlikely to just “go away”, and unlikely to be rendered manageable through simple methodological checks in the absence of a stable baseline population. There are several precedents in social research traditions that may be helpful points of reference here, such as ethnomethodology and STS-informed work in political sociology on “trials of explicitness” (Muniesa and Linhardt, 2015), which concerns the empirical elicitation of the properties of institutions and government in public events (public ceremonies, police arrests, …).
In this regard, one of the principal take-aways of the workshop is the proposal that the deployment of rights can enable the development of alternative methodological frameworks for analysing and specifying accountable and explainable data systems in society. Instead of assuming “the transparency of transparency” (the idea that AI can be made fully transparent, and readily explainable, if only the right protocol gets implemented), this approach treats data access as a methodological occasion for making public socio-technological data practices, which are likely to be marked by dynamism, ambiguity and complexity. That is, rather than presume full transparency of systems as an epistemic horizon for data-intensive research in a computational society, we would seek to develop rights-based methodologies that achieve intelligibility or “explainability” of data-intensive social formations under conditions of relative opacity.
The one-day workshop Rights as Methodology? was co-organised by Professor Noortje Marres (Warwick), Professor Gloria González Fuster (VUB), and Dr. Niels van Dijk (VUB) and was supported by EUTOPIA, the European research alliance.