How to govern visibility?

Torin Monahan
surveillance and society
Apr 14, 2021
(Caption: Ubiquitous visual technologies. Source: https://unsplash.com/photos/fptbhXOdMoo, by Kirill Sharkovski)

In the following blog post, Rebecca Venema shares the underlying ideas behind her article, “How to Govern Visibility?: Legitimizations and Contestations of Visual Data Practices after the 2017 G20 Summit in Hamburg,” which was recently published in the journal Surveillance & Society.

///

CCTV, smartphone cameras, crowd maps, dash cams in cars, body cams worn by protesters or police officers, vast amounts of images shared online: public spaces and everyday life are now highly saturated with visual technologies and images. This has significant implications for public life in general, and for political protest, policing, and surveillance constellations in particular. The gatherings and practices of many different kinds of actors are increasingly visible and can be watched and tracked, not least thanks to facial recognition tools that allow one to map and match facial features.

These developments bring both potential benefits and risks: opportunities for crime prevention and safeguarding public security, but also concerns about privacy infringements and creeping shifts towards totalitarian mass surveillance.

These ambivalences became particularly visible in the context of police investigations after the 2017 G20 summit in Hamburg, which I used as a case study in my article featured in Surveillance & Society.

Protests against the summit had culminated in various violent confrontations between protesters and the police, as well as in severe riots. In the subsequent prosecutions, the police collected more than 100 TB of photographs and videos, analyzed them with the help of a third-party facial recognition service, and published more than two hundred pictures of suspects online. What sparked my interest was that these practices triggered heated and controversial public and political debates. In my analysis, I was particularly interested in how different actors legitimated and contested practices of collecting, managing, analyzing, and publishing visual data.

Police authorities characterized visual data and facial recognition as objective and necessary evidence-providing tools to fight crimes.

As the main result, I identify two antagonistic evaluative schemata. Police authorities characterized visual data and facial recognition as objective and necessary evidence-providing tools to fight crime, and stressed that all practices were covered by existing law. Critics, in turn, expressed concerns about infringements of civil rights, the trustworthiness of police authorities, and surveillance capacities creeping towards a “Big Brother” scenario.

Based on my findings, I draw attention to topics that remained blind spots in the debates but that I deem important for further deliberations on ethical and legal norms for visual data practices and for governing visibility. One aspect I discuss further is the trust placed in facial recognition as a neutral and “evidence-producing” tool. The operating principles of algorithmic tools, their potential analytical biases, and the social implications of bulk storage of citizens’ faceprints went unquestioned in the debates, yet they are important to discuss given the prominence of visual data practices and facial recognition in contemporary societies.
