ICTC’s Tech & Human Rights Series

#HumanRights: A Brief History of Digital Activism

A Conversation with Professor Ronald Niezen


On May 11, 2020, the Information and Communications Technology Council (ICTC) spoke with Professor Ronald Niezen, the Katharine A. Pearson Chair in Civil Society and Public Policy in the Faculty of Law and Department of Anthropology at McGill University. Professor Niezen was appointed as the William Lyon Mackenzie King Chair for Canadian Studies at Harvard University for 2018–2019. An anthropologist with wide-ranging experience, Professor Niezen researches and teaches in the areas of political and legal anthropology, Indigenous peoples, and human rights. Prior to McGill, he taught for nine years at Harvard University and held visiting positions in the Department of History at the University of Winnipeg and the Institute for Human Rights at Åbo Akademi University in Finland. You can find out more about his research, work, and new book here.

Photo by Robert Bye on Unsplash

Kiera: Thank you so much for joining us today, Professor Niezen! We appreciate your time and are very excited to speak with you. You have a long and notable background in political and legal anthropology and human rights. To begin, can you tell us a little about how you found this field and what drew you to it?

Dr. Niezen: It started when I was working with the James Bay Crees very early in my career, in the 1980s. I answered an ad in the Montreal Gazette for a research position with the Cree Board of Health and Social Services of James Bay. I flew to all of the Northern Cree communities and very quickly became involved in researching and supporting the James Bay Crees’ campaign to end the second phase of hydroelectric development in Northern Quebec. Interestingly, the Grand Council of the Crees, which was on the cutting edge of the Indigenous peoples’ movement, framed the campaign as a human rights cause; it was waged internationally in the U.S., in Geneva, and in Amsterdam with the International Water Tribunal. For two decades after that, I followed the Indigenous rights movement, seeing it as a transnational movement that made use of new technologies in innovative ways, connecting Indigenous peoples in Geneva with their constituents and connecting Indigenous groups and organizations with one another globally. From there, I became interested in the wider phenomenon of how other rights and justice claimants — not just Indigenous peoples — used new technologies and new means of communication to promote a common cause, form communities, and engage in outreach to sympathetic audiences.

Kiera: Your new book is called #HumanRights: The Technologies and Politics of Justice Claims in Practice. The central focus of the book is that human rights and social justice movements are entering a new era as things like social media, artificial intelligence, and digital forensics reshape advocacy efforts. You argue that new technologies are interacting with older models of rights-claiming to reshape the modern pursuit of justice. What were the “older” models of rights-claiming and how have they been changed by technology? What do the “new” models look like?

Dr. Niezen: After World War Two and throughout the movements of the 1960s, the primary way that people made justice causes known to the wider public was through journalistic filters. Street protests in the Civil Rights Movement, for example, were covered by journalists only if they became dramatic enough to capture attention. In the post-WWII era, moreover, many human rights claims focused on the impunity of war criminals. I write about the example of Serge and Beate Klarsfeld, based in Paris, who engaged in a very effective journalistically oriented campaign to bring attention to the impunity of war criminals at home and abroad. Beate Klarsfeld, for example, slapped German Chancellor Kurt Kiesinger at an event in the Berlin Congress Hall, with scores of journalists already there, cameras at the ready, to draw attention to his Nazi past. The main challenge in these postwar efforts toward accountability was to get journalists to cover the stories: to attract enough newspaper and television coverage to bring public attention to an issue and set the slow wheels of justice in motion.

With the advent of the internet, that all changed. Starting with the World Wide Web in 1991, people could suddenly make claims online, and throughout the 1990s, there was an explosion in people’s ability to represent their claims on their own, without a journalistic filter. They could present themselves as claimants to a global public and communicate with one another through instant mass communication. In this way, these new technologies favoured activism. The Zapatista movement in Chiapas, Mexico, was the first major internet-savvy justice movement, even though it wasn’t framed in human rights terms. It made use of people who were experts in managing online information hubs, communicating very rapidly and getting news and information out to a global public very quickly. This was particularly effective for public outreach and rights claims, and in the case of the Zapatistas, it made it impossible for the Mexican government to move militarily against Chiapas without suffering all kinds of reputational and economic consequences.

In #HumanRights, I argue that we are in the middle of another major shift in the relationship between human rights and new technologies, brought about largely by the simultaneous advent of the smartphone and social media. On one side, we have what Shoshana Zuboff calls “surveillance capitalism,” facilitated by big tech corporations that design and profit from people’s use of social media and smartphones. On the other, we are seeing the start of digital witnessing and forensics, whereby things that people post online can become evidence of human rights violations or war crimes. As a result, we have a new tension between the ability of states and others to engage in unprecedented tracking and surveillance and the ability of people to defend rights and pursue justice claims.

There are also other qualities to this relationship between technology and public rights-claiming. For one, experts have become far more important than they used to be, because the tools of human rights violations, and of awareness of those violations, are frequently in code, and who has access to code? Programmers and engineers have taken on much more prominent roles in creating platforms that provide visibility to rights claims, provide protection to rights claimants, and more. By the same token, these same platforms are also used by states to create new kinds of rights violations. So these are all part of a kind of digital arms race: not only between states that engage in hacking and disinformation to gain advantage over other states, but also between rights claimants and powerful actors who use new technologies to control, conduct surveillance, and suppress dissent.

Kiera: Do you think the role of journalists is obsolete or defunct now in light of that shift?

Dr. Niezen: Not at all. Journalists still play a huge role, particularly those who are active in publishing human rights stories. With the collapse of many newspapers and other mainstream media outlets, those that remain are, if anything, more important than ever. We’ve seen a number of very prominent, journalistically responsible newspapers and TV outlets take on a leading role in bringing attention to corruption, to violations of human rights, and to affronts to our sense of justice, knowledge, and truth.

Faun: In your book, you touch upon the idea that ICTs are concentrating human rights evidence-gathering in an elitist space where only those with specialized training (e.g., being able to get through a firewall) or technology can participate. Is this elitism any more prevalent than it was in the “analog” era of human rights, when someone needed access to journalists and older forms of media? In other words, has the ability to press a human rights claim not always belonged to a privileged minority?

Dr. Niezen: That is an excellent question. To answer it, we need to look not only at whether there is an elite that is able to press a human rights claim, but at what kind of elite it is. What specialized knowledge and access do you need in order to press a human rights claim, and in relation to what technology? Going through the different kinds of human rights claims we’ve seen over the nearly 70 years of human rights history, there was a big shift in the internet era, when people became able to build networks or hubs for information-sharing and activism, and big activist networks that collaborate and coordinate. Now, however, there is a dramatic specialization that separates people who can be effective human rights activists from those who can’t. In the early internet era, you only needed a good computer to be able to build websites and pass on information, all of which could be learned fairly quickly. Today, effective human rights advocacy also takes forms like platform-building, which requires specialized tools and knowledge. There are a number of platforms I can think of, like those sponsored by The Whistle or Citizen Lab, where you need a very sophisticated data-driven platform to protect witnesses, to enable them to upload digital information, or to stop hacking efforts by sophisticated actors who want to suppress dissent. These engineering, programming, and platform development skills require years of training and practice to reach a level where one can work with other people to develop an online platform or engage in verification activities.

Faun: That’s a very interesting comment about the level of specialization required to ensure a platform is cybersecure, as even the highest-paying companies have a hard time securing experienced cybersecurity staff. How do activists manage?

Dr. Niezen: Exactly. However, one thing I left out is that the now near-universal access to smartphones is also making digital witnessing possible at every level, so you don’t need to be a specialist to begin participating in digital witnessing. The verification process may happen down the line with someone who is very well equipped and trained in a specialized way, but it can start with anyone who has a cellphone in their hand and can upload a video to a social media platform.

Faun: Throughout your book, you discuss the algorithmic curation of stories we see and, particularly, stories that have successfully “gone viral,” even in the analog era (such as the image of Phan Thị Kim Phúc, commonly known as “napalm girl”). In your research, how have you seen storytellers attempting to create “virality,” and what qualities does a justice claim-style story need to go viral?

Dr. Niezen: First, let’s think about this: what doesn’t work? What doesn’t work is what I refer to as the “numbing effect of numbers.” When we approach a justice claim by talking about the number of people who have been harmed, displaced, killed, injured, and so on, people don’t relate to it. What seems to work, interestingly, goes beyond the technologies we are using, back to something that is intrinsically human — going all the way back to the campfire — which is telling stories, creating narratives, building characters and following them, and creating a connection between audiences and the characters involved in the injustice.

Kiera: You mentioned earlier that these technologies are not only in the hands of activists. Turning to that idea, you’ve written that technology has “become both a remedy for the most insidious crimes against humanity and a vehicle for their perpetration.” How are technologies also reshaping the ways in which actors try to prevent or disrupt activists and justice claims?

Dr. Niezen: New technologies have increased states’ capacity to engage in surveillance and follow people in real time. Some states are developing algorithms and platforms for facial recognition and tracking, which can be used for the collective good or for repressive ends. COVID-19 is a good example: the same platforms that are being used for contact tracing, to follow the transmission of the disease, can be used to disrupt the actions of dissidents. Some states are also using new information technology to run disinformation campaigns, because if you can create a media environment confusing enough that the truth is hard to see, people cannot easily pick out the truth or assign responsibility for anything to anyone. This hinders activism because people cannot conduct so-called naming-and-shaming. How can you apply shame to something that people don’t see?

There is also raw censorship: a simpler use of technology and state power to repress. A recent example is the Lebanese woman in Egypt who referred to Egypt as a “son-of-a-b**ch country” on social media after she was subjected to sexual harassment as a tourist. She was arrested for the statement and sentenced to eight years in prison, and she served two years before an outcry was enough to get her released. The point was made: the government didn’t want people inside the country making embarrassing statements that shamed it. Ultimately, all of these aspects are coming together to empower state actors who are using new technologies with impunity to shore up their hold on power and influence.

Faun: This is a slight pivot, but you write in your book about the well-known problem of a dearth of women in the tech sector as well as in the “tech-left” communities of Silicon Valley. What ramifications do you think that a lack of gender (and/or ethnic) diversity has on tech-enabled activism and on the specialists that you mentioned before?

Dr. Niezen: I can only project and imagine what might be missing, but when we think about technology as a creation — as something that has information and priorities going into it from the very beginning — it’s clear that this issue will have very wide ramifications. Who are the people dominating the tech industry? Mostly young men, and few of them members of visible minorities. This profile (and the lack of diversity) influences the priorities in the development of new platforms. If we only have certain kinds of people working in tech, then we only have certain kinds of information, ideas, and priorities going into the development of new technologies. The problems with this are most visible in the use of technology in criminal law, especially predictive policing. For society to be more widely served by technologies, we need more diversity in the development of technology: more women, more minorities, and more people with different kinds of experiences.

Faun: As a final “pivot” question, I enjoyed your methodological section on “Digital Ethnography” and the limitations imposed by not speaking to someone face-to-face. Given the context of COVID-19, where face-to-face communication is logistically and sometimes ethically difficult, what reflections do you have for people who are suddenly conducting social research online?

Dr. Niezen: It’s difficult, right? I make the point quite forcefully that digital ethnography by itself isn’t really ethnography, because when people are online, they are curating the way they are represented. When personal presentations are carefully thought through, that works against the serendipitous and accidental nature of ethnography, in which researchers spend as much time as they can in particular places (even institutional spaces), hearing different stories and allowing accidents to happen.

With COVID, it’s not quite as stark a difference as the typical online-offline dichotomy. I’ve seen people in Zoom meetings with kids running into the room and cats walking across the keyboard, which opens up the possibility of a different kind of conversation and insight into people’s lives at home, which is otherwise impossible in a bureaucratic setting or an office.

Kiera: As a legal anthropologist, what is your perspective on the efforts by legal institutions to mitigate the negative social impacts of technology (e.g., Europe’s GDPR)? In particular, in light of tracking during COVID, do you think laws will be effective tools for mitigating harmful uses of technology later on?

Dr. Niezen: This is a hard question. I haven’t written about it as much, but I can answer in another way. There’s a great video of San Francisco in 1906 in which a trolley car travels down a major street full of mayhem. Horse-drawn carriages, cars, and pedestrians all weave in and out, because there are no traffic signals anywhere: traffic lights hadn’t been invented yet. It was only in the 1920s, after cars became more powerful, that traffic signals were installed to regulate and direct the flow of traffic.

I believe we are at an analogous stage with new technologies today. To date, there has been no will to regulate them; instead, there has been sharp resistance to regulation. But one key difference between 1906 and now is that back then, the creators of the technology — the automobile manufacturers — had no problem with traffic regulations, because regulation didn’t mean they were going to sell fewer cars. In fact, regulation was good for them: it kept their customers safe. Regulation now, in contrast, is a scary thing for big tech corporations, because it means an erosion of their access to data, which is their driving force: the data that gives them a vision into our private lives and has enabled the enormous revenues they’ve generated, mostly through targeted advertising. They are putting up a fight, but I think the only way to move forward and create protections against the worst kinds of abuses is to regulate our data and what people are able to do with it. We are seeing the beginnings of this now.

Photo by Alex King on Unsplash

Kiera: To end, looking ahead, what do you foresee as being the most important outcomes or changes in relation to technology and human rights in the next few years?

Dr. Niezen: One of the things I see coming is a greater acceptance of digital witnessing in the international regimes governing justice claims, particularly in international criminal law. The Human Rights Center at UC Berkeley is developing protocols for the use of digital evidence, which will make it more acceptable for investigators, prosecutors, and judges to make use of digital data. I recently learned about a judge in a refugee tribunal who had to call a 15-minute recess to have someone explain to him what Instagram was; a claimant had used their Instagram account to present strong evidence that they were part of a persecuted minority. Not every judge is like this, and levels of understanding of new technologies and of how they can be used as evidence vary, but with the new protocols, I think digital evidence is something we will see as a greater part of international criminal law.

Thank you so much for your time! It was a pleasure to speak with you.

Professor Ronald Niezen is the Katharine A. Pearson Chair in Civil Society and Public Policy in the Faculty of Law and in the Department of Anthropology at McGill University. He holds the Canada Research Chair in the Anthropology of Law, and was selected as the William Lyon Mackenzie King Chair for Canadian Studies, Harvard University, for 2018–2019. An anthropologist with wide-ranging research experience, Professor Niezen researches and teaches in the areas of political and legal anthropology, Indigenous peoples, and human rights. You can find out more about his research and work here.
Faun Rice is a Senior Research and Policy Analyst at the Information and Communications Technology Council (ICTC). Faun is a social scientist with previous experience in museum audience and visitor research and in endangered language revitalization. With ICTC, Faun brings her perennial interest in human social organization to bear on the impact of emerging technologies on the labour market, career pathways, and urban and rural life in Canada.
Kiera Schuller, Research & Policy Analyst (ICTC), holds a background in human rights, international law, and global governance. Kiera launched ICTC’s new Human Rights Series in 2020 to explore the ethical and human rights implications of emerging technologies such as AI and robotics on rights, equality, privacy, freedom of expression, and non-discrimination.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI for issues like equality, privacy, and the right to freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and uses of technology for social good.
