Tools and Takeaways from the 2022 Amnesty International Digital Verification Corps Summit

Madeleine Wong
Human Rights Center
8 min read · Oct 31, 2022

Reflections from Berkeley DVC members Madeleine Wong, Danilo Gleichmann, Sofia Schnurer, and Yaas Farzanefar

HRC Investigations Lab students reflect on the 2022 DVC experience.

Focus on the details: advanced visual analysis

By Madeleine (Maddi) Wong

“Roughing the passer. Five-yard penalty.” Because of the modernization of football, advanced visual analysis has become pivotal to making live calls during games, potentially changing the outcome of four-hour matches in five seconds. As someone who is passionate about sports and its intersection with gender equity and social justice, I was intrigued to discover that verifying evidence of human rights violations could be so closely linked with football. During the advanced visual analysis workshop with Amnesty International’s Stella Cooper, I was introduced to many skills and techniques that would help me enhance my verification and discovery skills in the Digital Verification Corps (DVC).

Stella began our session by introducing a framework useful for verifying any piece of digital media. She instructed us to (i) work from concrete observations we identify in a piece of media, (ii) acknowledge our biases, and (iii) seek what we don't see. She also provided a series of questions to ask ourselves before, during, and after our video analysis sessions, teaching us to go in with a purpose, never assume information, stick to our goals, and follow up with grounding exercises. We then applied our new framework to a current case study that Amnesty International was working on, identifying important components of a series of videos. In small groups, we focused on identifying language, perspective, location, and actors when analyzing the videos. This allowed us to cement our new skills so we could bring them back to our DVC team at UC Berkeley.

Maddi Wong and Stella Cooper (L-R) at the Digital Verification Corps summit in Mexico City.

Stella also highlighted different types of bias and encouraged us to practice collecting ourselves after viewing graphic media. We came to understand confirmation bias, visual bias, and subjective behavior by analyzing how the media portrayed the actions of Harry Styles and Will Smith. Although bias floods interpretations of popular culture, Stella also showed us its implications for human rights investigations, walking us through the impact of bias on coverage of the 2020 Kenosha protests and the Kyle Rittenhouse shootings. She ended the session by encouraging us to always seek missing information by layering different types of media. She also described applications outside human rights, specifically in sports: I had the opportunity to speak with Stella about the field of sports investigative journalism and professional development within it, which opened my mind to new career opportunities and goals. The session on advanced visual analysis was just one of the highlights of the DVC Summit in Mexico City, which was a fulfilling, insightful success.

Understanding video manipulation and tackling deep fakes

By Danilo Gleichmann

Video manipulation can be divided into two main categories: human-centered and AI-centered. The former refers to traditional editing of footage with the intent of altering, misrepresenting, or subverting context. The latter is often more malicious, using machine learning to doctor or fabricate high-quality fake content.

What sets deep fakes apart from other challenges in the open source intelligence (OSINT) space is that this type of video manipulation is becoming increasingly sophisticated and difficult to distinguish from authentic visual evidence. Detecting it requires a critical eye for nuances in both footage and body language. Most people know of deep fakes from obvious celebrity impersonations that make a celebrity appear to say something funny or outrageous. Currently, deep fake detection is still performed largely by humans, although computer-centric innovations are on the horizon.

Danilo Gleichmann and other DVC students speak during a workshop at the 2022 summit.

Some of the telltale signs that help humans detect deep fakes can be hard to spot. In the deep fakes workshop at the DVC summit, we practiced detection in real time. We were instructed to look at mouth and lip movements, since glitches in those areas are the biggest giveaway of deep-fake activity. Other signs to look out for are unusual ear shapes, unnatural eyeglass frames, and faces that appear 'too symmetrical.'

At the Human Rights Center’s Investigations Lab, there are several tools we can use to develop our critical thinking skills, and learn how to spot video manipulation and deep fakes. The Digger Project has created several exercises to help people detect artificial videos, sounds, and pictures. The main risk with deep fakes in our DVC work would be for our team to take all visual materials at face value when conducting discovery. It may sound like a no-brainer to think critically when evaluating digital materials, but maintaining a constant critical eye is easier said than done. Even the most experienced OSINT investigators can be fooled by a sophisticated deep fake.

3D modeling and human rights investigations

By Sofia Schnurer

As someone deeply passionate about recent technological developments that aid human rights investigations, I found the training on 3D modeling to be one of the most memorable presentations at the Amnesty International DVC summit. Tom James, a 3D Design and Research consultant with Amnesty International, has an advanced architecture background that he applies to the reconstruction of crime scenes. One familiar difficulty he described in visual investigations is the absence of key footage needed to verify an incident. 3D modeling can provide that necessary context for crime scene and other incident reconstructions. Tom referred to many examples of 3D imagery used to recreate a scene, one being a digital reconstruction by the New York Times and Forensic Architecture (where Tom used to work) of a chemical attack on an apartment building in Douma, Syria, in 2018.

Sofia Schnurer listens to a presenter at the DVC summit.

The New York Times used the reconstruction of the chemical attack to verify whether a chlorine-filled bomb that killed 49 civilians was dropped by a Syrian military helicopter. Leaders in the Syrian government denied culpability in the attack, claiming there was no evidence of any chemical use. Because the Times and their partners at Forensic Architecture could not visit the scene in Syria, they used videos from Syrian activists and visual evidence within Russian reports to reconstruct the bombing of the apartment. Available images of the bomb revealed key features that researchers used to determine what type of bomb it was and how it landed on the building. By building the 3D model that most accurately reconstructed the bomb's placement and impact on the apartment, researchers were able to conclude that the chlorine-filled bomb had been dropped from a Syrian military helicopter.

It's remarkable how researchers can use 3D reconstruction to piece together a scene without ever being physically present. This is a fascinating addition to the investigation process that, even with its limitations, can only expand within human rights work.

Data storytelling for justice

By Yaas Farzanefar

Datasets tell stories, and the stories they tell shape our perspective and impact our everyday lives. When we collect and analyze data, certain values and interests are reproduced and amplified, while others are not. Yet these hidden gaps, along with systemic bias in datasets, are rarely questioned! At Amnesty International's 2022 DVC summit, Sophie Dyer, tactical research advisor at Amnesty International's Crisis Evidence Lab, held a workshop on data storytelling for justice to highlight the ways data is used in the field of human rights, and why we must question our data.

The workshop focused on data-gathering techniques, missing data sets, and data bias. One issue faced in the data-gathering world is the problem of missing data sets: spaces where no data exists at all. This absence of data is unfortunately often correlated with the most vulnerable groups in various contexts. Missing data sets reveal a lack of attention to certain communities and hint at hidden social biases and indifference that affect the world.

Yaas Farzanefar (far right) works with fellow DVC students on a project at the 2022 DVC summit.

An astonishing finding for me was the concept of data lineage. Often when we hear a fact, we don't question where the data came from. Sophie highlighted this with the example of women's fertility rates after the age of 35. It's often repeated that one in three women over the age of 35 will not be able to conceive, but where did this data come from? As it turns out, the data behind this popular statistic is completely outdated: it derives from French birth records dating back to the seventeenth century, before the advent of modern medicine. Yet this "fact" continues to haunt married couples as the age of marriage rises in the modern age. New research suggests that female fertility after 40 is actually improving today. This example highlights how outdated and biased data can seriously impact our lives. We hear thousands of facts throughout our lives, but do we question them enough?

At Berkeley's DVC, we are assigned regions to observe, and we analyze digital media to identify human rights abuses. Our project this semester is on police brutality against protesters, specifically instances of police weapons used against civilians. However, these videos don't inherently include a gendered lens, and this framing obscures the brutal reality that gender is often a defining factor in how police choose their weapons. Police regularly engage in sexual harassment and assault against female protesters around the world, especially in authoritarian contexts. Female-identifying and nonbinary individuals also experience violence at protests differently, as much of the violence is directed at perceived gender identity. The data we are gathering fails to show this: because of our framework, we either do not review videos that don't show live ammunition or categorize all of these abuses as one. The data storytelling for justice workshop was definitely an eye-opening experience, one that urges us to question our data, whether in research or everyday life.

To learn more about the Human Rights Investigations Lab, please visit the Human Rights Center website. To learn more about the DVC, please visit the Citizen Evidence Lab website. Yaas Farzanefar co-authored a blog with the Citizen Evidence Lab and Amnesty International, which you can read here.
