AI to seek justice in the Syrian conflict

Equinox AI Lab
Feb 27, 2023


Carla Acosta — Visual Designer Equinox AI Lab


It is hard to write about war. I was born in Colombia, a country that experienced armed conflict in the 80s. I was born in the ’90s, a decade later; still, the conflict sensitised me. That’s why I’ve always been interested in the Middle East conflict, especially in Palestine and Syria. Syria and Colombia have a lot in common: they are beautiful places with amazing people, and they even share city names, like Palmyra (Syria) and Palmira (Valle, Colombia).

I’ve read several books to understand the conflict, and the most important thing I’ve realised is that wars never have just one cause; it is essential to understand multiple contexts and triggers. In this study, I’ll write about the Syrian war, the conflict’s digital media, and Artificial Intelligence.

Syria is a country that has resisted war for over a decade. Since the beginning of the conflict, activists on the ground have risked their lives to document human-rights violations, from torture and attacks on protesters to indiscriminate rocket strikes and barrel bombs [1]. There are millions of hours of footage to analyse in order to understand what happened, expose it to international organisations, and seek justice. All this content is known as user-generated content (UGC) or eyewitness media. AI has been used to detect war crimes, chemical weapons, unauthorised guns and victimisers in that content.

In this text, I’ll dive deeper into Syrian war user-generated content and how it relates to AI: how it can leverage justice processes in international organisations, spare investigators trauma, and recognise guns and bombs to build solid cases. Finally, after this analysis, the text will answer the question: are UGC and AI enough to seek justice?


Syrian frictions have several roots, from past unresolved conflicts, land disputes and political tensions to ethnic and religious differences. It would take me a whole book to explain the war; that’s why I prefer to recommend one that explains it from a broad view: Syria, by Victor de Currea-Lugo [2]. All the tensions I mentioned before erupted when young people took to the streets in the southern city of Daraa in March 2011 [3].

Since 2011, young protesters have been met with harsh government crackdowns and increasing violence from both government forces and civilians. Soon the conflict became a civil war, in which different nations intervened and atrocities were committed. Videos, photographs and even voice recordings were uploaded to the internet to document the war in real time. As a result, there are more hours of digital media than hours of conflict.


Eyewitness media need to be organised and protected. That’s why the Syrian Archive was born. “It is a private Syrian-led project that aims to preserve, enhance and memorialise documentation of human rights violations and other crimes committed by all parties to conflict in Syria for use in advocacy, justice and accountability. They shed light on how civilians’ documentary practices and experiences have significantly contributed to the production of multi-source digital testimonies within diverse and constantly transforming local, social, political and organisational contexts” [4]. The Syrian Archive website holds 3 million uploaded recordings. Those videos are always verified to stop misinformation: time, date, location and sources are checked. This material makes collecting Syrian evidence and building cases more manageable. The United Nations is working with AI to analyse the recordings and judge victimisers.


-AI, social media and war-

In the Syrian war, AI is used for diverse purposes. The first is to capture content: the Syrian Archive maintains a well-organised database of verified videos and images; however, many videos remain that no one has seen. AI can find Syrian war content on the internet and notify investigators; machines are trained to recognise places, dates and images.

On the other hand, AI is also capturing content from social media to ban it from Facebook, Twitter or YouTube (TikTok didn’t exist at the time, so there’s not much content there, and Instagram was barely a year old). The COVID-19 virus forced social media workers to empty offices and rely on automated takedown software [5]. “AI is notoriously context-blind” and is threatening and deleting documentaries and videos, confusing them with terrorist content. “With the help of automated software, YouTube removes millions of videos a year, and Facebook deleted more than 1 billion accounts last year for violating rules like posting terrorist content” [5].

Different algorithms are looking for war content on the internet to send it to, for example, the International Criminal Court or the United Nations. Other algorithms are taking down “dangerous” content (according to what they’ve learned is dangerous). Is AI helping or blocking war justice? How can algorithms differentiate war content from terrorist content? How can we benefit from terrorist content to seek justice?

The main problem is that when memories are lost, the acts themselves will be forgotten, and justice will never come. Preserving war content is essential to humanise war. People have uploaded their relatives’ deaths to have proof and to expose it to the world; as hard as it sounds, it is the only evidence they have of what’s happening in their country. For them, it is difficult to trust the government or other internal institutions.

Artificial Intelligence needs to be more efficient in this case, and making it so is the job of data scientists, UX designers, psychologists and other tech professionals. Kids are often exposed to internet content, and it’s reasonable that parents don’t want their children to find explosions, murders or torture online. Of course, they have the right to decide what content they want to watch. But Syrians and victims around the world also have the right to speak and denounce crimes. As professionals working in tech, we need to create strategies and solutions that promote free speech without violating others’ rights, for instance, user profile creation, tags, or specific platforms for kids and for war content. A great example is eyeWitness, an app created by the International Bar Association; it allows one to upload sensitive content, store it securely in digital lockers, and use it later in investigations or trials [6].

-AI to avoid trauma-

AI can review the data, reducing the time human investigators spend watching hours of traumatic videos and images. Can you imagine spending your days looking at torture, murders and explosions? AI can analyse and cluster different kinds of crimes to support further investigation, and also delete duplicates or unrelated images.
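Duplicate removal, one of the tasks just mentioned, is often done with perceptual hashing: near-identical frames produce the same (or very close) hash, so only one copy needs reviewing. A minimal difference-hash sketch on toy grayscale frames, assuming frames arrive as 2D lists of pixel intensities (real systems resize frames and compare hashes by Hamming distance rather than exact equality):

```python
def dhash(frame):
    """Difference hash: one bit per horizontal neighbour pair ('is the left pixel brighter?')."""
    bits = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return tuple(bits)

def deduplicate(frames):
    """Keep only the first frame seen for each hash value."""
    seen, unique = set(), []
    for frame in frames:
        h = dhash(frame)
        if h not in seen:
            seen.add(h)
            unique.append(frame)
    return unique
```

Because the hash encodes brightness gradients rather than exact pixel values, a re-encoded or slightly brightened copy of the same video frame still maps to the same hash, which is exactly what makes this useful for archive triage.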

During a war, trauma is not limited to the battlefield. Investigators, journalists, and even social media users can be exposed to violent and distressing content that leaves lasting impressions. This is called secondary trauma, and it is the result of second-hand exposure [7].

In Table 4 [8], we can see that from a sample of 209 journalists and humanitarian workers interviewed, 33% had seen disturbing material online once per week or more. So if someone consumes war media on a regular basis, they can experience trauma, and sometimes they even feel ashamed of feeling traumatised because they are not the direct crime victims.

Table 4 — Making Secondary Trauma a Primary Issue

An editor at a news agency explained how he was traumatised by the unexpectedly distressing content of the picture of Alan Kurdi, a 3-year-old Syrian boy found drowned on a beach in Turkey in September 2015.

Alan Kurdi photograph taken by Nilüfer Demir

“The dead child on the beach. I walked into the office, and a colleague rushed up to me saying, ‘look at this, look at this, it’s really important’, and you don’t have time … the guards haven’t gone up, and I spent the entire evening in tears, I was really shaken by it…” [8]

AI analyses the content; it can be trained with computer vision techniques to tag disturbing images or sounds in media and warn the viewer. “If there is a warning of what you are about to see, you are steeled for it; you can brace yourself” [8] (for example, the warning notices on Instagram). AI can also create media clusters, so investigators don’t have to review all the war material but just the fragments relevant to the specific cases they are building.

At the same time, Natural Language Processing can be helpful. As almost all Syrian war UGC is in Arabic, it is hard to share the burden among all the NGO members; those who speak Arabic are loaded with traumatic content. AI translation can help all members understand the recordings faster and more easily. In fact, the first time technologists (IBM) and international criminal justice professionals collaborated was to translate evidence at the Nuremberg trials between English, French, German, and Russian [9].

If Artificial Intelligence is able not only to reduce review time but also to protect the mental health of second-line workers, it is worth it: fighting injustice for victims and survivors without traumatising those who want to help.

-AI, war content and Object Detection-

Object recognition algorithms work to detect specific objects in recordings, for example, weapons and all related data, to help build cases. (Remember that some countries are accused of supplying the government with weapons and interfering in the war.)
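For evidence-grade work, detections have to be checked against ground truth. Object detectors are commonly evaluated with intersection over union (IoU): how much a predicted bounding box overlaps a hand-labelled one, with a detection usually counted as correct above some threshold (0.5 is a common choice). A minimal sketch:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes don't overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

This is the kind of quality check the article argues professionals must run before a model’s output can be trusted in a case: a weapon detector that scores well on IoU against verified annotations is measurably, not just anecdotally, accurate.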

One example of object detection comes from Mr. Khateeb, founder of the Syrian Archive, who wanted to assemble a searchable database of all munition attacks. He hoped to build a case in which Syria and its military backer, Russia, were accused of using internationally banned weapons during the conflict [1]. This was a challenging task: training the machines required lots of images, explosion sounds, and videos of the places where those weapons were used. Yet such material is scarce on the internet (remember that other AI algorithms are banning war content on social media). In this case, tech professionals had to be creative.

They created synthetic data (2D and 3D gun models, plus sound and image recreations) to train machines and identify attacks in 1.5 million videos recorded during the war. By mid-2021 the case was ready; they thought that showing the international community what was happening would lead to an intervention against Syria’s regime, but that was not the case [1]. AI is not enough to break the sovereignty of a state. But at least proof is being compiled to keep making the perpetrators visible.
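The synthetic-data trick above, multiplying a few real examples into many training variants, can be sketched with simple augmentations. Real pipelines render 2D/3D weapon models and recreate sounds; the toy sketch below (flips and pixel noise on a grayscale frame, with an invented `augment` helper) only illustrates the principle:

```python
import random

def augment(frame, n_variants=3, noise=5, seed=0):
    """Generate training variants of one frame: a horizontal flip plus noisy copies."""
    rng = random.Random(seed)  # fixed seed so the augmentation is reproducible
    variants = [[row[::-1] for row in frame]]  # horizontal flip
    for _ in range(n_variants):
        variants.append([[max(0, min(255, px + rng.randint(-noise, noise)))
                          for px in row] for row in frame])
    return variants
```

Each scarce real example becomes several plausible training samples, which is how the Syrian Archive team could train detectors despite takedowns starving them of real footage.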

Machines don’t do it alone; they can’t. That is something I have come to understand working here at Equinox AI Lab. It is necessary to have a creative mind behind them to solve problems (as we saw in the previous example) and also someone who checks whether the model is accurate; you have to test and prototype countless times. That’s why I believe object detection can be such a powerful tool to analyse war media content: this technology is trustworthy as long as professionals follow quality checks while training and implementing the algorithms.

Bomb attack on Kafranbel Surgical Hospital in Syria


According to Laura (a psychologist I interviewed for this article), survivors need closure. Then they can process and accept the events, with justice, reparation and guarantees of non-recurrence. As a result, flashbacks and post-traumatic stress may become less frequent in their daily lives.

It is not easy to record the death of your relatives, the destruction of your house, or the bomb attacks on the school you attended as a child, but it is one of the few ways Syrian people have to denounce crimes, or at least to keep proof for the future in the hope of justice. If they make the effort to archive traumatic media, the least we, as an external party, can do is review it, listen to it, and use it to seek justice, its primary purpose.

If there are tools to find justice, human rights workers and justice professionals should take advantage of them, always in favour of the victims. The arena in which technology, international human rights, and criminal prosecution intersect is new and growing, so they must find strategies to adapt laws and accept digital evidence more easily and quickly.


“In 2017, the International Criminal Court issued its first arrest warrant that rested primarily on social media evidence, after a video emerged on Facebook” [5]. That video showed Mahmoud al-Werfalli, a Libyan commander, shooting dead ten prisoners. According to the Irish Times, it seems that some of those videos were posted online by Werfalli’s Al-Saiqa colleagues, which means it is alleged terrorist content.

This is when we should ask ourselves: can we benefit from terrorist content? How can we teach AI to use it in our favour instead of just banning or deleting it? One solution is for the algorithms of big social media companies like YouTube or Facebook not to focus solely on deleting or banning content: first review it, notify the proper entities if it is valuable, and preserve the media for use in the survivors’ favour.

“There are, of course, procedural difficulties in this new era of ‘open-source evidence’; sometimes it is hard to verify, it’s hard to prove that it hasn’t been tampered with, and its date, time and location can be hard to establish beyond doubt” [6]. But UGC allied with AI can be a great duet.

Personally, I think it’s possible to build a trustworthy case with AI-provided evidence. The first thing we need to clarify, as Alejandro, the Data Scientist I interviewed, told me, is the role of AI when we say “AI to seek justice”. AI won’t be the judge; it is still unable to be. As I wrote before, it will take the role of evidence provider, or it will be a tool to build cases. Many human beings will collaborate in the process, and that is the synergy we have to reach in human-machine interaction.

Computers won’t replace people; they will augment their capabilities. Therefore, AI won’t be the judge; all the people who work with it will contribute to finding the truth and exposing it to a tribunal.

Let’s clarify how courts define media content evidence: “Digital evidence is ‘information and data of value to an investigation that is stored on, received, or transmitted by an electronic device’. At international organisations, chambers will examine its provenance, source or author” [9]. If they consider that it has weight in the investigation, they will use it as evidence. It is critical that judges understand how AI works, so they can understand how the evidence was collected, how models validate media information, and why it carries weight.

There are examples where media was sufficient evidence. As early as the International Military Tribunal at Nuremberg after the Second World War, a mountain of Third Reich propaganda, public campaigns, films and photographs proved the Nazis’ genocidal intent and other criminal acts [10].

A more recent example is Sweden: in 2012, the police got access to a film posted on Facebook in which a Syrian rebel participated in a serious assault, and the court accepted it as evidence to prosecute him [6]. A final example is Ukraine: in this war, people are being taught to record videos and collect witness evidence in ways that make it reliable, and AI is used to analyse Russian social media pictures and detect perpetrators. There aren’t many cases built yet, but they will come, and we will be able to watch them.

As a general principle, international courts and tribunals have three basic rules, and all evidence has to fall into one of the three categories. The first is “power-based rules”, which define the prosecutor’s authority to collect evidence. The second is “rights-based rules”, which require the prosecutor to accord certain privileges to suspects and witnesses during evidence collection. And the third is “procedural rules”, which govern the techniques the prosecutor can use to gather and preserve evidence [9]. Digital media evidence can easily fall outside all three; that is why its admissibility needs to be clearly defined.

AI and UGC will be enough to seek justice as long as the people who manage them follow protocols and discover creative ways to analyse, verify and expose what they want to highlight. It is essential that international courts create new rules and regulations to support and authenticate digital evidence. Since photographs and films have existed, they have been a rich source of evidence for prosecution in different conflicts, and witness testimony is definitely no longer the only critical body of proof.


The Syrian conflict was particular: not only did journalists and institutions document the war, but the civilian population also uploaded tons of videos, photographs and voice recordings to social media platforms and apps. These archives became a valuable asset in prosecuting perpetrators. AI has become a facilitator of truth and justice; it is now used to find conflict content online, tag specific attacks or occurrences to make investigators’ jobs easier, delete duplicates, and detect key faces and objects to understand what happened.

At the same time, victims and survivors need reparation and guarantees of non-recurrence. Justice will help them process the events, and therefore it is possible that flashbacks and post-traumatic stress will be reduced. That’s why international human rights and criminal prosecution have to find strategies to accept digital evidence more easily and quickly.

Answering the question, are UGC and AI enough to seek justice? They are. We have to keep in mind that AI’s role is not to be a judge but to be a tool that provides and authenticates evidence. This evidence can be helpful in holding natural persons and institutions accountable. But there is still a need to create new rules that take digital media evidence into account.

Finally, since the Syrian civil war, more conflicts have been recorded in real time. For example, in Ukraine, witnesses are taught to record videos correctly to make them useful evidence, and AI techniques like face recognition are used to identify victimisers. The field where international human rights, prosecution, AI and digital content intersect is growing, and we have to be prepared.

“Justice will not be served until those who are unaffected are as outraged as those who are”. Benjamin Franklin


Watch here the interviews the author, Carla Acosta, did with Alejandro Salamanca, Equinox Director and Data Scientist, and Laura Vesga, a psychologist and expert in armed conflict.


1. Abdulrahim, R. (2021, 13 February). AI Emerges as Crucial Tool for Groups Seeking Justice for Syria War Crimes. WSJ.

2. Currea-Lugo, D. V. (2019). Syria: Where hate displaced hope. AGUILAR.

3. Reid. (2022, 12 July). Syrian refugee crisis: Facts, FAQs, and how to help. Taken 10 December 2022, from

4. Syrian Archive. (s. f.).

5. Asher-Schapiro, A. B. B. (2020, 19 June). “Lost memories”: War crimes evidence threatened by AI moderation. Reuters.

6. Cluskey, P. (2017, 3 October). Social media evidence a game-changer in war crimes trial. The Irish Times.

7. Spangenberg, J. (2022, 9 March). How war videos on social media can trigger secondary trauma.

8. Dubberley, Griffin & Mert Bal. (2015). Making Secondary Trauma a Primary Issue: A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline. eyewitnessmediahub. Taken 10 December 2022, from

9. Freeman. (s. f.). Digital Evidence and War Crimes Prosecutions: The Impact of Digital Technologies on International Criminal Investigations and Trials. law net. Taken 10 December 2022, from