Defanging Disinformation’s Threat to Ukrainian Refugees

Hundreds of miles from the frontlines of the conflict, the millions of Ukrainian refugees now sheltering in Europe face ongoing threats to their safety. In the year since the invasion began, Russia has jointly deployed conventional warfare, cyberattacks and information operations, including disinformation targeting the 4.9 million refugees currently seeking protection in Europe, in its efforts to secure victory in Ukraine.

To counter the threat to both the information environment and refugees’ physical safety, we launched the largest prebunking experiment on social media to date in September. Using short videos, we sought to prebunk two emergent disinformation narratives aimed at undermining European solidarity and putting refugees in harm’s way. The experiment, which ran in two periods during the fall and winter of 2022, ultimately reached almost a third of the Polish, Czech and Slovak populations, garnering over 38 million views. These videos help individuals better identify two rhetorical strategies commonly used to spread false claims online, and thus defend themselves against manipulation.

As in previous waves of migration in Europe, which likewise spawned spikes in disinformation about migrants, the narratives surrounding Ukrainian refugees have primarily sought to portray Ukrainians as a threat to EU citizens’ health, wealth and identity. False stories, often using manipulated video and images and designed to impersonate reputable outlets such as EuroNews and the Spanish television channel RTVE, have framed Ukrainians as responsible for the reckless destruction of property, the spread of disease, and severe cutbacks in Europeans’ standard of living, though none of the claimed harms ever occurred. While these stories are imaginary, their impacts are anything but: as of September, 65 attacks on refugee shelters had been recorded in 2022 in Germany alone.

While the potential risks to refugees are dire, these disinformation narratives suffer from a critical weakness that allows them to be effectively countered. The predictable nature of disinformation campaigns, especially those focused on refugees and migrants, creates an opportunity to prebunk them, reducing their efficacy before they can take root. Prebunking — an evolution of work pioneered in the 1960s by social scientist William McGuire — seeks to blunt disinformation campaigns before individuals ever encounter them, by warning individuals of attempts to manipulate them and then providing a microdose of the false claims or tactics likely to be used to influence them.

To counter the threat of disinformation to Ukrainian refugees in Central and Eastern Europe, Jigsaw developed a series of six short videos prebunking then-emerging disinformation narratives and the rhetorical tactics used to press them. These narratives were identified through interviews we conducted with experts in Poland, Czechia, and Slovakia, including Demagog, the Polish National Research Institute NASK, and One World in Schools. Two videos, each prebunking a different narrative, ran across YouTube, Facebook, Twitter and TikTok in each of the three countries. One video focused on narratives scapegoating Ukrainian refugees for the escalating cost of living, while the other highlighted fearmongering over Ukrainian refugees’ purported violent and dangerous nature.

The videos reached 80 percent, 69 percent and 62 percent of Czech, Slovak, and Polish Facebook users, respectively, as well as 68 percent and 55 percent of Czech and Slovak Twitter users, and 50 percent of Polish TikTok users. Viewers of the videos on YouTube were later presented with a single-question survey, containing one of three different questions, to determine whether their ability to identify the misinformation tactic — fearmongering or scapegoating — had improved over a control group that did not see the video, a metric reported below as a change in discernment. Put differently, a change in discernment captures the percentage point difference between the share of viewers who could correctly identify the misinformation tactic after watching the prebunking video and the share in a group who had not seen it.
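The metric can be illustrated with a short calculation. Note that all figures here are hypothetical, chosen only to show how the percentage-point difference is computed; they are not the study's actual survey counts.

```python
# Illustrative sketch of the "change in discernment" metric.
# All counts below are hypothetical, not data from the study.

def discernment(correct: int, total: int) -> float:
    """Share of respondents who correctly identified the manipulation tactic."""
    return correct / total

# Hypothetical survey counts for a treatment group (saw the prebunking video)
# and a control group (did not see it).
treatment_rate = discernment(correct=640, total=1000)  # 0.64
control_rate = discernment(correct=560, total=1000)    # 0.56

# Change in discernment, expressed in percentage points.
change_pp = (treatment_rate - control_rate) * 100
print(f"Change in discernment: {change_pp:.1f} percentage points")  # 8.0
```

Under these made-up numbers, the video would be credited with an 8 percentage point improvement, matching the scale of the largest effects reported below.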

The measurable effects of the campaign varied between countries and videos, as well as across the questions asked to determine the effectiveness of the intervention. Ultimately, we found the share of viewers who could correctly identify these misinformation tactics increased by as much as 8 percentage points after viewing one of these videos. The full results are reported in the tables below. Note that these results apply to the full audience base — as such, some viewers may have watched only a portion of the video.

In the first phase of the campaign in Czechia and Slovakia, we ran the campaign across two audience segments: one optimized to reach the largest audience and the other optimized to reach those most likely to watch the video all the way through. The results were strongest amongst those likely to watch — and thus learn from — the complete video, which is consistent with prior experimental results in the lab. In light of the first phase results, the second phase was targeted entirely towards those most likely to complete the videos, resulting in improved discernment of both disinformation tactics in Czechia and Poland.

Comparing across countries, we found the greatest increase in discernment in Poland. The results in Czechia were more mixed. The largest increase in discernment in the study was recorded there in phase one, amongst the audience segment more likely to watch the entire video; overall, however, fewer questions indicated an improvement in viewers’ ability to discern the manipulative tactic than in Poland. There are a number of reasons why any one question may produce null results. If the question is too easy, such that anyone can answer it accurately, or too difficult, such that even those who have been prebunked cannot identify the tactic, a change in discernment will fail to appear. Further testing and refining these questions will be critical in future experiments.

In Slovakia, the campaign produced little discernible effect. Only in the first campaign, when combining both audience segments, could a measurable change be detected — an improvement of 2.2 percentage points on one question. This may have been because the videos were dubbed rather than created specifically for the Slovak market, but further research is required to determine the precise causes.

More promisingly, younger demographics showed a greater increase in discernment than older demographics on a number of questions, despite this group being more exposed to certain disinformation narratives. In particular, the share of Polish viewers aged 18–24 who could correctly identify fearmongering after watching the relevant video rose by 4.4 percentage points even though the belief that Ukrainian refugees are dangerous had already shown signs of taking root amongst this segment of the population.

There are many interventions that show tremendous promise in the lab, but fail to produce significant results in the real world, which makes on-platform experiments like these essential. While there’s far more work to be done to understand which messages are likely to be most effective in reaching people and improving their ability to discern manipulative techniques, these initial results demonstrate that prebunking has promise to scale quickly across platforms and to build resilience to disinformation even in the noisy environment of social media.

Learnings from this campaign — including efforts to simplify critical messages and iteration on the survey questions to effectively measure knowledge gain — will inform our future experiments as we seek to better understand the effectiveness of prebunking in the wild. New prebunking experiments will be launched with local partners and experts in India and Germany later this year, with additional experiments currently being planned.

By Beth Goldberg, Head of Research, Jigsaw



Jigsaw is a unit within Google that forecasts and confronts emerging threats to open societies, creating future-defining research and technology to inspire scalable solutions.
