Defanging Disinformation’s Threat to Ukrainian Refugees

Published Feb 13, 2023 · 7 min read


An abstract image of computer interfaces hovering over a map of the world

Hundreds of miles from the frontlines of the conflict, the millions of Ukrainian refugees now sheltering in Europe face ongoing threats to their safety. In the year since the invasion began, Russia has deployed conventional warfare, cyberattacks and information operations in tandem, including disinformation targeting the 4.9 million refugees currently seeking protection in Europe, in its efforts to secure victory in Ukraine.

To counter the threat to both the information environment and refugees’ physical safety, we launched the largest prebunking experiment on social media to date in September. Using short videos, we sought to prebunk two emergent disinformation narratives aimed at undermining European solidarity and putting refugees in harm’s way. The experiment, which ran in two periods during the fall and winter of 2022, ultimately reached almost a third of the Polish, Czech and Slovak populations, garnering over 38 million views. These videos help individuals better identify two rhetorical strategies commonly used to spread false claims online and thus defend themselves against manipulation.

As in previous waves of migration in Europe, which likewise spawned spikes in disinformation surrounding migrants, disinformation narratives surrounding Ukrainian refugees have primarily sought to portray Ukrainians as a threat to EU citizens’ health, wealth and identity. False stories, often using manipulated video and images and designed to impersonate reputable outlets, including EuroNews and the Spanish television channel RTVE, have framed Ukrainians as responsible for the reckless destruction of property, the spread of disease, and severe cutbacks in Europeans’ standards of living, even though the claimed harms never occurred. While these stories are imaginary, their impacts are anything but. As of September, 65 attacks on refugee shelters had been recorded in 2022 in Germany alone.

While the potential risks to refugees are dire, these disinformation narratives suffer from a critical weakness that allows them to be effectively countered. The predictable nature of disinformation campaigns, especially those focused on refugees and migrants, creates an opportunity to prebunk them, reducing their efficacy before they can take root. Prebunking — an evolution of work pioneered in the 1960s by social scientist William McGuire — seeks to reduce the effectiveness of disinformation campaigns before individuals ever encounter them by warning individuals of attempts to manipulate them and then providing them with a microdose of the false claims or tactics likely to be used to influence them.

To counter the threat of disinformation to Ukrainian refugees in Central and Eastern Europe, Jigsaw developed a series of six short videos prebunking then-emerging disinformation narratives and the rhetorical tactics used to press them. These narratives were identified through interviews we conducted with experts in Poland, Czechia, and Slovakia, including Demagog, the Polish National Research Institute NASK, and One World in Schools. Two videos, each prebunking a different narrative, ran across YouTube, Facebook, Twitter and TikTok in each of the three countries. One video focused on narratives scapegoating Ukrainian refugees for the escalating cost of living while the other highlighted fearmongering over Ukrainian refugees’ purported violent and dangerous nature.

One of the six videos that ran during our experiment, this video highlights attempts to scapegoat refugees in Poland for the rapid rise in the cost of living.

The videos reached 80 percent, 69 percent and 62 percent of Czech, Slovak, and Polish Facebook users, respectively, as well as 68 percent and 55 percent of Czech and Slovak Twitter users, and 50 percent of Polish TikTok users. Viewers of the videos on YouTube were later presented with a single-question survey, containing one of three different questions, to determine if their ability to identify the misinformation tactic — fearmongering or scapegoating — had improved over a control group that did not see the video, a metric reported below as a change in discernment. Put differently, a change in discernment captures the percentage point difference in the share of viewers who could correctly identify the misinformation tactic after watching the prebunking video compared to a group who had not seen it.
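To make the metric concrete, the change in discernment described above can be sketched as a simple percentage-point difference between treatment and control groups. This is an illustrative sketch only, not Jigsaw's actual analysis code, and the function name and survey counts below are hypothetical.

```python
def discernment_change(treated_correct: int, treated_total: int,
                       control_correct: int, control_total: int) -> float:
    """Percentage-point difference in the share of respondents who
    correctly identified the manipulation tactic, comparing those who
    saw the prebunking video (treatment) against those who did not
    (control)."""
    treated_rate = treated_correct / treated_total
    control_rate = control_correct / control_total
    return 100 * (treated_rate - control_rate)

# Hypothetical survey counts, not real experiment data:
# 540 of 1,000 treated viewers answer correctly vs. 460 of 1,000 controls,
# an 8.0 percentage-point improvement in discernment.
print(round(discernment_change(540, 1000, 460, 1000), 1))
```

Note that a positive value means the prebunked group outperformed the control group; a value near zero, as observed for some questions, indicates no measurable effect.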

A chart indicating the overall reach of this experiment, which reached 62 percent of Facebook users and 50 percent of TikTok users in Poland, 80 percent of Facebook users and 68 percent of Twitter users in Czechia, and 69 percent of Facebook users and 55 percent of Twitter users in Slovakia.
Our experiment reached a majority of users on social media platforms in all three countries

The measurable effects of the campaign varied between countries and videos, as well as across the questions asked to determine the effectiveness of the intervention. Ultimately, we found the share of viewers who could correctly identify these misinformation tactics increased by as much as 8 percentage points after viewing one of these videos. The full results are reported in the tables below. Note that these results apply to the full audience base — as such, some viewers may have watched only a portion of the video.

A data table containing the full results from the experiment, which is explained in detail below.

In the first phase of the campaign in Czechia and Slovakia, we ran the campaign across two audience segments: one optimized to reach the largest audience and the other optimized to reach those most likely to watch the video all the way through. The results were strongest amongst those likely to watch — and thus learn from — the complete video, which is consistent with prior experimental results in the lab. In light of the first phase results, the second phase was targeted entirely towards those most likely to complete the videos, resulting in improved discernment of both disinformation tactics in Czechia and Poland.

Comparing across countries, we found the greatest increase in discernment in Poland. The results in Czechia were more mixed. The largest increase in discernment in the study was recorded there in phase one amongst the audience segment more likely to watch the entire video. Overall, however, fewer questions indicated an improvement in viewers’ ability to discern the manipulative tactic than in Poland. There are a number of reasons why any one question may produce null results. If the question is too easy, such that anyone can accurately answer it, or too difficult, such that even those who have been prebunked cannot identify the tactic, a change in discernment will fail to appear. Further testing and refining these questions will be critical in future experiments.

In Slovakia, the campaign produced little discernible effect. A measurable change was detected only in the first phase, when combining both audience segments — an improvement of 2.2 percentage points on one question. This may have been because the videos were dubbed rather than created specifically for the Slovak market, but further research is required to determine the precise causes.

More promisingly, younger demographics showed a greater increase in discernment than older demographics on a number of questions, despite this group being more exposed to certain disinformation narratives. In particular, the share of Polish viewers aged 18–24 who could correctly identify fearmongering after watching the relevant video rose by 4.4 percentage points even though the belief that Ukrainian refugees are dangerous had already shown signs of taking root amongst this segment of the population.

There are many interventions that show tremendous promise in the lab, but fail to produce significant results in the real world, which makes on-platform experiments like these essential. While there’s far more work to be done to understand which messages are likely to be most effective in reaching people and improving their ability to discern manipulative techniques, these initial results demonstrate that prebunking has promise to scale quickly across platforms and to build resilience to disinformation even in the noisy environment of social media.

Learnings from this campaign — including efforts to simplify critical messages and iteration on the survey questions to effectively measure knowledge gain — will inform our future experiments as we seek to better understand the effectiveness of prebunking in the wild. New prebunking experiments will be launched with local partners and experts in India and Germany later this year, with additional experiments currently being planned.

Update: Deeper exploration of Slovak Results

Following the completion of our prebunking campaign, we conducted a series of focus groups in Slovakia to better understand why the effects differed there compared with Czechia and Poland.

The findings from the focus groups indicate that the primary difference between the campaigns was the widespread presence of pre-existing migration narratives in Slovakia. These narratives were already being exploited in a highly polarized context at the time the campaign was live. In such a context, the Slovak focus group participants reported being skeptical of most messages about migration, regardless of the source or viewpoint expressed.

From this analysis we wish to share our core findings and hypotheses on why our results differed in this market:

  • The anti-migrant narratives we tackled in our campaign were already deeply polarized in Slovakia when the videos went live, which meant prebunking, which works by reaching people before a narrative takes hold, would be expected to have less impact.
  • Our brand and those of our partners were not as well known in Slovakia. Respondents said they would be more open to messaging on misinformation from known and trusted sources.
  • The number and type of Ukrainian refugees in Slovakia differ from the other two markets. Slovak respondents felt the messaging oversimplified the complex realities of hosting refugees as they understood them.

The focus groups also affirmed that there were elements of the videos that worked well across a diverse audience: clear plotlines, authentic settings, low cognitive load for viewers, the focus on self-defense, and clarifying how disinformation is used.

A number of elements of this research align with our findings across other markets and campaigns that we have run. Namely, we have found that prebunking manipulation techniques, rather than specific narratives, is an effective approach to navigating highly polarized contexts and complex emotions in viewers. We applied this lesson in our more recent German campaign to great effect (for reference: Prebunking Manipulation Techniques in Germany).

By Beth Goldberg, Head of Research, Jigsaw




Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.