What Happens When the Internet Comes for Your Brand?

Renee DiResta
Published in Media Genius · Sep 20, 2021

Communications professionals are well aware that an angry group of people on social media can quickly ruin their day.

Often less clear is why the brand or company they work for is under attack.

Is it a group of legitimately upset customers? A fauxtrage effort perpetrated by trolls? An influence campaign by a state actor with complex incentives?

Disinformation campaigns and false or misleading stories are a very real issue for brands in the age of high-velocity, viral information. That initial handful of angry accounts can quickly snowball into a vast online crowd. Media coverage focuses primarily on political and social-issue mis- and disinformation campaigns. There's a generalized awareness that social media firestorms pose a reputational threat to brands too, but not a lot of clarity around the specifics. Many companies don't want to highlight the fact that they've been on the receiving end of these campaigns. As a result, there's little shared understanding among communications professionals about how to respond.

To inject some clarity into the dynamics in play, let’s start with some definitions:

Misinformation is information that’s false or misleading but spread inadvertently. Often, the people who are sharing the content are sincerely motivated to help their community. They might share a false rumor — for example, that a food brand’s product made someone sick — but are doing so out of genuine altruism.

Disinformation is distinctly different in its intent: the sharers, whether foreign or domestic, know that what they're spreading is false or misleading. They want to manipulate people into believing that their claims are true, and want them to share the false content. They might want to erode customer confidence in, or affinity for, the targeted company or a prominent product. They might want to trigger an outcry for a boycott. Sometimes, they use inauthentic accounts to amplify a small but authentic existing movement that's aligned with their goals, making it look bigger than it really is. These behaviors attempt to motivate people to form a particular perspective or take an action in line with the manipulator's aims.

These terms are (at least somewhat) neatly defined in academia. But in practice, things are far muddier.

When a communications professional is looking at a screen of angry social media accounts, it's often difficult to determine what they're contending with. A campaign that seems deliberate and coordinated could actually be precipitated by inadvertent misinformation. Activist groups who genuinely believe a rumor might choose to amplify it. Or the story may not be false or misleading at all: it might be mostly true or based on a real incident, but reframed or co-opted by bad-faith accounts to appeal to a particular political agenda.

Sometimes, an attack on a brand comes out of nowhere. The main target may be something or someone else entirely, and the brand is inadvertently sucked into an online battle through no fault of its own. Conspiracy theorists, for example, have suddenly and unexpectedly worked brands ranging from Cemex to Wayfair into elaborate and misguided theories about human trafficking, purely because an online influencer within the QAnon community decided that there was something requiring 'investigation'. And, at their worst, these attacks do not stay confined online; they lead to destruction and harm in the real world.

Today's internet is populated by cohorts: organized, networked communities united by opinions and beliefs. Some are activists whose members are sincerely committed to online action to raise public awareness about key topics. They may want to persuade people toward their point of view, or to rile them up and galvanize them against a particular target. These cohorts may collaborate at times, pushing hashtags, memes, and themes they care about into the feeds of their target audiences. They may also coordinate their timing, causing hashtags containing a brand name to go viral and attracting the attention of tens of thousands of people in a very short period of time. The line between coordinated activism and something more manipulative can be fuzzy; even social media moderation teams sometimes struggle to differentiate between the two.
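To make the timing signal concrete, here is a minimal sketch of the kind of burst check an analyst might run. Everything in it is illustrative: the post records are made up, and the window size and account threshold are arbitrary. Real coordination detection is far more involved, and a synchronized burst alone doesn't prove manipulation; sincere activists coordinate their timing too.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Each post is (account_id, hashtag, timestamp). All data here is hypothetical.
posts = [
    ("acct_01", "#BrandFail", datetime(2021, 9, 20, 14, 0)),
    ("acct_02", "#BrandFail", datetime(2021, 9, 20, 14, 2)),
    ("acct_03", "#BrandFail", datetime(2021, 9, 20, 14, 3)),
    ("acct_04", "#BrandFail", datetime(2021, 9, 20, 14, 4)),
    ("acct_05", "#OtherTag",  datetime(2021, 9, 20, 9, 0)),
]

WINDOW = timedelta(minutes=10)   # illustrative burst window
MIN_ACCOUNTS = 3                 # illustrative "suspiciously synchronized" threshold

def find_synchronized_bursts(posts):
    """Flag hashtags where many distinct accounts post within one short window."""
    by_tag = defaultdict(list)
    for account, tag, ts in posts:
        by_tag[tag].append((ts, account))

    flagged = {}
    for tag, events in by_tag.items():
        events.sort()
        # Slide a window across the sorted timestamps, counting distinct accounts.
        for i, (start, _) in enumerate(events):
            accounts = {a for t, a in events[i:] if t - start <= WINDOW}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged[tag] = sorted(accounts)
                break
    return flagged

print(find_synchronized_bursts(posts))
# {'#BrandFail': ['acct_01', 'acct_02', 'acct_03', 'acct_04']}
```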

It's not always easy to pinpoint what precipitates an online firestorm. It could stem from a communication or an action a brand took. It could be fueled by a new marketing campaign or a strong public stance on a social justice issue. Other times, the outrage, and the hashtag that corrals it for public consumption, may have been deliberately manufactured by trolls. But when a brand goes viral for negative reasons, the commentary often quickly moves beyond the initial cluster of accounts as real sympathizers pick it up. It hops from cohort to cohort, and after enough hops, context degrades. Any correction or statement from the impacted brand, however, will likely not go viral. So determining how to respond, or whether to respond at all, in the early stages is key.

Easier said than done in the heat of a targeted campaign. Assessing the content (the actual dialogue, memes, or videos in question) is only one piece of the puzzle. To have a full picture, it's important to understand the actors involved: the motivations and tactics of the factions and cohorts, the key influencers, and the narratives at play.

This is critical before you broadcast your ESG agenda or brand purpose.

Some communities are activist but very insular: what they amplify is unlikely to break out of the bubble or echo chamber and reach mainstream audiences. Other cohorts, though, do have the potential to erode trust or to disrupt the conversation between customer and brand. Viral lies may shape what consumers believe about the company and influence purchasing decisions.

Most social listening tools can't differentiate between sincere customers and troll accounts. Nor can they identify whether a narrative is largely confined within one online cohort or spreading across many. And while they may attempt to flag "bots" or automated accounts, bot-spotting is neither consistently reliable nor particularly effective.
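To see why bot-spotting is shaky, consider a toy scoring heuristic. This is a sketch only: the account fields, weights, and thresholds are invented, and no real tool works exactly this way. The point is that every signal is weak on its own, so a crude score misfires in both directions.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # All fields and thresholds below are hypothetical, for illustration only.
    age_days: int          # account age
    posts_per_day: float   # average posting rate
    followers: int
    following: int
    has_default_avatar: bool

def naive_bot_score(acct: Account) -> float:
    """A toy heuristic of the kind simple tools approximate.
    Each weak signal adds to the score; none is conclusive on its own."""
    score = 0.0
    if acct.age_days < 30:
        score += 0.3            # brand-new account: weakly suspicious
    if acct.posts_per_day > 50:
        score += 0.3            # superhuman posting volume
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        score += 0.2            # follows many, followed by almost no one
    if acct.has_default_avatar:
        score += 0.2            # no profile photo
    return score

# A sincere new customer can look "bot-like", while a curated sockpuppet
# with an aged account and a profile photo sails through. That is why
# bot-spotting alone is a shaky basis for dismissing an online outcry.
angry_new_customer = Account(age_days=5, posts_per_day=3, followers=40,
                             following=600, has_default_avatar=True)
aged_sockpuppet = Account(age_days=900, posts_per_day=8, followers=1200,
                          following=1100, has_default_avatar=False)

print(naive_bot_score(angry_new_customer))  # 0.5 -- likely flagged (false positive)
print(naive_bot_score(aged_sockpuppet))     # 0.0 -- sails through (false negative)
```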

This level of vulnerability is unsettling. Online cohorts are unpredictable, and brands can and do become the inadvertent targets of massively viral false or misleading narratives. But a company that's aware of the risk can ask the right questions and develop a plan for how to react before it's needed. Crisis communications and cybersecurity offer a model: plan in advance. Scenario planning or "red teaming" can help teams think through the types of narratives that might appear in an online campaign.

The emerging concept of Media Security is meant to facilitate conversation about those specific preparation and mitigation strategies, and how they can be applied to manage risk stemming from malign content and digital activism.
