Twitter’s Move to Restrict Retweets May Make Visual Misinformation Worse

Jessica Maddox
Published in The Startup
5 min read · Oct 23, 2020

Last week, Twitter made a temporary change that adds an extra step to retweeting. Now, when users click the retweet button, they are taken directly to what was formerly the “Quote Tweet” option, which prompts them to add their own commentary to the retweet. There is a workaround, however: users can simply click “send” without adding any commentary, and the post functions like a normal retweet. Twitter made this change ahead of the 2020 U.S. Presidential Election, saying it would last through Election Week.

While nudging people to think twice before they retweet is a good thing, this move is merely a temporary band-aid that does nothing to fix the platform’s systemic issues contributing to misinformation. It does even less to fix visual misinformation, and may actually increase it.

As I’ve discussed elsewhere, visual misinformation is extremely common online, but aside from discussions of deep fakes, shallow fakes, and cheap fakes, the visual has not received much attention in this area. Visual misinformation can take the form of video manipulations made with machine learning and artificial intelligence, but most often it is simpler than that. It may be an edited photograph, something easy to accomplish given the proliferation of photo-editing software and the ease of learning it. It may also be a photo taken from its original context and repurposed in another, which is where Twitter’s move has stark implications. Miscaptioning is a subtle, simple, but extraordinarily harmful form of misinformation.

Twitter is no stranger to visual misinformation. In 2016, a photo of Australian actress Samara Weaving was taken from her Instagram and spread across Twitter and Facebook. The photo, which the actress had taken in full bloodied makeup for the television series Ash vs Evil Dead, spread with a caption saying, “the result of Fascism in America…simply because she was a Trump supporter.” The rogue post was picked up by accounts with large followings, such as Conservative Nation.

More recently, in summer 2020, the official White House Twitter account under Donald Trump tweeted a photo of bricks sitting on a city sidewalk, claiming “AntiFa and professional anarchists are invading our communities, staging bricks and weapons to instigate violence. These are acts of domestic terror. The victims are the peaceful protestors, the residents of these communities, and the brave law enforcement standing watch.”

But visual misinformation is not a partisan issue: that same summer, former U.S. Secretary of State Hillary Clinton shared a widely circulated photo of a darkened White House.

The photo was meant to criticize U.S. President Donald Trump during a period of unrest. As the BBC noted in a fact-check, a reverse image search reveals the photo was taken in 2014 and edited to make it look like the lights were out.

Twitter’s new move to curb misinformation focuses primarily on textual news. However, by prompting users to provide their own commentary before retweeting, the change may well lead them into miscaptioning. Optimistically, the prompt might encourage someone to do some research or fact-checking before posting. But based on what we know from social media research on misinformation and the speed at which content goes viral, many individuals post first and check, if at all, second. This new move makes it far too easy to retweet a picture and add commentary on the image without necessarily knowing all of the facts.

Twitter has simultaneously implemented a warning before one retweets a news or magazine article, saying “headlines don’t tell the full story. You can read the article on Twitter before retweeting.”

Images are quite similar. We’ve all heard the phrase “the camera doesn’t lie,” but cameras and pictures both lie, all of the time. They don’t do so maliciously. But to frame is always to exclude, and in deciding what to photograph and what not to, in deciding which images to post and which not to, picture-sharers make fundamentally political, cultural, and ethical decisions. These choices are exacerbated by social media platforms, which we now know privilege emotional, addictive content that breeds vitriol. And while Twitter has previously used “manipulated media” tags to flag visual content that has been edited or may be out of context, manipulated media has become a partisan, hotly contested term, with many (conservative) pundits and politicians erroneously claiming it is a silencing technique used by social media platforms. The manipulated media tag, like the temporary removal of the regular retweet, is only a band-aid from a platform that ignores its larger sociotechnical culture of misinformation.

Yes, visual misinformation can be shared with a simple retweet. But the decision to prompt commentary first means many retweets may now spread like a game of telephone: the message changes ever so slightly with each retweet until it bears no resemblance to its original context at all. When it comes to the ethics of fighting online misinformation, visual misinformation included, not all moves are equally good, and some may not be good at all.

And, as British internet journalist Chris Stokel-Walker so eloquently put it, “it is wild to me that tech is so US-centric that the world no longer has the ability to straight up RT a tweet because Americans can’t be trusted not to share disinfo before the election.”

Another day, another reason why we can’t have nice things.

Jessica Maddox
Professor of digital media studies and technology. Into all things internet and dogs.