AI for Memers, Part I
The worlds of algorithmic content moderation and recommendation are moving closer, with meme accounts caught in the intersection.
One day, a popular meme account sent me a direct message on Instagram. “I don’t want to get deleted again, but I don’t know what triggers it. We have no idea how things work, just experimentation and word of mouth of how we think it works.”
It wasn’t the first time I had gotten a message like this. In my role as Head of Creative, Community at Instagram, and previously as a senior photo editor for National Geographic, I too found myself grappling with “the algorithm,” and how it was influencing the creative process.
I have the unusual distinction of running the two most-followed brand Instagram accounts in the world (@instagram and @natgeo), during a pivotal time when platforms scaled from human-powered content discovery to algorithmic content personalization.
In the second year after I joined Instagram, personalized ranking algorithms were introduced and everything began to change. I had relatively powerful analytical tools at the time, but the engagement data was only loosely correlated with the guessing game of what would “do well.” I worked with many content creators — from niche influencers and artists to news organizations — who began devising their own best practices to maximize engagement. I heard consistent feedback that posts weren’t seen enough, “because the algorithm changed.”
The perceived power of the algorithm was reshaping every level of the creative process. But in reality, the relationship between cause and effect is more complex. Our perception of the algorithm influences what we choose to share and consume, but it is our actual choices that train the algorithms controlling what we see. Over time, this symbiotic plight of creators and algorithms has become existential to me.
Today, I am an inaugural JSK Journalism and Human-centered Artificial Intelligence Fellow at Stanford University. I’m here developing AI for Memers, initially a research initiative on the relationship between viral creators and ranking algorithms. My ultimate goal is to help design better content recommendation systems, starting with three questions:
- How do platform changes to “the algorithm” — perceived or real — influence our sharing behaviors?
- How do creators adapt their creative process for discovery in a virtual world increasingly shaped by personalized ranking algorithms?
- What would it take to design content recommendation systems that are better for humanity?
A curator’s perspective on moderation vs. recommendation systems
My training as a curator is to methodically explore, collect, research and review a vast quantity of visual information in order to distill a final selection that conveys a theme, a story, a mood. There are technical elements to this process, but ultimately it’s a subjective endeavor that values perspective and judgment as much as the craft itself.
You see the work of curators in museums, magazines, films, libraries. More and more, you also see a curator’s work within major platform companies, as part of their integrated content recommendations: “editor’s choice,” “top ten,” “photo of the day,” etc. I like to think of these curation roles as successors to Blockbuster Video’s “Staff Picks” shelf by the register (RIP). Human curators deal in matters of taste. And more and more, we are dealing with matters of scale.
On the matter of scale, algorithms are also trained to curate photos and videos, in the form of recommendation systems. These systems help us navigate the billions of images posted daily by surfacing what we might like, based on our past behaviors. Personalized predictions improve rapidly because we train the algorithm every time we tap, don’t tap, comment, share, or follow. Enough successful predictions by the algorithm, and our content effectively goes “viral.”
Meanwhile, regardless of platform, today’s discussions around disinformation, polarization, and fake news focus on visual moderation systems. This is important work: drawing the lines for violating or non-recommended content, and setting the minimum criteria for what we should see less of on a platform.
The criteria for classifying content as eligible for algorithmic suggestion in recommendation systems are much more subjective and curious: what we want to see more of on a platform. We are in the early days of determining who gets to shape these criteria for designing recommendation algorithms, and how we can hold their algorithms accountable for taste.
But why AI for Memers?
I believe that there’s an opportunity to learn from today’s top viral creators. These memers are deploying engagement tactics frequently flagged by recommendation and moderation systems.
Meme accounts have demonstrated their mastery of the format, populating recommendation systems with content that spreads through remixes and re-shares. They can be profitable, nimble, and at times challenging. Their singular focus on going viral and their unconventional attitudes about intellectual property have disrupted traditional creative distribution industries.
But memers may have become the world’s most successful community of human curators, by today’s benchmarks of engagement and time spent. Their creative methods optimize for whatever “the algorithm” rewards, and they are quick to adjust those methods as the predictions improve. Content ranking algorithms have essentially created the environmental conditions in which meme accounts and their remixing culture can thrive.
It’s not surprising that the weapons of choice for disinformation campaigns are the visual formats perfected by memers. As platforms update their moderation systems to combat meme-based disinformation campaigns, memers of all genres can find themselves caught in the crossfire: their accounts disabled or individual posts flagged as violations.
Everyday memers are getting disabled and then reinstated with increasing frequency, on the same networks that are seeing surges in memetic disinformation campaigns. Together, these are symptoms that the design of moderation and recommendation systems is racing closer together, yet not keeping pace with the creativity of content creators.
What AI for Memers means for journalism
We have rising expectations that moderation systems will suppress our exposure to information that is illegal or harmful. At the same time, we are becoming reliant on recommendation systems to increase our exposure to information that might be entertaining or useful.
Platforms are updating these content ranking systems almost as fast as creators are inventing new ways to use them. Our current focus on strengthening moderation systems to combat disinformation and polarization is expanding the categories of content that we should see less of. But that expansion also raises more questions about the ethics of content moderation overall.
How we address these ethical questions may begin to shape the promotion of certain types of content over others, inadvertently introducing more moderation variables into the world of recommendation systems. This gradual shift would disproportionately affect journalistic and non-governmental institutions, whose broad public-interest content already struggles for attention in systems designed to reward the most personalized recommendations.
Today, platform updates and shifting trends continue to reshape the battlespace of algorithmic curation, such as Instagram’s announcement of hiding likes or the documented surge of viral content within private sharing networks. Yet disinformation campaigns continue to dodge moderation systems and enter the more subjective arena of recommendation systems. I believe that meme accounts are at the forefront of testing the limits and opportunities at this intersection. The AI for Memers project aims to better collaborate with and learn from this community.
With the support of Stanford’s HAI-JSK Fellowship, I am partnering with experts across the university to ensure that AI for Memers becomes a resource for creators, curators and journalistic institutions who are distributing information in a world that is increasingly controlled by these systems.
More broadly, I hope this project contributes to the design of content recommendation systems, inspires creators, and informs those working to identify influence operations that use meme formats. As we continue to change these curation systems, let’s remember that the systems are changing us too.
To participate in AI for Memers, please follow along for updates here.