Reinforcement algorithms: friend, foe, or casual bystander?

Part three in a series

Nov 8, 2018

These days, more than half of Americans rely on social media for their news. From immigration policy to the latest celebrity scandal, they look to their feeds to keep them informed.

But social media platforms aren’t news providers, and they don’t subscribe to journalistic standards. To attract ad revenue, they optimize for heightened engagement, so algorithms tend to reinforce user preferences. That’s fine when it comes to cat videos, but not so innocuous when it comes to reporting on global events, especially when one factors in the popularity of sensationalism and the prevalence of outright fabrication.

At General Assembly’s educational panel series, ‘Inside the Design Studio,’ Beyond’s design team considered how design DNA shapes the ethics of business models, including the charged and complex issue of ‘echo chambers’ and the impact they may have on civic life.

The issue with the echo

Today, the majority of audiences engage with the news stories they encounter on social media, taking a follow-up action such as liking, commenting, or sharing. Algorithms optimized for user preference leverage these signals to filter out contrasting perspectives and differing topics and deliver more of the same, a system that has become increasingly efficient over time. Researchers have found that in this way feeds can quickly become narrow in perspective, and even extreme. Many attribute the increasing divisiveness in public discourse and politics to these reinforcement algorithms, arguing that they group like with like and exclude plurality of thought and opinion.
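
The mechanism is easy to see in miniature. The sketch below is deliberately simplified and entirely hypothetical (the topics, ‘perspective’ labels, and scoring rule are invented for illustration; no platform discloses its actual ranking code), but it shows the feedback loop: because the ranker rewards matches on both topic and viewpoint, every engagement makes the feed a little narrower.

```python
from collections import Counter

# Hypothetical engagement log: (topic, perspective) pairs the user has
# liked, shared, or commented on. All labels are invented for illustration.
engagement_history = [
    ("immigration", "left"), ("immigration", "left"),
    ("economy", "left"), ("celebrity", "neutral"),
]
counts = Counter(engagement_history)

candidate_stories = [
    {"id": 1, "topic": "immigration", "perspective": "left"},
    {"id": 2, "topic": "immigration", "perspective": "right"},
    {"id": 3, "topic": "economy", "perspective": "left"},
    {"id": 4, "topic": "science", "perspective": "neutral"},
]

def engagement_score(story):
    """Score a story by how often the user engaged with this exact
    (topic, perspective) combination. Rewarding the viewpoint match,
    not just the topic, is what narrows the feed over time."""
    return counts[(story["topic"], story["perspective"])]

feed = sorted(candidate_stories, key=engagement_score, reverse=True)
print([s["id"] for s in feed])
# -> [1, 3, 2, 4]: the opposing-view story (id 2) sinks toward the bottom.
```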

For Beyond Partner Matt Basford, the issue of reinforcement algorithms raises the broader question of AI and where it will take us. “With many applications of this technology, we’re really teetering on the brink of a precipice,” he stated. “The big question facing designers, engineers, and businesses is: will we harness its positive capabilities in an ethical manner, or will we just apply it for profit, unregulated?”

Beyond’s Design Director Mitchell Hart argued that, at least in the context of digital news, AI-driven platforms can give users both what they want and what will benefit them. “It is possible to strike a balance,” he asserted. “And the businesses that do this will really deliver a distinguishing value.”

And a handful of digital news sources, like Apple’s News app, are doing just that. They provide customized content by optimizing for category preference, not ideological leaning: algorithms will serve users who read stories on the economy more of the same, but a Fox News-sourced story on the topic will appear next to one from The Washington Post. Additionally, emerging subscription models plan to give users control over many aspects of their feed, while keeping challenging sources in the mix.
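
One way to read that design: personalize on topic, stay neutral on source. The sketch below follows that pattern under the same caveat as before; the stories, sources, and preference counts are hypothetical stand-ins, not Apple News’s actual logic. Because the score never looks at the outlet, differently-leaning sources land side by side.

```python
# Hypothetical topic-affinity profile: how often the user reads each category.
preferred_topics = {"economy": 3, "technology": 1}

candidates = [
    {"headline": "Markets rally on jobs data", "topic": "economy",
     "source": "Fox News"},
    {"headline": "What the jobs report means", "topic": "economy",
     "source": "The Washington Post"},
    {"headline": "New phone launch", "topic": "technology",
     "source": "Reuters"},
]

def rank(stories, topic_prefs):
    """Rank on topic affinity only. The source never enters the score,
    so outlets with different ideological leanings sit side by side."""
    return sorted(stories, key=lambda s: topic_prefs.get(s["topic"], 0),
                  reverse=True)

for story in rank(candidates, preferred_topics):
    print(f"{story['source']}: {story['headline']}")
# Fox News and The Washington Post both surface at the top of the
# economy-heavy feed, preserving source diversity within the category.
```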

Part of the problem, or partial solution?

On the Facebooks and Twitters of the world, however, some research calls into question whether echo chambers even exist, at least for the majority of users. In a 2016 Pew study, over 50% of respondents described their Facebook and Twitter networks as containing a mix of people with a variety of political beliefs, and agreed that social media helps get new voices into the political conversation. 20% also reported that content they read on social media had changed their political opinions, a situation unlikely to arise under a status quo of homogeneous news feeds.

And even if ‘echo chambers’ do affect the makeup of news feeds, do they in fact increase intolerance and tribalism? A recent study published in the European Journal of Communication suggests otherwise, showing that online exposure to contrasting perspectives can have a counterintuitive effect. When confronted with opposing views on social media, people often dig in their heels rather than reconsider, an emotionally driven behavior psychologists call ‘motivated reasoning.’ Duke University research similarly points to the practice of online ‘self-licensing,’ in which users give themselves mental permission to retain or even double down on a prior opinion as their ‘reward’ for considering, however briefly, an opposing view.

In fact, the nature of human behavior online, and the fact that people spend more and more time interacting with each other in digital communities, may well be contributing to the general rise in divisiveness. Humans evolved to survive better in groups; because of this, generosity and congeniality typically characterize our initial, unprocessed social responses. But digital interactions allow for additional reaction time, as well as a level of anonymity, which may support more selfish and even antagonistic behaviors. Group dynamics, including how we protect and gain status in groups, differ online as well, and these can motivate digital behaviors that would not ordinarily be tolerated in the analog world.

Researchers at Yale’s Human Nature Lab, studying the dynamics of online networks, have learned how to manipulate their character and can tip the scales for good or ill. For example, they have designed algorithms that predict when trolling or bullying will likely occur and can intervene preemptively, effectively using bots to remind humans of their own humanity. But this practice gives pause: the engineering of behavior in a free society, however well intentioned, carries a sinister cast.
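
The shape of such an intervention can be sketched in a few lines, though the sketch below is emphatically not the Human Nature Lab’s implementation: a keyword list stands in for what would be a trained classifier, and the threshold and bot wording are placeholders. It only illustrates the predict-then-nudge pattern.

```python
# Crude stand-in for a learned toxicity model: in practice this would be
# a classifier trained on labeled conversations, not a keyword list.
HOSTILE_TERMS = {"idiot", "moron", "shut up"}

def hostility_score(text):
    """Fraction of hostile terms present in a draft comment."""
    lowered = text.lower()
    return sum(term in lowered for term in HOSTILE_TERMS) / len(HOSTILE_TERMS)

def maybe_intervene(comment, threshold=0.3):
    """If the comment looks likely to escalate, return a bot nudge to show
    the author before the comment posts; otherwise return None."""
    if hostility_score(comment) >= threshold:
        return "Bot: there's a person on the other end of this thread. Post anyway?"
    return None

print(maybe_intervene("You're an idiot, just shut up."))
# -> the nudge fires, giving the author a moment to reconsider.
```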

Not surprisingly, Beyond’s UX Designer Emma Netland applauded this end but adjusted the means. “From a design ethics perspective, I 100% agree with the yardstick: if it’s ok in the analog world, it’s ok in the digital one. That should be our rule of thumb. But adding transparency here is key. You can encourage engagement without manipulating the user. And users will value your transparency; it will serve as a differentiator.”

And for those uneasy with benevolent robots keeping it real in their Facebook feeds? Never fear. Scientists point out that analog social behavior evolved over thousands of years; social networking, as we know it, has been here for about 30. Researchers are optimistic we will develop human self-regulation for our digital communities over time. And according to Mitchell, “With our accelerated pace of adaptation, as well as the conscious investment of designers, businesses, users, and even government in the process, we can make that time come sooner rather than later.”

Originally published at bynd.com.
