What Algorithms Don’t Know About Online Behavior
What do social-media users really want? It’s complicated. Algorithms monitor our online behavior, then show us content based on the preferences they infer. But what if that’s only half the equation?
By Peter Krass
You’re at a party when the host offers you a snack choice: potato chips or green salad?
Sure, the salad is healthier, and if you choose it, you’ll feel proud of yourself. But those chips look awfully good — even if later you’ll feel unhappy.
That scenario, says MIT professor Manish Raghavan, can help to both explain and resolve a serious mismatch on social media. While platforms work hard to optimize their content to meet user preferences, many users continue to be unhappy.
Raghavan, a professor in MIT’s Sloan School of Management and Department of Electrical Engineering and Computer Science, presented his ideas during a recent MIT Initiative on the Digital Economy (IDE) lunchtime seminar, “The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization.” The presentation was based on research Raghavan conducted with two colleagues, Jon Kleinberg and Sendhil Mullainathan, that draws on sociology and psychology to explain online behavior.
“There’s a pervasive feeling that something is broken with social media,” Raghavan said during his presentation, but the flaw may not be solely technical. Social-media platforms can spread misinformation, they often leave users feeling unhappy, and in extreme cases they may even lead to emotional breakdowns. At the same time, these platforms have enormous budgets, staffs, influence and other resources: enough, presumably, to fix what ails them and their users.
Values vs. Behavior
What’s causing the mismatch between user needs and platform recommendations? Past explanations include the existence of nefarious actors, the transfer of bad offline behavior to online, the idea that social-media platforms are inherently addictive, and the claim that the companies are simply too greedy to change.
But Raghavan offered a different class of explanation: internal conflicts between a person’s values and behaviors.
“People make poor decisions online,” he said. “Later, they’re unhappy with what they did.”
The problem starts with technical foundations. Many social-media algorithms, Raghavan explained, are built on a concept known as the revealed-preference assumption: people do what they want to do. Working from this assumption, social-media algorithms first observe a user’s behavior on a site. Then, based on that behavior, they infer the user’s preferences and show more content to satisfy those preferences going forward.
For example, say you spend a lot of time on Twitter looking at tweets about baseball. Based on that behavior, Twitter’s algorithms will make sure you see more baseball-related tweets in the future. So far, so good.
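In code, that feedback loop is short. Here is a minimal sketch, assuming a toy topic-affinity model (the class, topics, and numbers are illustrative, not any platform’s actual system): engagement is recorded as revealed preference, and the ranker simply serves more of whatever the user engaged with.

```python
from collections import defaultdict

# Minimal sketch of ranking under the revealed-preference assumption.
# All names and numbers here are illustrative, not a real platform's code.

class EngagementRanker:
    def __init__(self):
        # Inferred preference per topic, learned purely from behavior.
        self.affinity = defaultdict(float)

    def observe(self, topic: str, seconds_spent: float) -> None:
        # Revealed preference: time spent is taken to *be* what the user wants.
        self.affinity[topic] += seconds_spent

    def rank(self, candidates: list[str]) -> list[str]:
        # Serve more of whatever the user engaged with before.
        return sorted(candidates, key=lambda t: self.affinity[t], reverse=True)

ranker = EngagementRanker()
ranker.observe("baseball", 600)  # ten minutes spent on baseball tweets
ranker.observe("work-news", 30)  # thirty seconds on work-related posts
print(ranker.rank(["work-news", "baseball", "cooking"]))
# -> ['baseball', 'work-news', 'cooking']: the feed doubles down on baseball.
```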
But what if you’re looking at those baseball tweets during the workday, when you really should be crunching numbers for a business account? That exposes a gap between your values and your behavior. You want to succeed at work (a value), yet you’re stealing time during the workday to check baseball scores (a behavior). And the social-media platform isn’t helping you to correct your behavior. “This is problematic,” Raghavan said.
“If we did a better job of understanding what people actually want, we might mitigate some of our problems.”
Multiple Selves at Play
Raghavan introduced a second concept, multiple-selves models, to explain why people sometimes engage in behaviors that undermine their own values.
In a nutshell, it suggests that we’re constantly navigating between different internal systems. What’s known as System 1 is impulsive, driven by instant gratification and what we’d like to do in the moment (more potato chips!). System 2, by contrast, is thoughtful and longer-range, an expression of a person’s true preferences (pass the salad).
Essentially, it’s a battle between things we want in the moment and things we think we should have long-term.
The two systems also affect how we feel after engaging in various activities. Act out a System 1 preference — check out more baseball tweets instead of working — and you’ll probably end up feeling bad. Engage instead in a System 2 behavior — get your work done — and you’ll likely feel good.
Social-media algorithms do a good job of measuring System 1 but a poor job with the subtleties of System 2. Essentially, they’re solving only half the problem, Raghavan said: identifying users’ behaviors while missing their true values and preferences.
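A small worked example makes the gap visible. The items and both scores below are invented for illustration; no real platform publishes such numbers. Ranking the same items by in-the-moment engagement (System 1) and by how users feel about the time afterward (System 2) can produce opposite orderings:

```python
# Illustrative numbers only: each item gets a System 1 signal
# (in-the-moment engagement) and a System 2 signal (how the user
# feels about the time spent, afterward). Neither column is real data.
items = {
    "baseball-highlights": {"engagement": 0.90, "later_satisfaction": 0.3},
    "work-tutorial":       {"engagement": 0.40, "later_satisfaction": 0.8},
    "rage-bait-thread":    {"engagement": 0.95, "later_satisfaction": 0.1},
}

by_system1 = sorted(items, key=lambda i: items[i]["engagement"], reverse=True)
by_system2 = sorted(items, key=lambda i: items[i]["later_satisfaction"], reverse=True)

print(by_system1)  # ['rage-bait-thread', 'baseball-highlights', 'work-tutorial']
print(by_system2)  # ['work-tutorial', 'baseball-highlights', 'rage-bait-thread']
# An engagement-only objective optimizes the first ordering and
# never observes the second: the missing half of the problem.
```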
The Path to Positive Experiences
The challenge for platforms is to determine System 2 preferences and values even when people’s behavior mostly follows System 1 impulses. A solution could lead to a more satisfying user experience, one that leaves users feeling happier and better protected on a site.
Many platforms already run System 2-driven surveys, but more can be done with the results, Raghavan said.
For example, a social-media platform could ask, “Are you happy with how you just spent time on our platform?” Then, instead of merely averaging behavioral data (System 1) with survey data (System 2), the site could work to understand the difference between the kinds of content that reflect true values and the kinds that do not.
“In general, current survey data is too sparse to directly use for optimization, but it may help us understand what kinds of content we can trust based on behavior (salad) versus the kinds of content where we need to be more skeptical of behavior (chips),” he said.
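One hypothetical way to act on that idea is sketched below; the survey format, category names, and fallback prior are assumptions made for illustration, not a method from the talk. Sparse “Were you happy afterward?” answers are used not as an optimization target but to decide how far to trust engagement within each content category:

```python
from collections import defaultdict

# Hypothetical sketch: use sparse post-session survey answers to decide
# how much to trust engagement per content category. Category names,
# data, and the neutral prior are illustrative assumptions.

survey = [  # (category, happy_with_time_spent)
    ("salad-content", True), ("salad-content", True),
    ("chips-content", False), ("chips-content", False), ("chips-content", True),
]

totals, happy = defaultdict(int), defaultdict(int)
for category, was_happy in survey:
    totals[category] += 1
    happy[category] += was_happy

def trust(category: str) -> float:
    # Fraction of surveyed sessions the user felt good about afterward;
    # with no survey data at all, fall back to a neutral prior of 0.5.
    return happy[category] / totals[category] if totals[category] else 0.5

def adjusted_score(category: str, engagement: float) -> float:
    # Discount raw engagement where behavior is a poor proxy for values.
    return engagement * trust(category)

print(adjusted_score("salad-content", 0.4))  # 0.4  -- engagement trusted
print(adjusted_score("chips-content", 0.9))  # ~0.3 -- engagement discounted
```

The point is not the arithmetic but the division of labor: behavior supplies the dense signal, and sparse surveys calibrate where that signal can be believed.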
Evolution of Algorithmic Inferences
Social-media platforms could also make different design choices. Already, Instagram offers an optional “take a break” message that users can set to appear at a regular interval, say, every 10 minutes. This engages the user’s System 2, essentially asking, “Do you want to spend more time on the platform, or do you want to stop and do something else?”
By contrast, YouTube’s Autoplay, a design feature that automatically runs one video after another, engages System 1. The user mindlessly keeps watching — most likely while munching a big bag of potato chips.
Watch Manish Raghavan’s IDE lunch seminar on demand: The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization
Peter Krass is a contributing writer and editor to MIT IDE.