How to Talk About Humanity and Social Media

Sarita Parikh · Published in The Startup · Feb 12, 2020 · 9 min read

Without Sounding Like Chicken Little. Or a Wonk.

[Photo: A baby chicken with a "you've got to be kidding me" expression. Credit: Jose Manuel Gelpi Diaz]

A close friend of mine recently told me he’d joined Twitter. Upon noticing my surprise, he asked, “Why the concern? I get a lot of information and I’m fine seeing ads personalized to me. The model makes sense.”

Despite my passion for technology for the common good, I struggled to articulate a response. Weighty words swirled around my head: privacy, AI, surveillance, cognitive biases, manipulation, exploitation, humanity. I realized that if I said all those words without clear background, I'd sound like Chicken Little and undermine my credibility. I also realized that I couldn't organize my thoughts without a PowerPoint deck on the oddities of human behavior and a glossary full of terms. I needed a way to describe it without technical lingo, agitation, panic, or TMA (too many acronyms).

Here is a summary, in everyday-speak, of the impacts of social media on humanity. If you’re short on time, the bold text summarizes the key ideas.

To begin, social media makes money from your attention and your data. The idea seems like a win-win: You get endless content (much of it valuable) and can share, learn, and connect — all for free. Movements have been championed through social media. Families have reconnected through social media. It’s fueled human connection, at scale, at no monetary cost to users. Quite a bit of it is fun. In exchange, you see ads (seems reasonable) and your data is sold to other businesses (a little uncomfortable, but okay). All of this seems like a rational tradeoff, and at face value, it’s worth it.

Let’s take a deeper dive.

The platforms are designed to keep you online as long as possible, to sell more ads and collect more data. How long you want to be online is not the point.

But wait a second: we're on these sites voluntarily, so we can choose how long we spend, right? Well… do any of these scenarios sound familiar? Picking up your phone for a five-minute Instagram break and, after scrolling… scrolling… realizing that 20 minutes have passed. Or watching a how-to video on YouTube and thinking, "hmmm, sure, I'll watch this next recommendation," then the next, which somehow always ends up at a Flat Earth video. Or sitting down in the evening for just one show on Netflix, which turns into "a new one is starting already, just one more," which at some point becomes "OMG, IT'S 2 AM." And if you know teens on Snapchat, then you know the power of the streak. If it's hard for adults to manage our time online, it's especially hard for kids.

How do these platforms keep us online longer than we want? First, product designers and experts in user experience and behavioral science keep advancing techniques to hold our eyeballs and our attention.

Second, and more troublesome: AI algorithms are finding patterns in what keeps us on a screen longer, patterns that humans could never detect on our own. The AI doesn't have humanity built into it (and let's face it, building humanity into AI is quite complex¹). The AI doesn't know or care whether the content is one-sided, fraught with lies and vitriol, or conspiratorial. It unwittingly exploits human vulnerabilities: if the content grabs attention and keeps us online longer, the content gets recommended. This rewards sensational but fictional content presented as fact. Abundant, sensational, viral content generates far more attention than researched, balanced content. A study from MIT found that false news stories on Twitter are 70% more likely to be retweeted than true ones.
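To make that incentive concrete, here is a minimal, purely illustrative sketch of an "engagement-first" ranking rule. It is not any platform's actual code; the items, predicted watch times, and function names are invented for the example.

```python
# Illustrative only: a toy "engagement-first" ranker.
# The items and predicted watch times below are invented; no real
# platform's code or data is represented here.

from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_minutes: float  # the model's guess at how long this keeps us watching


def rank_feed(items: list[Item]) -> list[Item]:
    # The only objective is predicted engagement. Notice what is NOT
    # in this function: accuracy, balance, or the viewer's wellbeing.
    return sorted(items, key=lambda item: item.predicted_minutes, reverse=True)


feed = rank_feed([
    Item("Carefully sourced local news report", 2.1),
    Item("Outrage-bait conspiracy video", 11.4),
    Item("How-to video you actually searched for", 4.8),
])

for item in feed:
    print(f"{item.predicted_minutes:>5.1f} min  {item.title}")
```

In this toy version, the conspiracy video lands at the top not because anyone chose it, but because the objective never asks whether the content is true.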

Perhaps the best-known example is the YouTube content of Alex Jones, who made his fortune from racist and anti-Semitic content, with so little respect for humanity that he alleged the Sandy Hook massacre was a hoax. He advanced the hoax-conspiracy model (and has since been removed from most platforms), but countless conspiracy theorists have emulated and extended his approach.

AI also rewards content we are already likely to agree with, so two people with opposing political views will see completely different, personalized content, each aligned with the views they already hold: the echo chamber effect. Seeing this kind of content repeatedly affects us in ways that are small in the moment but add up over time. It invokes our confirmation bias: the information is so precisely matched to our beliefs that our own perspective starts to feel self-evident, and we genuinely struggle to see how anyone could view the topic differently. The tracking of likes, views, and shares stokes our appetite for social approval. Immersion in this kind of content sets new social norms for behavior: FOMO, comparison, and judgment; outrage, which is more toxic than effective, becomes the "new normal."

One of the more troubling cognitive biases: when we hear the same lies over and over, they actually start to seem kind of true (the illusory truth effect). These environments are ripe for promoting fake or manipulative content, what some call "malinformation." Although there is a lot of fear about deepfake videos, shallowfakes are far more common and easier to produce. A recent example was the simple edit of slowing a Nancy Pelosi clip to 75% speed so that she appeared drunk. Despite Pelosi's long-standing abstinence (she's a teetotaler), the video garnered millions of views and left lasting impressions, especially on people inclined to disagree with her politics. And consider this: last year, Facebook removed 2.2 billion fake accounts in three months. It's not just fake news; it's also fake people.

So how does this impact us, individually and as a society? We consume content that does subtle, pernicious damage. We stay online longer and don't enjoy the extra time; in fact, it often makes us unhappier. Instead of promoting the best parts of humanity (like compassion, generosity, and understanding), the models amplify the baser parts (like outrage, superficiality, and manipulation). At scale, this has supercharged us-versus-them bubble culture. It has played a leading role in degrading trust in journalism and institutions, a loss of trust the Aspen Institute has labeled a Crisis in Democracy.

The promise of these platforms was to bring people together; it is ironic that they have created so much divisiveness. It's unlikely there was any intent to cause such wide-scale harm, unlikely that designers were sitting in conference rooms in Silicon Valley plotting large-scale damage to humanity. No Dr. Evil announcing, "Ladies and gentlemen, welcome to my underground lair." These are unintended consequences: incredibly powerful tools deployed at scale, without the risk analysis and due diligence that anything touching billions of people demands.

Okay, now let's switch to privacy. Virtually nothing you do online is private, and your data is collected, bought, and sold by companies you've never heard of. It would be one thing if what's tracked were simple: say, male, 45, two kids. But nearly everything you do online is tracked. The most popular sites and apps collect everything you've searched for, who you're communicating with and what you're communicating about, everything you've bought, the places you've been, even the letters you've typed on screen and erased. Even our TVs are tracking us. It's big-brother surveillance by private companies². Farhad Manjoo of The New York Times put it this way: "The big story is as you'd expect: that everything you do online is logged in obscene detail, that you have no privacy."

Your data is then auctioned in a digital marketplace; it is being bought and sold right now. All of these pieces of purchased data are combined to form a picture of you: not just male, 45, two kids, but also profiles like currently depressed or financially struggling. Maybe you have a squeaky-clean life, pay your taxes, floss every day, and have no skeletons in the closet. So even if being tracked is creepy, maybe you feel like it is what it is. But there are obvious security risks: someone can buy (or steal) information about you and commit countless crimes with it. For a few hundred dollars, anyone can buy your biometric information.
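As a purely illustrative sketch of what "combining pieces of purchased data" can look like, here is a toy example. The sources, field names, and inferred categories are all invented, not any data broker's real schema.

```python
# Illustrative only: joining scattered data points into one profile.
# Every source, field, and inferred label here is invented for the example.

purchased_data = [
    {"source": "shopping_app", "email": "jane@example.com",
     "recent_buys": ["sleep aid", "budget planner"]},
    {"source": "location_broker", "email": "jane@example.com",
     "late_night_pharmacy_visits": 4},
    {"source": "social_site", "email": "jane@example.com",
     "liked_topics": ["debt relief", "insomnia"]},
]


def build_profiles(records):
    # Merge every record that shares an identifier (here, an email address).
    profiles = {}
    for record in records:
        key = record["email"]
        attributes = {k: v for k, v in record.items() if k not in ("source", "email")}
        profiles.setdefault(key, {}).update(attributes)
    return profiles


def infer_segments(attributes):
    # Crude inferences of the kind described above
    # ("currently depressed", "financially struggling").
    segments = []
    if "sleep aid" in attributes.get("recent_buys", []) or "insomnia" in attributes.get("liked_topics", []):
        segments.append("possibly sleep-deprived or low mood")
    if "budget planner" in attributes.get("recent_buys", []) or "debt relief" in attributes.get("liked_topics", []):
        segments.append("possibly financially struggling")
    return segments


for email, attributes in build_profiles(purchased_data).items():
    print(email, "->", infer_segments(attributes))
```

No single record says much on its own; the uncomfortable picture emerges from the joins.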

But let's tie this back to the information presented to us online. The ads we see aren't just for goods and services; they are also for ideas. It's one thing if the data is used to target you with, say, shoe ads. It's a totally different thing when the data targets you with political ads. One example is "data-bait": sending voters false claims to collect their data, then targeting them with false or inflammatory content. In the 2016 election, Black voters were targeted with messages trying to convince them not to vote, a paid voter-suppression strategy run through Facebook campaigns.

Examples like this can occur because the platforms have almost no public responsibility. Their responsibility is to maximize profits and shareholder value. These are information platforms for billions of people, with no responsibility for integrity. Journalists and traditional media are bound by ethics and standards: truthfulness, accuracy, objectivity, impartiality, fairness, and public accountability. Social media has none of those ethics or standards. A law, Section 230 of the Communications Decency Act of 1996, grants platforms "immunity" for content that other people post. It's been labeled Power without Responsibility. This means that Facebook, which earned $22 billion in after-tax profit in 2018, has little responsibility for the content on its platform³. Section 230 is getting bipartisan attention, and it's important to note that it's a complex topic; any reform will need to address both responsibility and free speech.

Finally, the platforms have no requirement for transparency. Just as the platforms and their AI are tuned to keep us online longer, they can also be designed to encourage particular behaviors. If the AI algorithms were designed to promote a specific point of view, no one would know. Case in point: Facebook ran a voter-turnout experiment on 61 million people. Voter turnout is something most of us can rally around, but it was still a secret experiment about voting, run on 61 million people. A small group of people in Silicon Valley has the ability to secretly influence the behavior and beliefs of tens of millions of people. They have the ability to impact the public but no responsibility to the public. You can imagine dozens of self-serving scenarios, especially when their first and foremost responsibility is maximizing profit and shareholder value.

Putting it all together: the goal is to keep us online longer; our private data fuels what we see; things that keep us online longer are promoted more; much of that content exploits human vulnerabilities; there is no accountability for negative impact; and there is no transparency for the public to know what is being done. It's not one big, egregious thing: it's an accumulation of millions, maybe billions, of daily damages. Death by a thousand cuts.

Facebook, Google, and Twitter together made over $50 billion in profit in 2018. We, the people, use these sites and tools because the good parts are really good. Most of us want innovative, value-adding businesses to be financially successful. But in an environment where businesses have a laser-focused responsibility to shareholders and no responsibility to the public, the bad parts are really bad. In the old days, factories dumped toxic substances into the water. Today's externalities are far more insidious. The financial success of these businesses cannot continue to be built at humanity's expense.

This topic is much bigger than any one of us. So what can we do? Most importantly, understand it: the more each of us knows about these complex topics, the more we can collectively build public awareness. We also have elected lawmakers, and even if your trust in them is low (+1 from me), this is a bipartisan issue.

I hope this article pulls a complex picture together and gives you enough background to discuss the topic.

Let’s move technology to the common good.

If you’d like to follow this topic in more detail, great places to start include The Center for Humane Technology and The New York Times Privacy Project.

[1] There are many complex challenges. In many ways, humanity is about personal morality — is it even possible to automate personal morality and would you ever want that? Another challenge: Context is critical in interpretation — how would AI know if a post is from an activist working on true social change or a conspiracy theorist looking to make a quick buck?

[2] Your government has an astounding amount of your personal data, as well. A different topic for a different day, but important to question whether democracy can be healthy when the government increasingly knows where you are and who you are with.

[3] They are bound by laws protecting children, trademark and copyright laws, obscenity laws, privacy laws (although, you (likely) granted wide-reaching access to your data in the Terms and Conditions that you (likely) didn’t read). But no laws for lies, propaganda, or manipulation.
