The four quadrants of Bad Stuff Online: December 10, 2017 Snippets

As always, thanks for reading. Want Snippets delivered to your inbox, a whole day earlier? Subscribe here.

This week’s theme: could Gamergate teach us lessons about how to fight Fake News? Plus, Syapse welcomes a new VP of Oncology, Dr. Aradhana Ghosh.

Welcome back to the last issue in our series about fake news and other problems on the internet. The theme we’ve been exploring is how the modern internet and social media act as “heavy” infrastructure on top of which “light” information can travel, just as the railroads a century ago enabled lighter-weight settlement of the American frontier. The problem we have to grapple with is something we’ve dubbed “Light Tribalism”, which takes many different forms: fake news, online abuse, and much of the general awfulness on the internet all reflect how easy we’ve made it to engage in online tribal warfare. The question we’d like to wrap up with is the following: what, exactly, can large internet platforms like Twitter and YouTube reasonably do about this? Can we rescue the abundance of good on the internet from being spoiled by the bad?

Earlier this week, Ben Thompson wrote a piece that addressed a parallel issue, specifically the recent news around bad stuff on YouTube, and more broadly approached the question: “what do you do when there’s an abundance of content on the internet, most of it is good, but some of it is terrible?”

The Pollyannish Assumption | Ben Thompson

Ben’s point boils down to this: in an online forum (YouTube, for example) where there is an abundance of good content but also a significant amount of very bad content (e.g. exploitation, terrorist recruitment, and other content considered universally unacceptable) produced on an ongoing basis, relying on users for moderation is doomed to fail. The nature of recommendation engines and algorithmic content is to give people what they want, and if left alone, the current model will churn out an increasing amount of morally reprehensible content that simply lies out of sight for most. The superior strategy, he argues, is not to try to moderate every piece of new content, but rather to start with the assumption of bad actors, actively search for their output, and then delete it.

Reading Ben’s piece, my first reaction was something along the lines of, “Fine, that may work for a specific kind of problematic content (like these videos on YouTube), but it’ll never work for online harassment; it’ll never work for fake news; it’ll never work for all these other problems we have.” But that’s okay; if we want to really make progress towards solving these issues, we need to recognize that there’s not one single type of bad behavior the internet has empowered, but rather several kinds, split along a few dimensions. My best attempt at categorizing them is as follows:

The heavy infrastructure of the internet enables each of these four kinds of facilitated bad behavior, but each type demands a different response. The first dimension is Universally Objectionable versus Tribal: does essentially everyone agree that some content or comment is bad, or is it an “our side versus their side” issue? The second dimension is Pull versus Push: is demand for this bad stuff driven by the viewer who wants to see it (like fake news, or some of those horrible videos), or is it being pushed by a mob of posters who want to see people get trolled (whether out of hate, or simply for fun)? Crossing the two dimensions gives us four quadrants: Universally Objectionable & Pulled (e.g. terrorist recruitment videos), Universally Objectionable & Pushed (e.g. targeted harassment), Tribal & Pulled (e.g. fake news), and Tribal & Pushed (e.g. nasty comment-section fights).

Out of these four quadrants, I believe it’s fair to say that two of them are relatively clear-cut in terms of how we should try to solve them, while two of them are murky and have no easy, rule-based solutions. First up, let’s consider the two quadrants where we more or less have a playbook for how to act, and where we have to decide which we will tolerate and which we won’t.

I don’t expect either of these positions to be all that controversial. First, if there’s universally objectionable stuff on the internet that’s actively being sought out (the Universally Objectionable & Pulled quadrant; e.g. terrorist recruitment videos), then we have a responsibility to actively seek that content out and delete it. This is what Ben talks about in his piece, and I think he’s right, for that quadrant anyway. And second, for the Tribal & Pushed quadrant, there’s the advice we’ve all heard before: “don’t read the comments section”. People acting tribally and saying nasty, us-versus-them things to each other in places like Twitter threads and YouTube comments is just part of the internet; we need to leave it alone and accept that this is simply the way humans behave sometimes. Censorship has no place here: our right to free expression is more important than anyone’s offended sensibilities, at least so long as we’re talking about this quadrant.

But what about the other two, much more difficult quadrants: fake news on the one hand, and online threats and targeted harassment on the other? These quadrants are hard. They have no easy solutions, but doing nothing is unacceptable. They’re very different problems, but what they have in common is that they’re both about people manipulating other people online, where attackers and bad actors have an agenda and are very sophisticated in how they go about it, and where censorship only makes the problem worse. Adding new rules and regulations does not really help here; every additional rule you make is fuel for a clever troll army, an opportunity for fake news artists, or a rallying cry for tribal entertainment. Trying to specifically target and eliminate fake news more often than not gives the fake news artists exactly what they were hoping for: evidence that they’re being censored, and fuel for their argument that “you can’t trust anything!” And giving the community tools to report or block offenders doesn’t usually work either: the trolls will simply use those tools against you, reporting everything as fake, or reporting everyone they’re targeting as breaking Terms of Service agreements. We know how this playbook works. But we cannot do nothing; the status quo cannot stand. So then what?
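
To make the framework concrete, here’s a minimal sketch in Python (purely illustrative; the type names and the mapping are my own shorthand for the argument above, not any platform’s actual policy engine) of how the two dimensions combine into four quadrants, and how the responses we’ve discussed map onto them:

```python
from enum import Enum

class Objection(Enum):
    UNIVERSAL = "universally objectionable"
    TRIBAL = "tribal (our side vs. their side)"

class Direction(Enum):
    PULL = "pulled: viewers seek it out"
    PUSH = "pushed: mobs direct it at targets"

# Hypothetical mapping from quadrant to the response discussed above.
# Two quadrants have clear playbooks; the other two have no rule-based fix.
RESPONSES = {
    (Objection.UNIVERSAL, Direction.PULL):
        "proactively search it out and delete it (e.g. terrorist recruitment videos)",
    (Objection.TRIBAL, Direction.PUSH):
        "tolerate it ('don't read the comments section')",
    (Objection.TRIBAL, Direction.PULL):
        "hard quadrant: fake news -- no easy, rule-based solution",
    (Objection.UNIVERSAL, Direction.PUSH):
        "hard quadrant: targeted harassment -- no easy, rule-based solution",
}

def response(objection: Objection, direction: Direction) -> str:
    """Return the suggested response for a given quadrant."""
    return RESPONSES[(objection, direction)]

# Example: terrorist recruitment content is universally objectionable and pulled.
print(response(Objection.UNIVERSAL, Direction.PULL))
```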

I honestly believe that in this situation, the best people to turn to for advice are those who have actual skin in the game: the people who know what it’s actually like to be in the middle of this stuff, and who know firsthand which lofty ideas crash hard when they’re introduced to reality. One such person is Zoë Quinn, whom many of you will remember as the game developer who became the target of what’s now known as Gamergate. Her book Crash Override, which just came out, gives a fascinating and harrowing account of how Gamergate went down, along with an inside look into the dark depths of the internet out of which hate is spewing at an increasing rate:

Crash Override: How Gamergate (Nearly) Destroyed My Life, and How We Can Win the Fight Against Online Hate | Zoë Quinn

One of the major points she makes in the book is about how we can successfully fight this kind of behavior: don’t ignore the trolls, but don’t give them anything either. Instead, put in the work to actively help victims. If there’s media coverage, or other efforts to shine light on the issue, don’t focus on the trolling; focus on the victims and on the consequences. She writes:

The phrase “sunlight is the best disinfectant” isn’t all wrong; it depends on where you’re directing the light. Shifting the focus on reporting on abuse and bigotry away from shiny media villains we love to hate and onto the people they target and hurt isn’t just humane, it’s better at actually “exposing” the issue. Moving the focus away from the perpetrator and onto the actual effects not only allows the journalist to report on the issue at hand just as it would if they gave the press to the abuser, but it grants additional context and truth that would be otherwise lost. It pulls the issue out of the hypothetical by putting the harm to faces and names and lives, and allows people to feel the reality of it instead of getting lost in jargon, theory, and debate. It avoids the cognitive backfire effect that comes along with signal boosting lies to refute them by properly contextualizing disinformation and harm.

Can this advice be usefully applied to Fake News? Perhaps it can. We need more sunlight on the issue, and we need to shed that light carefully and thoughtfully, as Zoë tells us. Fake news and online harassment are two very different problems, but they may have more in common than we realize, and that realization may be key to figuring out how to deal with both: learning how our actions can help victims and expose issues without handing further fuel and weapons to the perpetrators. This will be a challenge for our tech giants, for the governments that will inevitably seek to regulate them, and for the citizens who populate the internet around the world. But I do believe we should agree that doing nothing is unacceptable; we’re better than that. The answers will come from people on the ground, with skin in the game, and the very same elements of the internet that introduced these problems may be our way out, if we listen to the people who’ve been there and know from firsthand experience how we can use the internet to protect and lift up one another. As Zoë writes, “Everything I have, everything good in my life, I owe to the internet’s ability to empower people like me, people who wouldn’t have a voice without it. All the garbage that is thrown at us is enabled by this broken machine, yet I firmly believe that the internet is also the best tool we have to address the problem.” We should listen.

Elsewhere in the world (including elsewhere in America):

Lost Einsteins: the innovations we’re missing | David Leonhardt

Google for India: building India-first products and features | Google India Blog

Beetles are ravaging Europe’s oldest forest. Is logging the answer? | Erik Stokstad, Science

A looming problem for American renewable energy:

Last minute provision in Senate tax bill could “devastate” renewable energy | Peter Maloney, Utility Dive

Senate bill “would pretty much blow up the tax equity market for wind” | Emma Foehringer Merchant, GTM

Strong and needed words:

Wow. Yes. A reaction to The Silence Breakers as Time’s Person of the Year | Sarah Lacy, Pando

The consent of the (un)governed | Laurie Penny, Longreads

We need leaders who see harassment for what it is — a crime | Ellen Pao, Financial Times

Maybe this isn’t real…:

Bitcoin is none of the things it was supposed to be | Adrianne Jeffries, The Outline

Threat of bots and cheating looms as HQ Trivia reaches new popularity heights | Julia Alexander

In the United States, profits from fraudulent olive oil can be more lucrative than cocaine | Nicola Twilley & Cynthia Graber, The Atlantic

Other reading from around the Internet:

2017 was the year digital ad spending finally beat TV | Peter Kafka & Rani Molla, Recode

NASA sensor to study space junk too small to be seen from space | Ilima Loomis, Science

Trees and shrubs could be less fussy about the climate than scientists thought. That might be good news as the planet warms | Nature Editorial

10 more browser tabs | Eugene Wei

Hacking the Citi Bike points system | Ian Parker, The New Yorker

Inside Oracle’s cloak-and-dagger political war with Google | Tony Romm, Recode

The cure for obesity and diabetes is processed food (Part 1): when the poison is the antidote | Cameron Sepah

Eric Housen, the Golden State Warriors’ go-to guy, doesn’t play a minute | Scott Cacciola, NYT

A few quick hits from the Social Capital family to round out the week:

First of all, we’re delighted to welcome Dr. Aradhana Ghosh to the Syapse team as VP of Oncology:

Dr. Aradhana Ghosh joins Syapse as VP of Oncology | Syapse

As a medical oncologist who has served on the Clinical Trials Steering Committee at Kaiser Permanente, Dr. Ghosh has a breadth of experience in clinical oncology and medical technology that puts her in a unique position to appreciate what Syapse brings to the table. She offers: “Precision medicine presents the most significant opportunity to improve care for cancer patients today. When I was practicing in a large health system, I wished that I had access to the precision medicine that Syapse enables. The team is taking a unique approach to breaking down barriers that will make precision medicine available to more patients, and I’m excited to help with this mission.”

For a touching and inspiring read, Jen Lutz from Glooko wrote about helping her father with his Type 2 diabetes, and about asking him to use Glooko as a tool for his ongoing management:

Asking my dad to use Glooko | Jen Lutz

You can read more personal stories from Glooko, their customers, and patients here.

And finally, a new video from Relativity Space that’s pretty incredible to watch:

What you’re looking at is Relativity’s Aeon SN005 3D-printed rocket engine going through a test from ignition to full thrust. The test took place at the Stennis Space Center, the largest rocket test complex in the United States and Relativity’s home away from home for the past while. If you’re looking for a job at Relativity, they’re hiring both in Los Angeles and at the Stennis Space Center in Mississippi.

Have a great week,

Alex & the team from Social Capital
