One Weird Trick to Destroy Western Civilization

Jason Goldstein
Nov 15, 2016 · 7 min read


If you want to pick a fight amongst programmers, start talking about the comparative skill sets of self-taught developers, computer science graduates, and bootcamp grads.

I’ve had more than one person tell me it was reckless that I was writing software without a math background. On the other hand, a less mature compsci graduate may feel threatened by the idea that most of their formal training might be extraneous, and that they could pick up the same skill set in three months at General Assembly.

Luckily people grow up and realize the argument is pointless. Experience trumps origin story.

In fact, I’d argue the reverse: the quality crisis in technical circles is not about the nature of anyone’s technical knowledge. It’s that technical knowledge alone is inadequate, and that you need to know about more than code in order to fix the web as we know it.

Oh, and if we don’t fix it, we’ll probably destroy western civilization.

Good luck!

This is a hard essay to write. I’ve been kicking around the draft for months because the topic is a minefield. It will, inevitably, rub someone the wrong way.

That’s fine. If you write a good rebuttal I’ll link to it.

1.

The founding myth of technology communities is that they’re smarter, more rational, that they can tackle any problem through sheer force of reason and analysis. They’ve put a science fiction gadget in every pocket and made many things more efficient.

And yet, when asked to tackle hard problems that can’t be fixed with servers and flashy interfaces, the myth breaks down.

What is harassment? How do you deal with users who make anonymous death threats? Should conspiracy theories — fringe, radical politics — be allowed to flow freely or even dominate users’ newsfeeds? How do you serve relevant ads and search results without posing a real or perceived threat to users’ privacy? Are there ethical limits to making your service addictive?

None of these questions is easy.¹ Over and over, the technologists who build today’s web have failed to resolve them.

You might argue this phenomenon is nothing but standard bad business decisions — picking short term profit over everything else. That’s cynical and probably not accurate. The community says it wants to be a force for good, to the point it’s a joke — literally the cold open joke in the premiere of HBO’s Silicon Valley.

What if the problem is they often don’t know how to think about these problems at all?

When you have a hammer, everything looks like a nail. When your head is swimming with databases and JavaScript transpilers, what does accidentally eroding the fabric of western civilization look like to you?²

2.

Social networks essentially started as community forums. In recent years, the big ones sold their audiences on the idea that they’re a good place to read the news as well. The morning paper is part of an app, and an algorithm is the gatekeeper, responsible for presenting millions of people with information about the world around them.

By doing so, these apps have inherited editorial responsibilities, whether they wanted them or not.

Since most of these feeds are ranked by engagement, if the algorithm predicts you will engage with conspiracies and rage-bait, that’s what you’re going to see.
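To make that concrete, here is a toy sketch (Python, with invented field names; no platform publishes its actual ranking code) of what a purely engagement-ranked feed looks like. The point is what’s missing: credibility is sitting right there on the record, and the ranking never touches it.

    from dataclasses import dataclass

    @dataclass
    class Story:
        headline: str
        predicted_clicks: float  # the model's guess at how likely you are to engage
        credibility: float       # 0.0 (fabricated) to 1.0 (verified) -- never used below

    def rank_feed(stories):
        # Pure engagement ranking: sort by predicted clicks and nothing else.
        return sorted(stories, key=lambda s: s.predicted_clicks, reverse=True)

    feed = rank_feed([
        Story("City council passes budget", predicted_clicks=0.02, credibility=0.9),
        Story("THEY are coming for YOUR town", predicted_clicks=0.31, credibility=0.1),
    ])
    print([s.headline for s in feed])  # the rage-bait sorts to the top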

But what’s more, it doesn’t distinguish between reporting and activist fiction. The New York Times and NPR, which do journalism, are promoted alongside related stories from Mic and Red State, which do identity-politics-based nonsense with roughly the same accuracy and integrity as the “zine” your crazy survivalist uncle used to print in the ’90s.

To distinguish between these you have to know the brands. Sometimes I can’t even tell right away if an unfamiliar outlet is a less-famous Tribune Co. paper, a new media startup, or made-up drivel written in AP style.

And if I can’t tell, the average user is screwed.

3.

Let’s start with this as a premise: you are responsible for the stuff you put out in the world. If you store user passwords in plain text and someone hacks you and steals them, that would be on you, right?

There are two reasons to store passwords in plain text:

  1. You don’t know you should hash them.
  2. You know better and you don’t bother.

The former is lack of knowledge. The latter is recklessness. In both cases, it’s a pretty bad bug.
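For the record, the known fix is about ten lines. Here is a minimal sketch, assuming the third-party bcrypt package for Python: store only a salted, one-way hash, and verify logins by re-hashing the attempt.

    import bcrypt  # third-party: pip install bcrypt

    def store_password(plaintext: str) -> bytes:
        # Hash with a freshly generated per-password salt.
        # Only this hash goes in the database -- never the password itself.
        return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

    def check_password(attempt: str, stored_hash: bytes) -> bool:
        # bcrypt re-hashes the attempt with the salt embedded in stored_hash.
        return bcrypt.checkpw(attempt.encode("utf-8"), stored_hash)

    hashed = store_password("hunter2")
    assert check_password("hunter2", hashed)
    assert not check_password("password123", hashed)

If a breach happens anyway, the attacker gets hashes that are expensive to reverse instead of a ready-to-use list of credentials.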

If you create echo chambers and make your users addicted to them, that’s on you too.

4.

Most of the bug reports we’re hearing this week are stuck in the fake news narrative. Fake news is a problem, but it’s only a symptom of the main bug. To find that bug, you have to look deeper.

The platforms we’re talking about are communication technologies. They exist, in theory, to connect people. Why, then, do they actually create deeper and deeper addictive echo chambers?

We have a pretty good idea what the consequences are. We know that looking at the carefully pieced-together highlight reel of brunches and engagement photos makes you sad. We know that these bubbles encourage extreme beliefs that are out of touch with reality.

Consider two cases:

The first is a sad corner of the Internet known as Men’s Rights Activists, a sort of neo-misogynist support group that rationalizes its attitudes with elaborate theories about how the behavior of cave women applies to modern society. I personally don’t know any cave women, but I do know that if you have crazy ideas in the real world, it’s much easier to find someone to argue with you than to find validation.

The second is progressive politics, which has dived down a postmodern rabbit hole where everyone who doesn’t agree about microaggressions and cultural appropriation is a bigot and a horrible person. If you’re not a member (or if you just want to eat tacos in peace), it’s extremely off-putting.

But what’s more, notice the in-group/out-group mentalities. Notice the us-versus-them aspects. How exactly is the world supposed to function if we believe that everyone outside of our intellectual circle is not only wrong, but complicit in a system of evil?

Mark Zuckerberg denies that Facebook affected the election. Nonsense. The product literally shows people information that reinforces their worldview, and makes no effort to separate credible sources from fiction.

Assuming the purpose of the app is to connect people and foster communication, this is a pretty bad bug. Do they not know, or do they not care?

5.

When problems like this come up, most Internet companies throw up their hands. “We just make the tools; we’re not responsible for what our users do with them.” That’s almost fair. If I make an email service, I can’t be held accountable if someone uses their email to plan a bank heist. Without reading their email, how would I even know?

But I think these circumstances are different. When one person uses your platform to make death threats, it’s their problem. When it happens every day and your algorithms are encouraging it, it’s your problem.

There is precedent for thinking about products and the ways their creators are responsible for how they’re used. Look at cars. Early cars were extremely dangerous. Over time, engineers built machines that, in a crash, crumple in order to protect the passengers.

We expect automakers to do this. We demand it.

What does it mean for web platforms to accept responsibility? What does it mean for us to demand social apps with safety features?

6.

Some people might say: wait, these are product manager issues. Why should developers need to think about them? This attitude is wrong. Many tech companies are engineering-driven; decisions do come from the developers. Moreover, developers live in the details. No one is better positioned to foresee the emergent effects of those details.

To debug the echo chamber issue, you have to be able to think about how code and design decisions translate into real world behavior.

The fix isn’t clear-cut. If users want to share stories with each other that are demonstrably untrue, isn’t that up to them? If they don’t know or follow anyone they disagree with, what’s the right way to break the bubble?
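To illustrate, not prescribe: here is what one mitigation could look like, in the same toy Python terms as before. Both knobs (the credibility weight and the mix-in interval) are numbers I made up, and the credibility field assumes someone can score sources fairly in the first place, which is the genuinely hard part.

    from dataclasses import dataclass

    @dataclass
    class Story:
        headline: str
        predicted_clicks: float
        credibility: float  # hypothetical: assumes sources can be scored at all

    def rank_with_guardrails(in_network, out_of_network,
                             credibility_weight=0.5, mix_in_every=4):
        # Down-weight low-credibility stories instead of ranking on clicks alone.
        def score(s):
            return s.predicted_clicks * (s.credibility ** credibility_weight)

        ranked = sorted(in_network, key=score, reverse=True)

        # Splice an out-of-network story into every few slots to crack the bubble.
        for i, story in enumerate(sorted(out_of_network, key=score, reverse=True)):
            slot = (i + 1) * mix_in_every
            if slot > len(ranked):
                break
            ranked.insert(slot, story)
        return ranked

Choosing those numbers (how much credibility should count, how much unfamiliarity users will tolerate) is an editorial judgment, not an engineering one.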

To deal with this, we need to work with something beyond the immediate technical problems at hand.

Journalists are very familiar with the ins and outs of editorial responsibility. Knowledge of how editors think, and of the long tradition of keeping a clear line between advocacy, opinion, and hard news, would be a great frame of reference to have.

Knowledge of history would help. I see parallels with yellow journalism, which helped whip up public support for the Spanish-American War. No social network has tried to start a war in order to boost engagement, but the means are certainly there.

Knowledge of conspiracy theories would help. There’s actually a lot of research into why people believe crazy things about JFK or UFOs or Satanists. It often has to do with some other, more ambiguous source of existential dread. The storytelling makes the participants feel in control.

Knowledge of psychology and neuroscience would help. Why exactly does the Noise Machine™ have these effects on people? How do confirmation bias and cognitive dissonance play into the system?

7.

I’ve gone out on a limb here. I can’t prove that a lack of nontechnical knowledge is behind all this, but there’s decent evidence that it’s the case:

A few weeks ago, ProPublica noticed that Facebook allowed advertisers to target ads by ethnicity. It pointed out how easy it is, for example, to show housing ads only to white people.

As ProPublica put it: “When we showed Facebook’s racial exclusion options to prominent civil rights lawyer John Relman, he gasped and said, ‘This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find.’”

Facebook has since blocked the feature from being used in housing- or insurance-related ads.

To be fair, the developers at Facebook aren’t stupid. If I had been tasked with building Facebook’s ethnic targeting, would I have had the foresight to raise red flags?

I’d like to think so, but anyone with an ounce of humility and git blame knows that the ugliest mistakes are usually your own. Would you or I, eyeballs deep in code and on deadline, make the same mistakes?

I’m not sure, but I know how much damage we can do if we can’t see more than a few inches beyond the screen.

After all, the most terrifying sentence in the English language is “they meant no harm.”

¹ Except for threats of violence. Seriously, clear the Report Abuse queues and suspend offenders. There’s not a lot of gray area here.

² Don’t give me that look. Hyperbole is hot right now.

Crossposted from whatisjasongoldstein.com.
