Facebook questions its role as a “censor”. Google claims it is struggling to deal with sexism and the aftermath of firing James Damore. And Twitter’s former CEO wonders if it’s possible that Twitter enabled a white supremacist to win the White House.
These are complex issues, surely, yet I’m not convinced there is much mystery. Are you?
I grew up in Silicon Valley and in technology, literally picking apricots in the orchards where Apple built its first building, then spending seven years working for that company. Later, in those same City Center buildings, I helped run the North American division of Autodesk.
So, I want to believe Google, Facebook, and Twitter when they espouse “do no evil” values. The Internet is a way to democratize the power of all our ideas. But what I know is that tech “leaders” have had the chance to address these issues for at least 10 years. Yet they persist in feigning naivete instead of demonstrating leadership.
So, now is the time to rewrite the rules of engagement on tech platforms. The “leaders” apparently will not, so we, the community, must organize to make much needed changes.
Stop Pretending You Don’t Know
Pew Research has reported that 39% of people online have experienced bullying, online harassment, and intimidation. Wired reported that online harassment disproportionately affects women, people of color, and LGBT individuals, groups that together make up 69% of the US population. Reports began to emerge as early as 2007. So how can tech leaders — such fans of data, supposedly — act like they don’t know how big a problem this is?
Today’s online dumpster fire has become so much a part of Internet culture that most people don’t remember it didn’t need to be this way. Nearly fifteen years ago, in 2003, Clay Shirky, an internet pioneer, described how online groups could be their own worst enemy, arguing that “the many‑to‑many aspect of Web-enabled communication can transform a group into a mob in a very short time”. He offered solutions back then, but executives in tech took no responsibility.
I was in the audience in 2007 when Kathy Sierra, a Sun Microsystems programming instructor and Java developer, failed to show up to keynote Tim O’Reilly’s famous Web 2.0 conference. Two weeks earlier, she had found herself on the receiving end of the Web’s dark side and was forced to go into hiding after receiving online attacks, including a picture of her head in a noose and a threat to rape her by sundown. Trolls even made her home address public. “I’m afraid to leave my yard, I will never feel the same, I will never be the same,” she wrote on her personal blog.
What could have prompted this? Believe it or not, a blog post.
Kathy was a UX/UI pioneer, her work based in cognitive science research motivated by epilepsy early in her life. Her site was ranked in the top fifty tech blogs by Technorati. Kathy wrote that sites should be able to delete unproductive user comments, and that Web dialogue shouldn’t be just a free-for-all but should have guidelines. Some of her readers didn’t “agree”, and to show her their displeasure they posted hateful notes on her blog and another (now defunct) site, Mean Kids. The “protest,” as the anonymous users called it, was forcing Kathy to deal with hate in order to simply do something she loved.
Such a prominent, respected tech leader must have gotten support, right? No. The response effectively said: Hey, Kathy, that sucks for you. What those supposed leaders should have recognized is that it sucks for us. Kathy Sierra wound up shutting down her blog, and we collectively lost a fresh, strong voice in technology. Silencing her cost us her ideas. It also signaled something dangerous.
The “free speech” Internet was letting the haters win.
So, predictably, the momentum for hate grew. In 2014, Zoe Quinn, Brianna Wu, and Anita Sarkeesian became targets of the Twitter-based harassment campaign known as “Gamergate”. As women began creating games, they expanded the market and added new economic value. Yet this simple act of participating in the marketplace of ideas threatened the status quo.
Did Twitter change policies? Nope… Their response was a corporate shrug.
Naming What This All Is: Hate
I interviewed Brianna Wu, a developer (and cofounder of Giant Spacekat, an independent video game development studio), for my latest book, and she described the groups that attacked her as the “KKK of the gaming industry.”
She’s naming what all of this is: hate.
Women were the canaries in the tech industry coal mine. They warned us. But no action was taken. And so hate spread. Reddit, dubbed by some as the front page of the Internet, has been described by the Southern Poverty Law Center (SPLC) as the #1 haven for violent racists: “The world of online hate, long dominated by [obscure] website forums like Stormfront and its smaller neo-Nazi rival Vanguard News Network (VNN), has found a new — and wildly popular — home on the Internet. Reddit boasts the ninth highest Alexa Internet traffic ranking in the United States and the thirty-sixth worldwide. Many of Reddit’s racist subreddits are among its most popular.” SPLC also named Reddit as particularly misogynistic. The same tactics used by ISIS are being used on Reddit to recruit young men to hate.
As New York Times reporter Farhad Manjoo recently wrote, “When it comes to fighting white supremacists, though, much of the tech industry has long been on the sidelines. This laxity has helped create a monster.”
Choosing Money Over Meaning
What’s clear by now is that the same social platforms that enable connection to create meaning can also be used as weapons of social malice.
So, why do tech “leaders” remain (mostly) quiet, and choose to do (mostly) nothing?
Because they benefit. More outrage leads to more traffic, which leads to more advertising views for Facebook, which now controls 80% of social interactions. More confusion leads to more searches for Google, which controls 90% of all search advertising. And more POTUS tweets threatening nuclear war or announcing “new” military policy make Twitter more relevant.
Even tech companies that actively say they don’t support hate are in fact the ones that help extremist sites monetize hate. “PayPal, the payment processor, has a policy against working with sites that use its service for ‘the promotion of hate, violence, [or] racial intolerance.’ Yet it was — by far — the top tech provider to the hate sites”, reports ProPublica. Given recent public outrage, Silicon Valley leaders did kick some KKK folks offline. But that’s the rare exception, and not nearly enough to solve the larger, systemic problem.
Many will say it’s not the leaders’ problem, but the advertising-based business models that reward this outcome. That may be. If so, VCs — like Greylock, Kleiner Perkins, and Benchmark — are just as complicit, as they pocket the money instead of finding and funding better business models.
How much money? Jon Ronson, in his 2015 book So You’ve Been Publicly Shamed, measured Google’s income for one incident alone: the Justine Sacco incident, in which one person sent a tweet to her 170 followers before boarding a plane to Africa, only to have lost her job to public shaming by the time she landed. He worked with economists and online advertising experts to document that Google alone earned between $120,000 and $456,000 in just one month from this single incident. Enabling hate that oppresses 69% of the voices of the internet is a revenue opportunity worth billions, likely even trillions, of dollars.
Speaking of revenues, when it comes to affecting them, tech leaders manage to act quite quickly. We know that a reasonable degree of control is possible, such as when, during the 2016 Olympic Games, Twitter was able to determine almost instantaneously who was posting illegal videos and take them down — proving that it’s not about the social software but the social norms that are allowed and enabled.
In other words, tech leaders are not the defenders of free speech they claim to be, nor the advocates of democratizing ideas they espouse.
No, sadly, these tech “leaders” defend only what makes them money.
Twitter, Reddit, Google, Facebook, PayPal… Note that these are not small players. They are our most powerful. And regardless of anyone’s political preferences, no one can doubt that we can trace the hateful acts we’ve seen exhibited in America, most recently in Charlottesville, directly to prior hateful acts, and to the failure of tech “leaders” to act.
“This is a solved problem,” Internet entrepreneur Anil Dash wrote in 2011. “We have a way to prevent gangs of humans from acting like savage packs of animals. But the online world is just ignoring most of the lessons we’ve gathered over the last thousand or so years from disciplines like urban planning, zoning regulations and crowd control.” And so, he argued, “If you could apply these sets of principles, you could prevent the overwhelming majority of the worst behaviors on the Internet.” The examples he cites are dedicating real humans to monitoring and responding to your community, establishing community policies about what is and isn’t acceptable behavior, requiring accountable identities, implementing technology to easily identify and stop bad behaviors, and budgeting to support a good community.
Ellen Pao applied some of these strategies as CEO of Reddit in 2014–15. And they worked. The amount of hatred spewed decreased, according to research.
Applying these principles does not require revolutionary action; implementing specific policies can go a long way to prevent online interactions from spiraling into malicious gang attacks.
The Real Issue: All Our Ideas Need to Count
Tech leaders use “free speech” arguments, but those arguments don’t hold water. What is more fair to say is that tech monetizes the vast majority of us while protecting the interests of 31% of us.
Instead, the real issue at hand is equal access: the freedom to let all of our ideas count. This is not an issue of censorship, as is sometimes argued, because these are private sites, which have the ability (and already exercise it) to decide their own terms of service.
Tech “leaders” act as if it’s not a big problem, or at least not their problem. But by their lack of action, they are showing which side they are on. Whatever they are not actively and immediately fighting to fix, they are saying they find acceptable. They point to “the industry needing to change”, as if they themselves are not the industry.
This is the leadership opportunity of our time.
So, Let’s — You And I — Act On This
People gathered together in shared purpose can now do what once only large centralized organizations could. So let’s band together — in the shared purpose of having an Internet we deserve — to tell tech “leaders” we need them to step up.
Each of our ideas has to have its shot to count.
As consumers: Tweet this post at your tech leader of choice, and insist they fix this problem of online hate. They could create a commission with folks like Anil Dash, Brianna Wu, Ellen Pao, danah boyd, Clay Shirky and others, and have that commission provide policy guidance on the top 10 things tech companies can do by year-end, establishing standards that allow all ideas to be activated by the internet. An independent commission lets each company decide how best to execute; a guiding set of principles then lets consumers measure all the firms against a common framework. The neutral convening group could be the Omidyar Network, or the Kapor Center.
As workers: As people who work in the tech industry, we can ask more of our leaders. Each of the companies I cite has made *some* improvements and taken positions — for example, the major turnover at Reddit and having Ellen Pao lead change — because of worker initiative. Twitter introduced safety and reporting tools in the last six months, which, combined with back-end changes regarding suspicious account proliferation, seem to have cut down some of the issues. We need to build on this and make it the priority. As Googlers have shown us, self-organized efforts are highly effective.
As government officials where these companies operate (e.g., Kamala Harris in California): this could become your problem, and your opportunity. If it’s not actively addressed by year’s end, we’re going to need government intervention to regulate this. Because, right now, some people are being allowed their self-expression by expressly suppressing the rights of others. This is against our common interests and shared goals. It diminishes the value of each of us and limits the ideas that see the light of day. Which, of course, limits our economy, our shared prosperity, and ultimately our very democracy.
The time has come to take a stand. All of us who care about an internet that actually democratizes EACH of OUR ideas need to join together to act, now.
(I should also point out because of copyright issues that pieces of this argument are directly adapted from my latest book The Power of Onlyness: Make Your Wild Ideas Mighty Enough to Dent the World (Viking, 2017). It is a “what not to do” story called “When Meanness Rules” from Chapter 5.)