Tech, Democracy, and the Future
What we in the tech industry should do next.
Since the election, the news about tech’s role has hit us in waves: first there was the revelation that fake news may have helped throw things in Donald Trump’s favor. There was Facebook’s bungled response (it’s “a crazy idea” that fake news could have swung the election, said Zuck, while his engineers said the opposite). Then came revelations that Trump’s campaign used huge ad spend on Facebook and fraudy posts on Twitter with some deeply terrifying micro-targeted voter-suppression techniques to drive down the Clinton vote. Finally, it turns out to have all been part of a bigger Russian campaign involving both propaganda and hacking. As a former Facebook employee, watching this unfold has been particularly personal and painful.
We’ve seen negative campaigns before, but this is fundamentally new. Facebook, Google, and Twitter, tools we thought we built to help spread democracy around the world, were used against us by a foreign country, much as the 9/11 terrorists used our commercial airliners against our buildings, our economy, and our people.
Okay, so the election was bad for techies. Almost all of us voted against Trump, but a bunch of other people got suckered by him, so he won. But that doesn’t reflect on our industry, does it? You can’t blame a neutral platform for the actions of a few bad actors. We can clean up fake news with better detection algos. Tech companies are a force for good — they connect people. Right? This will all blow over. Right?!
Wrong. We now live in a world where racist demagogues and their dictator buddies can cynically exploit our tools to seize power. Starting to feel a bit nauseous? There is no such thing as a “neutral platform.” Facebook, Twitter and Google all profited from this perversion of democracy. You don’t get to throw up yet — there’s more. Remember all the things that Edward Snowden revealed about tech being used to spy on Americans? That’s all going to be controlled by Trump in a few weeks. Oh, and you built this.
Now you can throw up.
It’s time for us to own up to what we’ve created — a machine that’s just as good at selling Fascism to angry white people as it is at selling Tide to bored housewives. And when I say “us” I don’t just mean those of us who’ve worked at Facebook or Twitter or Google, I mean everyone in this industry, and everyone who supports these products by using them. We’re all responsible for the past, but we’re responsible for the future too. And that’s the little glimmer of hope in Trump’s America: we can build new things that will fix this.
So here, then, are four principles on which to build new products that protect democracy.
Principle One: Protect User Data
The incoming administration has pledged to target immigrants and to expand surveillance powers. No industry holds a more sensitive trove of user data than tech, and that makes protecting this information even more critical now than it was before the election. Many of us have already signed a petition refusing to participate in the creation of databases of identifying information the government could use to target individuals based on race, religion, or national origin.
This petition also lays out some principles which are a good starting place for a framework for new product development:
1) Minimize the collection and retention of user data which could facilitate ethnic or religious targeting;
2) Encrypt user data wherever possible;
3) Secure user data from hacking with security best practices;
4) Protect and reward whistleblowers and white hats who uncover flaws in your infrastructure.
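To make principle 1 concrete, here is a minimal sketch of data minimization and pseudonymization: drop the fields that could facilitate ethnic or religious targeting, and replace raw identifiers with keyed hashes so a leaked or subpoenaed table can’t be joined against a registry. The field names and `PSEUDONYM_KEY` are hypothetical; in production the key would live in a KMS, not in source code.

```python
import hmac
import hashlib

# Hypothetical server-side secret. In a real system this would be stored
# in a key-management service and rotated, never hard-coded.
PSEUDONYM_KEY = b"rotate-me-and-keep-me-out-of-source-control"

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier (email, phone) with a keyed hash.

    HMAC-SHA256 with a server-held key means the mapping cannot be
    reversed or brute-forced by anyone who obtains only the table.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimize(profile: dict) -> dict:
    """Drop fields that could facilitate ethnic or religious targeting,
    and pseudonymize direct identifiers (collect less, keep less)."""
    sensitive = {"religion", "ethnicity", "national_origin"}
    identifiers = {"email", "phone"}
    return {
        key: (pseudonymize(value) if key in identifiers else value)
        for key, value in profile.items()
        if key not in sensitive
    }

profile = {"email": "user@example.com", "religion": "private", "locale": "en-US"}
stored = minimize(profile)
print(stored)  # religion is gone; email is an opaque 64-character hash
```

The point of the sketch is the ordering: the scrubbing happens before anything is written to storage, so there is never a raw copy to hand over.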
The EFF has written a similar framework. All companies need to take a stand on principles like these and refuse efforts to build ethnic registries, as Twitter did, rather than ignore questions about them. Data protection is now a feature many more people care about; new products that do this well will attract users in droves.
Principle Two: Verify Truth
In response to the fake news crisis, Facebook has announced new features to flag bad stories and domains and provide warnings to users. This is a good start, but it’s only a start.
In a world where foreign governments use propaganda against democracies, tech companies have a responsibility to verify the truth of statements that can be viewed by millions of people, the same way they have a responsibility to keep the files they transmit clean of viruses. Since the existing platforms are desperately trying to wash their hands of content from propaganda farms, there will be many opportunities to build new products to help verify truth at scale.
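As a mechanical sketch of what “flagging bad stories and domains” can look like at the product level, here is a toy reputation check that warns users before they open a link from a flagged source. The blocklist entries and warning text are invented for illustration; a real system would draw on vetted, continuously updated fact-checking feeds rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Hypothetical reputation list, for illustration only.
FLAGGED_DOMAINS = {"example-fake-news.test", "propaganda-farm.test"}

def warning_for(url):
    """Return a user-facing warning if the link's domain is flagged,
    or None if the domain is not on the list."""
    domain = urlparse(url).netloc.lower()
    # Match the domain itself and any subdomain of it.
    if any(domain == d or domain.endswith("." + d) for d in FLAGGED_DOMAINS):
        return "Disputed source: this domain has been flagged by fact-checkers."
    return None

print(warning_for("https://www.example-fake-news.test/story"))
print(warning_for("https://news.example.org/article"))  # None: not flagged
```

Even a sketch this small shows why the problem is hard: domain reputation catches propaganda farms but says nothing about a false story on a reputable domain, which is why claim-level verification is the bigger product opportunity.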
Principle Three: Fight Cyberbullying
Throughout the campaign, we saw systematic efforts to bully opponents of Trump in ways that are unprecedented in US political history. Reports like this one, of death threats used to silence free speech, are spiking; that must stop, and tech companies can help. In addition to expanded efforts by existing companies to shut down accounts used for these purposes and loop in law enforcement, we need new products to help victims and protect everyone from these kinds of abusive attacks.
Principle Four: Make Communication Private
To quote Edward Snowden from his recent interview with Jack Dorsey:
“The same technologies that are being used to connect us, to tie us together, to let you listen to this right now, are also being used to make records about your activity. Recording the activities of someone creates vulnerabilities for them.”
Fortunately, there are a number of end-to-end encrypted messaging apps on the market now, but they generally lack the full functionality of more-popular, less-secure apps. There are also relatively few tools for larger private and/or anonymous conversations. (Perhaps someone should relaunch Secret?)
Privacy isn’t a feature that can easily be bolted onto an existing product; it’s much better to build your assumptions around it from the beginning. This means new products built to reflect the suddenly increased desire for privacy have an advantage over existing ones that are merely pivoting in that direction. This is an opportunity not just for messaging but for all communication media: video, photos, audio, and every other way we humans communicate.
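To make the “build privacy in from the beginning” point concrete: end-to-end encryption rests on key agreement, where two users derive a shared secret the platform never sees and so can never be compelled to hand over. Here is a toy Diffie-Hellman exchange; the parameters are deliberately small and this is strictly an illustration of the idea, since real apps should use vetted protocols (e.g. Signal’s), never hand-rolled crypto.

```python
import secrets

# Toy parameters: a Mersenne prime and a small generator, for illustration
# only. Real deployments use standardized groups or elliptic curves.
P = 2**127 - 1
G = 5

def keypair():
    """Generate a private exponent and the corresponding public value."""
    private = secrets.randbelow(P - 2) + 1   # kept secret on the device
    public = pow(G, private, P)              # g^private mod p, safe to share
    return private, public

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
# Both compute g^(a*b) mod p and arrive at the same secret, which was
# never transmitted and which the platform relaying the messages never sees.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret
```

The architectural consequence is the point: if the shared secret only ever exists on the two endpoints, “protect user data” stops being a policy promise and becomes a property of the design.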
To build a future that’s safe for democracy, we need to re-imagine what tech should do to support and protect people. Let’s build new products that protect user data, verify truth, fight cyberbullying and make communication private.