Originally, this story was going to be about how we can avoid Big Data Brother online — or at least be a little less vulnerable to it. I had several points planned out: an explanation of how ad retargeting works, including why Facebook ads feel like Facebook is listening to you through your smartphone’s microphone, plus a handful of tips, tricks, and best practices to deflect at least some of the surveillance coming from social media and other digital data-gathering companies. It was supposed to be a simple, straightforward “digital hygiene and you” piece.

Then Cambridge Analytica happened.

In March, it was revealed that political data analytics firm Cambridge Analytica, which ran data operations for Donald Trump’s 2016 presidential campaign (and counts the conservative billionaires in the Mercer family among its major backers), had been able to gather and improperly retain data on an estimated 50 million Facebook users through a quiz app downloaded by just 270,000 people, whose friends’ profiles the app harvested as well. A few weeks later, that estimate skyrocketed to 87 million users, reportedly including Facebook CEO Mark Zuckerberg himself. The scandal has (rightly!) rattled a country already grappling with the reality that propaganda fed to us by Russian trolls through the same platform likely affected the outcome of the 2016 presidential election.

For many of us who may have barely spared a passing thought for privacy concerns online until now, seeing the consequences of mass data collection play out in our own backyards has become no less than a watershed moment in our collective life online. What’s more, we’ve now witnessed it play out on the floor of a hearing room in the U.S. Congress: Earlier this month, over the course of two days and more than 10 hours, nearly 100 senators and representatives grilled Zuckerberg on the matter. The interrogation and related reports have revealed more than most people ever wanted to know about how much information Facebook stores about us and “makes available” (that is, sells access to, indirectly enough to avoid liability) to virtually any advertiser, even about people who have never used the platform — and worse, they have revealed just how hard Facebook still plans to fight to keep it that way. The short answer to my original idea, it seems, is that for an individual, avoiding the all-seeing corporate eye online is a virtually useless pursuit.

And, of course, Facebook is hardly the only villain at play here. As scholar and USC professor Safiya Umoja Noble argues in her latest book, Algorithms of Oppression, even the internet tools we widely accept as “objective” in today’s world were created and continue to be developed by flawed, demonstrably discriminatory companies. Citing last year’s mess with now-former Google employee James Damore and his anti-woman “memo” as an example, Noble explains in the book’s introduction:

Some of the very people who are developing search algorithms and architecture are willing to promote sexist and racist attitudes openly at work and beyond, while we are supposed to believe that these same employees are developing “neutral” or “objective” decision-making tools. Human beings are developing the digital platforms we use, and as I present evidence of the recklessness and lack of regard that is often shown to women and people of color in some of the output of these systems, it will become increasingly difficult for technology companies to separate their systematic and inequitable employment practices, and the far-right ideological bents of some of their employees, from the products they make for the public.

In other words, the very foundations of the digital space where we all congregate are poison: Structurally, our internet, just like much of our meatspace society, is fundamentally hostile to anyone who isn’t a white man. Hatred and contempt have been coded into its DNA.

Is there any hope, then, of living ethically online? With an internet that forces you to render yourself complicit in order to connect at all? How does one practice good-faith policies on platforms that have been built on a morally reprehensible framework? Platforms whose owners not only don’t respect their users, but actively profit — indeed, almost exclusively profit — from exploiting those users’ trust?

To be honest, I don’t know the answers to these questions. But under circumstances like these, in which the game is utterly, hopelessly rigged, isn’t one of the best things you can do to understand as much as you can about the system that’s screwing you?

Perhaps, then, in an era of runaway technological innovation and exploitation, self-education, including our collective education of each other, must become a virtue in and of itself. Reading terms of service and privacy policies, parsing exactly what companies will take from you and how they might legally sell it or otherwise use it against you — and then simplifying and sharing that knowledge with those around you — becomes not just responsible but also decent. Call it proper online etiquette if you must, but the longer we as laypeople allow ourselves and our fellow citizens to remain in the dark about technology and the companies that continuously thrust it into our lives, the more vulnerable we make each other to the most corrupt forms of profiteering and manipulation. Better the devil you know, always.

This may sound like some sort of digital bootstrapping — falling back on personal responsibility as a moral good when institutions fail us — but there’s one significant difference here: An informed public is, ultimately, what changes institutions. Silicon Valley corporations have been able to continue building fundamentally unequal, exploitative systems that we grow more reliant upon every day largely because the layperson doesn’t understand what they do. It’s a problem with all technology, really, as we accelerate exponentially into the future. As Ian Bogost explained in The Atlantic last year, the concept of “precarity,” which describes the economic and social conditions that force average people to accept uncertainty and ignorance as the cost of progress, has been pushed into every corner of our culture by tech companies that create increasingly advanced products that average people can never hope to understand, much less fight:

Once decoupled from their economic motivations, devices like automatic-flush toilets acclimate their users to apparatuses that don’t serve users well, in order that they might serve other actors — among them corporations and the sphere of technology itself. In so doing, they make that uncertainty feel normal.

Would the world, then, be a little bit less of a nightmare if individuals knew, at the very least, exactly how these companies and platforms were taking advantage of them? If we strove to ensure that our family and friends and neighbors knew what they were signing up for, too? When the time came to effect change, to vote or otherwise respond collectively in the public sphere, what might be possible if people knew exactly what needed to be changed?

Within a few decades, our elected officials will all be from a generation that understands a lot more about technology than this one. Whether those representatives will understand the ins and outs of our digital world remains to be seen; it’s possible many of them will remain willfully in the dark. But wouldn’t you rather vote for someone who took the time to understand the threats to their constituents’ well-being, and to democracy itself, however complicated those threats may be? And how are we supposed to do that if we don’t know what we’re asking of them?

It’s a heavy, arduous, even annoying proposal, I know. Up until now, terms of service and privacy policies have been purposely stuffed with more unintelligible jargon than the average person could ever hope to understand, even if they took the time to scroll for an hour and read it all. And on a larger scale, learning how the internet works — how any new technology or algorithm works, for that matter, with products growing more sophisticated by the second — demands a daunting amount of foundational knowledge; it often feels like you need a degree in computer science or engineering to grasp any of it. We’re so far removed from the basics that it’s easy to tire of the process of self-education before we even begin.

There are no shortcuts, but conditions are getting somewhat less painful. On May 25, a European law called the General Data Protection Regulation (GDPR) will go into effect. The rule, which applies to every internet company that handles the data of people in Europe (including most of the social media platforms we use regularly), requires companies to be radically more up-front about exactly what data they’re collecting from users — and to explicitly ask permission to do so.

“The GDPR’s idea of consent requires a lot more than previous regulations, which means companies will be asking permission to collect your data a lot more often,” explains my Verge colleague Russell Brandom. “In concrete terms, that means a lot more ‘click to proceed’ boxes, although the transparency requirements mean the text inside may be a little clearer than you’re used to.”
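To make that concrete, here is a rough sketch, in TypeScript with entirely hypothetical names, of what per-purpose, opt-in consent looks like from the developer’s side under rules like the GDPR: nothing that collects data runs until the user has explicitly said yes to that specific use.

```typescript
// A rough sketch of GDPR-style opt-in consent gating. The names here
// (ConsentManager, ConsentPurpose, etc.) are hypothetical, not any real
// platform's API; the point is that collection is off until the user
// explicitly opts in to each purpose.

type ConsentPurpose = "analytics" | "advertising" | "personalization";

interface ConsentRecord {
  purpose: ConsentPurpose;
  granted: boolean;
  timestamp: string; // when the choice was made, kept for auditability
}

class ConsentManager {
  private records = new Map<ConsentPurpose, ConsentRecord>();

  // Record an explicit, per-purpose choice; under the GDPR, silence or a
  // pre-ticked box doesn't count as consent.
  record(purpose: ConsentPurpose, granted: boolean): void {
    this.records.set(purpose, {
      purpose,
      granted,
      timestamp: new Date().toISOString(),
    });
  }

  // Collection for a purpose proceeds only if the user affirmatively opted in.
  hasConsent(purpose: ConsentPurpose): boolean {
    return this.records.get(purpose)?.granted === true;
  }
}

// Usage: trackers load only after an affirmative "click to proceed."
const consent = new ConsentManager();
consent.record("analytics", true);    // user ticked the analytics box
consent.record("advertising", false); // user left ad targeting unchecked

if (consent.hasConsent("analytics")) {
  console.log("Loading analytics script...");
}
if (!consent.hasConsent("advertising")) {
  console.log("Ad-targeting pixels stay off.");
}
```

The design choice the law forces is the default: every purpose starts as “no,” and only an affirmative, recorded choice flips it.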

What’s more, people are finding new ways every day to be good by lowering the barrier to entry for the people around them, from sites like Glitch, a free community where anyone can find new apps and build their own, to apps like Grasshopper (basically Duolingo for coding), to podcasts like HowStuffWorks’ TechStuff.

The reality is that these companies will always find some new, even more ingenious loophole that lets them take advantage of whatever their customers aren’t paying attention to. A lot of the damage being done by massive, egomaniacal corporations and their leaders may be irreversible. And as the internet loves to say, there is no ethical consumption under capitalism. But as individuals, remembering that — reading the labels, knowing just how unethical our consumption is — and doing what we can, where we can, has to be good enough for now.