Wait, When Did We Become “Users”?

This is an intervention. We’ve developed an abusive relationship with digital technologies that see us as users before human beings. And since when was being a “user” ever a good thing?

If you’ve recently found yourself subject to bouts of social irritability, media fatigue, or device guilt, you’re probably too dependent on a particular type of technology. Other indicators include:

  • Fear of missing out, a.k.a. “FOMO”
  • Feelings of social or cultural irrelevance
  • Resistance to new ideas
  • Social media shaming
  • Sudden desire to protest something
  • Device addiction and/or guilt
  • Accusations (giving or receiving) of racism/sexism/socialism/fundamentalism/transphobia/toxic masculinity/tribalism/solecism/stoicism/partisanship/whataboutism
  • Revenge porn
  • Light headaches

Here’s a test: look up from whatever device you’re reading this on and look around you. Chances are that someone is on some form of social media right now and that they’re probably negatively affected by what they’re seeing. The good news is, it’s not entirely our fault. The bad news is that it’s almost entirely our fault.

The combination of our dependency and technology’s indifference has put data privacy firmly at the center of our politics. It’s changed the way we think, the way we communicate, even the way we vote. In the absence of an ethical standard for digital technology, user behavior has become the arbiter of functionality and design. The issue is that when technical and economic performance become the key indicators of growth, the result is the dehumanization of people into sets of behaviors that companies utilize to increase our dependency. We gave this to them for free in exchange for the promise of a more connected and informed world, so how did we get the polar opposite?

This is a deliberate paradigm of information technologies like Facebook, Google, and Amazon. We ignored the obvious privacy risks because we assumed that Big Tech was on humanity’s side by nature, but we’ve never had any reason to believe that. And as we approach the dawn of Artificial Intelligence, Smart Cities, and presumably more Mission Impossible, it’s probably a good time to figure out whether we can be trusted with technology before we test if technology can be trusted with us.

When I moved from Sydney to Los Angeles almost a decade ago to work in the entertainment industry, social media was only just catching on as a tool for artists to promote their work. To their credit, apps like Facebook, Instagram and Twitter were invaluable resources that gave creators a platform to share their creative expression with a massive audience they could never reach on their own. Celebrities and artists have always been willing to give up a degree of personal privacy to develop their work and grow an audience. But in extending that choice to us plebs, what data-driven technology has done is demand that we perform our lives in exchange for a “lite” version of celebrity.

This type of technology is particularly damaging because just like fame, it relieves us from confronting the reality of our lived experiences by splitting them into two realities: the one we experience as a biological being, full of mistakes, pain, and growth; and the other we grant ourselves, which we prefer to treat as some kind of existential safeguard against unhappiness.

This plays perfectly into our instinct to associate happiness with wealth and fame, and moreover, wealth and fame with success. But from the Heath Ledgers and Amy Winehouses to the Trumps, Kims, and Kanyes of the world, I’ve come to find the damage of celebrity more far-reaching than it’s generally given credit for. As a result, we’ve become increasingly amateurish in recognizing the difference between what’s good for us and what’s so obviously not, and that extends into how we manage our digital lives.

Which brings us back to the inconvenient truths about our own responsibility in this. For some reason, we’ve developed the idea that information being free is somehow a good thing, even though it’s clear that by making it free we’ve diminished the value of good information. The result is that creativity and culture are increasingly commoditized to fit with the financial objectives of the platforms that distribute our information as a product.

So all this raises the question: how did we let it get to the point where Mark Zuckerberg and Jack Dorsey are testifying before Congress? Many were quick to dehumanize them, as if we haven’t been reared on their technological teat. If we are even partly to blame, how could we have protected ourselves against this? And more importantly, how can we protect against it in the future? It might help to pick apart how we first encountered social media, and to understand why driving particular types of dependent behavior became the cornerstone of an ethos that powers big tech companies and startups alike.

Remember a/s/l? It used to take a lot of trust to get those three identifiers from a stranger online. Chat services like AIM were like a personality scratch pad — you could test yourself out. It was a space where words were as good as actions, and for those of us who lacked confidence, we found redemption and community in the backlit forums of an exciting, global network — one where the cloak of anonymity was a superpower we could wield against our insecurities. No bully was too intimidating, no crush too unapproachable. It was, at its best, a safe space where we could engage with others based solely on curiosity and a desire for connection.

All too quickly, we learnt that kind of anonymity came at a cost. Cyber-bullying proved to be just as destructive as any other kind — if not more so. Age/sex/location turned into name/phone number/home address, and what had seemed like a social utopia quickly gained a reputation as the go-to tool for people who leveraged that anonymity to do personal harm to others. Social media is the technological rehabilitation of that instinct. But this time, in the place of anonymity was radical transparency, at least theoretically. The ability to follow the lives of people in, or even completely disconnected from, our personal network was granted to us on the basis that we were willing to share our lives as well. Framed in such a way, it’s hardly a surprise it was adopted so quickly, but it couldn’t be free forever.

Advertisers spend billions of dollars a year targeting consumers based on fairly broad assumptions about who we are and what we want to see. As we’ve grown more vigilant of their attempts to grab our attention, they’ve had to get more creative with their messaging, and also in their gathering of our personal data. Facebook, Google, Twitter and others offered them an opportunity they could never have dreamed of: we were giving up all of that personal data, and far more, for free.

Much in the same way advertisers use ratings, our personal activity feeds granted us the ability to keep a pulse on our growing social value by reporting the interactions of our audience back to us whenever we were curious. In a sense, they granted us a kind of digital immortality. And just like AIM, this immortality encouraged us to test our voice, opinions, and personality, modifying and refining our brand to maximize and showcase engagement in a place where everyone could see it. We became experts of our own personal brands — brands that became the digital expression of our optimal realities.

That radical openness promised transparency and community for all by projecting a tone of futuristic altruism that made it seem somehow uniquely above constitutional law, or even basic ethics. The “break things” doctrine was supported by a religion of “user-centered design thinking” that fails to view impact as a collective experience. Add to that the fact that beneath these pseudoscientific philosophies and unlimited-paid-vacation benefits lay a fundamental vulnerability to technical self-sabotage. What’s most frightening about the implications of this “bug” is that the system falls victim to it by being used exactly as it is designed, which makes it difficult for developers to detect and defend against. The solution is not only to fix what’s already been broken, but to identify the things we can’t afford to break before we start building things.

AIM never offered enough social value to counter the security risk it posed, so its abandonment was no surprise. But in the case of Facebook, Instagram, or Twitter, what we’re dealing with is essentially the tech equivalent of a recession, where losses are measured not only in lost revenue but, more importantly, in lost users — abandonment. However, just like banks, social media platforms have become so closely tied to our identity and security that it’s easier to tell ourselves we’ll depend on them as little as we can. But that doesn’t make us any less vulnerable to their power.

It doesn’t help that the favored tools for thoughtful discourse have fallen out of fashion, ceding ground to increasingly superficial mediums optimized for reactionary communication. Device addiction and social media have only entrenched obstinate group identities and propagated dwindling attention spans that too often seek no further investigation than the first page of a Google search (which is also primed to show us information that’s unlikely to challenge our views). That only makes it harder to find opposing views, and since when did anyone do that voluntarily?

Our rapid adoption of and subsequent dependency on these services showed that we were willing to give up personal liberties to alleviate our feelings of cultural alienation and meaninglessness. FOMO is never more palpable than after a break from social media. Not only have we become more antisocial, but our civil discourse, and the systems we depend on to uphold it, have fallen victim to the same tribalism you’d expect to see online or on reality TV. Nothing could more perfectly explain the current US president. Our consumptive excess not only articulates his rise; it serves as a formula for others to leverage the same platform.

It doesn’t take a designer or engineer to see who could benefit from all of this. It only takes foresight. The market responds to our self-labeling by targeting groups as an extension of our values, ceaselessly pitching us on the products that most closely resonate with who we are, or perhaps more accurately, who we think we should be. Liberals cry out against how groups are targeted while unapologetically hanging their own banners all over their social walls. That’s not to say you shouldn’t support a cause or that social media isn’t a good platform to do it, but we shouldn’t mistake our digital advocacy for real action, nor should we conflate the power of our lived experiences with our digital reveries.

Until recent years, technology was an answer to our demand for mastery over our physical environment, an effort that has produced numerous advances in overall global health, equity, and prosperity. The star-crossed rise of mobile technology and social media has resulted in an explosion of insight into human behavior: insights that deepen our understanding of our inner lives, but also of our differences. The wielders of this behavioral data were always assumed to be non-threatening (developers, designers, and engineers never hurt anybody), but that’s a lack of vigilance we can’t afford to have with technology anymore.

The point is that we shouldn’t wait on Silicon Valley to adjust their models and give back the power and value they took from us, because if they did, they’d fail. It’s important to be optimistic that technology can be better, both functionally and ethically, and if users are at the center of the product universe, then it is users who will help define the new technologies we adopt. Designers and developers wield more cultural power than ever, so it is their responsibility to be critical thinkers with an opinion on their work and its potential impact: to ask how the things we build empower human beings, and bring value back to curiosity and creativity.

👨🏻 Ross Langley is an Australian writer and designer based in New York. You can find him on Twitter @wurdswerdswords 👋
