The Dystopia is already here

daniel harvey
20MinutesIntoTheFuture
9 min read · Nov 20, 2018

Paraphrasing William Gibson, “The dystopia is already here — and it’s globally distributed”. Big Tech platforms with shady business models are beset by bad actors inside and out for nefarious ends. Deepfakes, fake news, and surveillance capitalism are fuelling civil unrest, domestic terrorism, and genocide. We can, however, claw back hope if we take action now.

A brief editor's note: Last Thursday (15 Nov 2018) I presented the below as a talk at the most recent instalment of UX Crunch. It was so well received on Twitter that I did a tweetstorm version of it. Based on that, even more people asked me to share the complete talk track, so here goes…

**As of July 2019 I’ve shifted most of my online writing to my newsletter. If you like this article you should consider subscribing. Thank you.**

The organisers of tonight’s event asked me to talk about the darker undercurrents in design today. Hopefully we can stamp them out for a brighter tomorrow. Two points before we start:

  1. I’m using real examples of some very dark shit. This is not comfortable content. You have been warned.
  2. I’m a New Yorker. Expect a lot of expletives.

This is a quote you’ve all probably seen a million times. Advertising and techie shitheads love quoting it before they do “cutting edge” trends porn. What they never seem to understand is how politically subversive the last bit is. Anyway, I digress. Because those wankers abuse it I swore I’d never use it.

Still, paraphrasing it is useful for me tonight. One person's future is another person's dystopia, and thanks to the fundamental power of the Internet as a tool for distribution, we can spread dystopia far and wide. Often before we know it, realise it, or even imagine it.

ed. note: a more accurate version of that slide would read: “The dystopia is already here and it’s globally distributed.” If I give the talk again I’ll amend accordingly.

Before I start properly ranting about things (and this is your second warning) I wanted to take a moment to dismiss out of hand utter nonsense distractions like the AGI Terminator scenarios that twats such as Elon Musk raise to misdirect people from more pressing concerns. That Skynet fear-mongering is 100% bullshit.

ed. note: this was animated in the talk. Flying poop is always worth it.

And this is why Musk is full of shit. Pedro Domingos hit the nail on the head when he said “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Add to that this insightful comment from Michelle Alexander, “It’s tempting to believe that computers will be neutral and objective, but algorithms are nothing more than opinions embedded in mathematics.”

And this one by Zeynep Tufekci, “We’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads.”

Taken as a whole those last three points are terrifying and can be summarised as “Really shitty AI™ is already here. It’s biased as fuck. And it exists to make ad revenue for the morally bankrupt.”

This is your third and final warning. The examples I’m about to share include stories about attempted suicide, attempted murder, and actual genocide.

Bobbi Duncan was a young closeted lesbian. She meticulously adjusted various byzantine settings on Facebook to keep her sexual orientation private and shrouded from her conservative parents. When she was a freshman she joined The Queer Chorus at UT-Austin. The chorus director added her to their Facebook group. Unbeknownst to Bobbi, the director, or any of us at the time, Facebook had made a decision that group privacy settings should override personal privacy settings. Bobbi’s parents received notifications about Bobbi’s affiliation with the group. Bobbi was disowned by her parents and attempted suicide.

  1. In 2014 less than 1% of Myanmar’s residents had Internet access.
  2. By 2016 Facebook had exploded across Myanmar; for many people there it effectively was the Internet. It became the default news source for a nation riddled with religious and ethnic animosity.
  3. By Aug-Sept 2017 hate speech on the platform reached critical levels.
  4. In that same span of time more than 6,700 Rohingya Muslims were killed by Buddhist citizenry and military.
  5. At least 730 young children were among people shot, burned or beaten to death.
  6. 650,000 Rohingya refugees escaped to Bangladesh to avoid this genocide.

ed. note: I didn’t mention this in the talk but on November 5th (the day before the US midterm elections when news would be preoccupied) Facebook published this post in their newsroom. Nowhere in it are terms like “Rohingya,” “mass murder”, “genocide.” Another fine example of their deplorable “delay, deny, deflect” tactic.

James Bridle did a fantastic investigation into the torrent of nightmarish content aimed at kids on YouTube. “These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place. The system is complicit in the abuse.”

China is using drones disguised as pigeons and gait analysis to surveil its citizenry. Australia is already rolling out “The Capability”, a powerful database that pools biometric information gleaned from driver’s licences, passports, visas and other sources, despite serious concerns about the accuracy of its matching, which some say is wrong as often as 93% of the time. The City of Baltimore has an on-again, off-again relationship with a company called Persistent Surveillance Systems that flies wide-area aerial surveillance in an almost pre-crime fashion.

Clayton Delery, in his book “Out for Queer Blood”, defines stochastic terrorism as “The use of mass, public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.” It’s a statistician’s way of saying what we all know to be true: when vile people like Donald Trump spew vile rhetoric, people like alleged MAGA Bomber Cesar Sayoc will take vile action.

I thought I’d let Barack Obama share my next example…

You thought fake news was bad? Wait until deepfakes are common

You can’t have a talk about dystopia without some nod to either Huxley or Orwell right? Huxley said, “In regard to propaganda the early advocates of universal literacy and a free press envisaged only two possibilities: the propaganda might be true, or the propaganda might be false. They did not foresee what in fact has happened, above all in our Western capitalist democracies — the development of a vast mass communications industry, concerned in the main neither with the true nor the false, but with the unreal, the more or less totally irrelevant. In a word, they failed to take into account man’s almost infinite appetite for distractions.”

So what can we do as consumers, creators, and a society to fix all this shit? Because we have to fix it. We can’t just unplug and walk away. Walking away doesn’t help solve the problems for those left behind and it shirks our own responsibility in this mess.

As individuals we can modify our relationship with big tech. The Time Well Spent movement recommends several good tactics as a start:

  1. Notifications only from people
  2. Go Grayscale
  3. Tools only home screen
  4. Launch other apps by typing
  5. Get an alarm clock
  6. Go cold turkey

As employees we can take action. The Trump and Brexit disasters have turned tech workers into activists. No longer happy to drink the corporate kool-aid of bullshit values & mission statements, they are demanding action from their employers. Some recent examples include:

  1. Prop C
  2. Google’s China search engine
  3. Military contracts at Google, Amazon
  4. Google harassment walkout
  5. Stanford students pledging not to work for Salesforce over its border contracts

As a citizenry we can support new policies and regulations that put our human right to privacy above the bottom lines of the corporate surveillance complex. Here we’ve got GDPR (which thankfully will survive in the UK post-Brexit). In the US, activists and politicians in Oakland, CA have successfully put in place a significant anti-surveillance ordinance. Sen. Ron Wyden’s proposed “Consumer Data Protection Act”, if passed, would even jail CEOs for violating consumer privacy.

As designers and technologists we can:

  1. Enforce existing ToS evenly and fairly. No one is above the law and no one should be above ToS either.
  2. Define protected groups more logically.
  3. Build better algorithms to detect hate speech (a minimal sketch follows this list).
  4. Down-rank or shadow-ban hateful content.
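
To make point 3 slightly less hand-wavy, here is a minimal sketch of the kind of text classifier that moderation tooling builds on. The toy examples, labels, and model choice are all invented for illustration; real systems train on enormous, carefully audited corpora and still inherit the biases of their labellers, which is exactly the “opinions embedded in mathematics” problem from earlier.

```python
# Minimal sketch of a text classifier of the kind moderation tooling builds on.
# The tiny dataset here is invented for illustration only; real systems train on
# large, audited corpora and still inherit the biases of their labellers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = hateful, 0 = benign.
texts = [
    "we should wipe them all out",          # hateful
    "those people deserve what they get",   # hateful
    "looking forward to the weekend",       # benign
    "great show last night, loved it",      # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: crude, but this is the
# baseline more sophisticated classifiers get measured against.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; anything above a threshold gets down-ranked or routed
# to human review rather than silently deleted.
score = model.predict_proba(["they deserve to be wiped out"])[0][1]
print(f"hate probability: {score:.2f}")
```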

As designers and technologists we can leverage and improve existing copyright tools not just to line the pockets of entrenched entertainment interests but to squash the deepfakes that endanger our democracies and the algorithmically generated mash-ups that traumatise our children. We can also fight fire with fire, as AI itself can be very capable at rooting out the current tells that reveal deepfakes. That’s always an arms race though; a rough sketch of the idea follows.
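
Most current deepfake detectors boil down to a binary image classifier trained on real versus synthesised frames. Here is a minimal sketch of that shape; the architecture, layer sizes, and input dimensions are illustrative stand-ins, not any particular published detector.

```python
# Minimal sketch of a frame-level deepfake detector: a small CNN that labels
# each video frame real (0) or synthesised (1). Illustrative only; real
# detectors are far deeper and also exploit temporal tells (e.g. unnatural
# blinking), but the core idea is the same binary classifier.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; sigmoid -> P(fake)

detector = FrameDetector()
frame = torch.randn(1, 3, 224, 224)  # stand-in for one video frame
p_fake = torch.sigmoid(detector(frame)).item()
print(f"P(fake) = {p_fake:.2f}")
```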

As designers & technologists we need to rethink our fundamental interactions. We need to design for kindness. We need new interactions that make it easier to be kind: small interventions like prompts that emphasise empathy, warn against ideological bias, undo angry responses, or steer users toward private conversations rather than public flamewars. A toy sketch of one such nudge follows.
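
As a toy illustration of those interventions, here is a hypothetical “anger speed bump” for a reply box. The keyword list and the prompt copy are invented placeholders; a real deployment would swap in a trained classifier (like the sketch above) and copy honed by actual UX research.

```python
# Hypothetical "anger speed bump" for a reply box: if a draft looks hostile,
# nudge the user toward a cooler response instead of posting immediately.
# The keyword list is a crude stand-in for a real hostility classifier.
HOSTILE_MARKERS = {"idiot", "stupid", "shut up", "hate you", "moron"}

def hostility_score(draft: str) -> float:
    """Fraction of hostile markers present; placeholder for a real model."""
    text = draft.lower()
    hits = sum(marker in text for marker in HOSTILE_MARKERS)
    return hits / len(HOSTILE_MARKERS)

def submit_reply(draft: str) -> str:
    if hostility_score(draft) > 0:
        # The kindness intervention: delay, reflect, offer alternatives.
        return ("This reply might land harshly. Post anyway, edit it, "
                "or take it to a private message?")
    return "Posted."

print(submit_reply("you absolute idiot, shut up"))          # triggers the nudge
print(submit_reply("I see it differently, here's why..."))  # posts normally
```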

Tobias Rose-Stockwell has explored these ideas recently. It would be nice to see social media platforms like Facebook and Twitter experiment similarly.

Beyond just designing different interfaces we have to design and build different networks. That starts by challenging the lazy assumption that advertising should be the default or exclusive model for everything. There are other models: from subscriptions to SaaS direct revenue to DTC sales. And some companies like Amazon have multiple business models. It just makes good business sense to diversify your revenue streams, and to adapt your metrics and business models accordingly.

But we’re not going to be able to rethink those fundamental interactions, platforms, or business models if the same people and perspectives are leading the conversation. We’re not going to solve the problems that rich white Californian tech bros have heaped on us all without bringing a more diverse mix of designers and technologists into the room. Inclusive design means working with people of diverse backgrounds: different races, genders, sexual orientations, ethnicities, religious beliefs, ages, abilities, neuro-divergence, class, and more. And designing experiences for them all too.

Thank you for your time and attention. I hope you found the talk worth your while. Cheers.

ed. note: and thank you for taking the time to read this and if you’re so inclined please clap and keep the conversation going in the comments. Lidia Zuin wrote about the talk here on Medium as well. Please consider reading the prequel: The Outrage Economy.
