Software is Eating Humanity

How technology today is enabling racism, sexism, and deliberate misinformation, and what we should be doing about it.

Our recent, intensely divisive election has laid bare many faults in our tech products and services.

In particular, racism, along with sexism and deliberate misinformation, is thriving today, thanks in part to the technology we’ve built, and it is exposing some deep (in some cases fundamental) flaws in our digital offspring.

A quick recap:

From one perspective, this is nothing new. Cyberbullying is as old as the Internet. But in an age when 62% of U.S. adults get their news from social media (44% from Facebook alone), the potential damage is now massive.

And as software continues to devour our world, the inherent flaws in our technology are going to wreak even more havoc.

Look at Google Photos, which last year was tagging Black people as “Gorillas.”

Or look at the criminal justice system in Fort Lauderdale, which uses an algorithm to forecast the risk that defendants will commit future crimes as an input into sentencing. Earlier this year, ProPublica found that 80% of the people the algorithm flagged as likely to commit violent crimes did not go on to do so, and that its errors showed an alarming racial bias: it falsely flagged black defendants as future criminals at nearly twice the rate of white defendants.
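The measurement at the heart of that finding is simple enough to sketch in code. Here is a minimal, hypothetical illustration (the data and field names are mine, not ProPublica’s or the tool’s) of comparing false positive rates across groups:

```python
# Hypothetical illustration (toy data; not ProPublica's actual dataset or code):
# for each group, the false positive rate is the share of people who did NOT
# re-offend but were still flagged as high risk.
from collections import defaultdict

def false_positive_rates(records):
    flagged = defaultdict(int)    # non-re-offenders wrongly flagged high risk
    negatives = defaultdict(int)  # all non-re-offenders
    for r in records:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["flagged_high_risk"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / n for g, n in negatives.items()}

# Toy records, just to show the shape of the check:
records = [
    {"group": "black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "black", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": True,  "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": True},
]
print(false_positive_rates(records))  # {'black': 0.5, 'white': 0.333...}
```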

How do we know that our platforms aren’t facilitating abuse? That our algorithms aren’t racist? That our black-box machine learning models aren’t being fed data with dangerous biases?

More broadly: What moral responsibility do we as technologists have in minimizing the damage wrought by our own products?


What we can do: The case for self-regulation

Some may argue that the tech industry has very little responsibility: that technology itself is an amoral tool, and that it is not our role to regulate how that tool is used.

But for years we’ve been trumpeting the power of technology for societal change; perhaps it’s time we recognized that it’s not always positive change, and accepted the responsibility that comes with that power: regulation.

There is precedent. We can look at two older technologies that transformed their eras as profoundly as Facebook and Twitter are transforming ours: television and radio.

The FCC regulates broadcast TV and radio. These rules include penalties for broadcasting hoaxes (if the station knew the information was false, if the hoax causes substantial public harm, and if the station could have foreseen that the harm would result). Apparently (note: I am far from an expert in FCC regulations), these rules were enacted to protect first responders, who were being duped by radio hoaxes and left unable to handle real emergencies. These regulations acknowledge the influence TV and radio have, and seek to protect the public from their abuse.

It would not be hard to argue that social media has become as influential as TV and radio.

Yet government regulation doesn’t always fit the Internet.

The Internet is different from previous forms of media and communication. Its openness is a massive gift. It spans national boundaries, which raises difficult questions: How effectively could the FCC regulate a social network based in another country? How successfully could the UK regulate Facebook when one of its teenagers dies after being cyberbullied online?

Life on the Internet moves incredibly quickly. Government regulations do not. Solely relying on the Government to regulate the Internet would only widen the gap between policy and problem.

So then, it falls to us. We, as an industry, need to self-regulate and police ourselves.

How? A few ideas:

  • Build internal product/engineering teams focused on combating abuse. Since we all love KPIs, measure abuse, and set/prioritize stretch goals for reducing it.
  • Build internal teams focused on identifying bias in models, algorithms, and processes (a minimal sketch of the kind of automated check such a team might run follows this list). Ideally, these teams would draw from a variety of fields, including anthropology, sociology, law, and philosophy, as well as engineering and design (what some call a “social-systems” approach).
  • Support (or start) external organizations that step in to fill much-needed gaps: e.g., fact-checking by Snopes.com.
  • Recognize that our existing product development methods will fail us. “Move fast and break things” is no longer an option, and A/B tests are off the table when someone’s life is at stake.
  • Build new, more thoughtful product development methods.
  • Recognize that the underrepresentation of women and non-Asian people of color in our industry will also make this harder.
  • Continue to work on increasing the social, gender, racial diversity in our companies.
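
As promised above, here is a hypothetical sketch of how a bias-audit team might wire such a measurement into a model’s release process. The threshold, the evaluation data, and the false_positive_rates() helper from the earlier sketch are all assumptions on my part, not an established standard:

```python
# Hypothetical release gate (an assumption, not an industry standard): block a
# model release if the false positive rate gap between groups exceeds a chosen
# threshold. Reuses the false_positive_rates() helper sketched earlier.

MAX_FPR_GAP = 0.05  # tolerated gap between groups; a policy decision, not a given

def check_fpr_gap(evaluation_records):
    rates = false_positive_rates(evaluation_records)
    gap = max(rates.values()) - min(rates.values())
    if gap > MAX_FPR_GAP:
        raise AssertionError(
            f"False positive rate gap {gap:.2f} exceeds {MAX_FPR_GAP}: {rates}"
        )
    return rates

# Run against a held-out evaluation set before every model release, e.g.:
# check_fpr_gap(evaluation_records)
```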

This will be hard, but we like hard problems

Yes, this will be hard. There is a blurry line between regulating content and censorship, and regulation itself is the antithesis of the open, frictionless nature of the Internet that has led to its success.

But we like hard problems. As Zeynep Tufekci points out, “Perhaps the Silicon Valley billionaires who helped create this problem should take it on before setting out to colonize Mars.”

Twenty years ago, I decided to spend my life developing software because I saw its potential to change the world. I still have faith in technology and in our industry.

Yes, technology is a vehicle for change, but that change can engender both good and evil. Our minds and hands are setting these wheels in motion. But do we fully understand where this vehicle is heading?

Let this be our warning: As software eats the world, we can’t allow it to eat our humanity.

(Big thanks to Mike Freedman and the rest of the team at iobeam for their thoughtful input and edits.)