How Silicon Valley’s blind spots and biases are ruining tech for the rest of us

PERSPECTIVE | The industry isn’t looking at who’s being left behind

The Lily News · Dec 17, 2017


Essay by Sara Wachter-Boettcher. The views expressed are the opinions of the author.

It was Christmas Eve 2014 when Eric Meyer logged onto Facebook, expecting the usual holiday photos and well-wishes from friends and family. Instead, Facebook showed him an ad for its new Year in Review feature.

Year in Review allowed Facebook users to create albums of their highlights from the year — top posts, photos from vacations, that sort of thing — and share them with their friends. But Meyer wasn’t keen on reliving 2014, the year his daughter Rebecca died of aggressive brain cancer. She was 6.

Facebook didn’t give him a choice. Instead, it created a sample Year in Review album for him and posted it to his page to encourage him to share it. “Here’s what your year looked like!” the copy read. Below it was a picture of Rebecca. And surrounding her smiling face and curly hair were illustrations, made by Facebook, of partyers dancing amid balloons and streamers.

Meyer, a friend of mine who is also one of the Web’s early programmers and bloggers, was gutted. “Yes, my year looked like that,” he wrote in Slate. “True enough. My year looked like the now-absent face of my Little Spark. It was still unkind to remind me so tactlessly, and without any consent on my part.”

When I started working in tech in 2007, I could never have imagined a blunder like this. Facebook had just begun transforming from a college-centric site to the behemoth it’s since become. Google had just bought YouTube. The iPhone hadn’t even launched yet. People were still writing “click here” on their links (and I was trying to get them to stop). But seven years later, something had started to feel off.

Despite all the improvements in technology, my peers and I weren’t getting better at serving people. And Meyer’s story really drove that home. Facebook had designed an experience that worked well for people who’d had a good year, people who had vacations or weddings or parties to remember. But because the design team focused only on positive experiences, it hadn’t thought enough about what would happen for everyone else — for people whose years were marred by grief, illness, heartbreak or disaster.

It’s not just Facebook, and it’s not just grief or trauma. The more I started paying attention to how tech products are designed, the more I started noticing how often they’re full of blind spots, biases and outright ethical blunders — and how often those oversights can exacerbate unfairness and leave vulnerable people out.

Like in the spring of 2015, when Louise Selby, a pediatrician in Cambridge, England, joined PureGym, a British chain. But every time she tried to swipe her membership card to access the women’s locker room, she was denied; the system simply wouldn’t authorize her. Finally, PureGym got to the bottom of things: The third-party software it used to manage its membership data — software used at all 90 locations across England — was relying on members’ titles to determine which locker room they could access. And the title “doctor” was coded as male.

In 2016, JAMA published a study showing that the artificial intelligence built into smartphones from Apple, Samsung, Google and Microsoft wasn’t programmed to help during a crisis. The phones’ personal assistants didn’t understand phrases like “I was raped” or “I was beaten up by my husband.” In fact, instead of doing even a simple Web search, Apple’s Siri cracked jokes and mocked users.

It wasn’t the first time. In 2011, if you told Siri you were thinking of shooting yourself, it gave you directions to a gun store. After getting bad press, Apple partnered with the National Suicide Prevention Lifeline to offer users help when they said something that Siri identified as suicidal. But five years later, no one had looked beyond that one fix. Apple had no problem investing in building jokes and clever comebacks into the interface from the start. But investing in crisis support or the safety of its users? Just not a priority.

The examples go on and on. In August 2016, Snapchat launched a new face-morphing filter — one it said was “inspired by anime.” In reality, the effect had a lot more in common with Mickey Rooney playing I.Y. Yunioshi in “Breakfast at Tiffany’s” than a character from “Akira.” The filter morphed users’ selfies into bucktoothed, squinty-eyed caricatures — the hallmarks of “yellowface,” the term for white people donning makeup and masquerading as Asian stereotypes. Snapchat said that this particular filter wouldn’t be coming back, but insisted it hadn’t done anything wrong, even as Asian users mounted a campaign to delete the app.

Individually, it’s easy to write each of these off as a simple slip-up, an oversight, a shame. We all make mistakes, right? But when we look at them together, a clear pattern emerges: an industry willing to pour resources into chasing “delight” and “disruption,” but unwilling to stop and ask who’s being served by its products and who’s being left behind, alienated or insulted.

There’s a running joke in the HBO comedy “Silicon Valley”: Every would-be entrepreneur, almost always a 20-something man, at some point announces that his product will “make the world a better place” — and then describes something absurdly useless or technically trivial (“constructing elegant hierarchies for maximum code reuse and extensibility,” for example).

I’m sure it’s funny, but I don’t watch the show regularly. It’s too real. It brings me back to too many terrible conversations at tech conferences, where some guy who’s never held a job is backing me into a corner at cocktail hour and droning on about his idea to “disrupt” some industry or other, while I desperately scan the room for a way out.

What “Silicon Valley” gets right is that tech is an insular industry: a world of mostly white guys who’ve been told they’re special, the best and brightest. It’s a story that tech loves to tell about itself, because the more everyone on the outside sees technology as magic and programmers as geniuses, the more the industry can keep doing whatever it wants. And with gobs of money and little public scrutiny, far too many people in tech have started to believe that they’re truly saving the world. Even when they’re just making another ride-hailing app or restaurant-recommendation algorithm. Even when their products actually harm more people than they help.

We can’t afford that anymore. Ten years ago, tech was still, in many ways, a discrete industry — easy to count and quantify. Today, it’s more accurate to call it a core underpinning of every industry. As tech entrepreneur and activist Anil Dash writes, “Every industry and every sector of society is powered by technology today, and being transformed by the choices made by technologists.”

Tech is only going to become more fundamental to the way we understand and interact with our communities and governments. Courts are using software algorithms to influence criminal sentencing. Detailed medical records are being stored in databases. And, as information studies scholar Safiya Noble puts it, “People are using search engines rather than libraries or teachers to make sense of the world we’re inhabiting.”

The more technology becomes embedded in all aspects of life, the more it matters whether that technology is biased, alienating or harmful. The more it matters whether it works for real people facing real-life stress. And the more it matters that we stop allowing tech to make us feel like we’re not important enough to design for. Because there’s nothing wrong with you. There’s something wrong with tech.

This piece was adapted from “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.”

Sara Wachter-Boettcher is a web consultant and author.

This essay originally appeared in The Washington Post.
