3 Recommendations to Combat Technochauvinism

MIT Press
5 min read · Jan 14, 2019



By Meredith Broussard

How can you take a “good” selfie? In 2015, several prominent American media outlets covered the results of an experiment that claimed to answer this question using data science. The results were predictable to anyone familiar with photography basics: make sure your picture is in focus, don’t cut off the subject’s forehead, and so forth.

What was notable about the experiment — but was not noted by the investigator, Andrej Karpathy, then a Stanford PhD student and now the head of AI at Tesla — was that almost all the “good” photos were of young white women, despite the fact that older women, men, and people of color were included in the original pool of selfies. Karpathy used a measure of popularity — the number of “likes” each photo garnered on social media — as the metric for what constituted good. This type of mistake is quite common among computational researchers who do not critically reflect on the social values and human behaviors behind the statistics they use. Karpathy assumed that because the photos were popular, they must be good. By selecting for popularity, he created a model with significant bias: it prioritized images of young, white, cisgender women who fit a narrow, heteronormative definition of attractiveness.
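
To make the proxy error concrete, here is a minimal sketch in Python using entirely fabricated data and feature names (an assumption-laden illustration, not Karpathy’s actual method): when “likes” correlate with demographics in the training pool, any model trained to predict likes will reward those demographics rather than photographic quality.

```python
# Toy illustration of proxy-label bias. All data below is fabricated;
# this is not Karpathy's pipeline, just the shape of the mistake.
selfies = [
    # (in_focus, forehead_visible, young_white_woman, likes)
    (1, 1, 1, 950),
    (1, 1, 1, 870),
    (1, 0, 1, 640),
    (1, 1, 0, 210),
    (0, 1, 0, 90),
    (1, 1, 0, 180),
]

def mean_likes(rows, i):
    """Average likes for rows where feature i is present vs. absent."""
    present = [r[-1] for r in rows if r[i] == 1]
    absent = [r[-1] for r in rows if r[i] == 0]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(present), avg(absent)

# A model trained to predict likes will latch onto whichever feature best
# separates high-like from low-like photos -- here, the demographic one,
# not the photographic-quality features.
for i, name in enumerate(["in_focus", "forehead_visible", "young_white_woman"]):
    with_f, without_f = mean_likes(selfies, i)
    print(f"{name:18s} avg likes: {with_f:6.0f} (present) vs {without_f:6.0f} (absent)")
```

In this invented sample, the demographic feature separates high-like photos from low-like ones far more sharply than focus or framing do, so a likes-predicting model would learn to reward it.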

Let’s say that you are an older black man, and you give your selfie to Karpathy’s model to be rated. The model will not label your photo as good, no matter what. You are not white, you are not a cisgender woman, and you are not young; therefore you do not satisfy the model’s criteria for “good.” The social implication for a reader is that unless you look a certain way, your picture cannot possibly be good. This is not true. This model would not rate a selfie of Denzel Washington as “good,” which is absurd, because Denzel has never looked less than handsome in any photo ever.

Programmers often make the mistake of substituting popular for good. This error has implications for all computational decision-making that involves subjective judgments of quality. The crux is that a human can perceive the difference between the concepts popular and good. A human can identify things that are popular but not good (like ramen burgers or racism) or good but not popular (like income taxes or speed limits) and rank them in a socially appropriate manner. (Of course, there are also things, like exercise and babies, that are both popular and good.) A machine, however, can only identify things that are popular, using whatever criteria are specified in an algorithm. It cannot autonomously judge the quality of the popular items.

The Facebook algorithm prioritizes popularity. So does the YouTube algorithm. That’s why kids get all kinds of inappropriate recommendations from YouTube, and why the Facebook algorithm surfaces fake news or pages devoted to phony pharmaceuticals. These things are popular with users. Users are not necessarily humans, however. Fake followers and click fraud have been problems since the very beginning of the internet; bots can be used to make posts and videos appear more popular, allowing bad actors to game any recommendation system.
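
Here is a hedged sketch of why popularity metrics are so gameable (toy item names and counts, not any platform’s real ranking code): a recommender that sorts purely by engagement counts has no way to distinguish bot votes from human ones.

```python
# Toy popularity ranker: sort items by raw engagement count.
# Item names and counts are invented for illustration.
from collections import Counter

engagement = Counter({
    "investigative-report": 1200,
    "cat-video": 3400,
    "phony-pharma-ad": 300,
})

def recommend(counts, k=2):
    """Return the k most 'popular' items; popularity is all this ranker sees."""
    return [item for item, _ in counts.most_common(k)]

print(recommend(engagement))  # ['cat-video', 'investigative-report']

# A bot farm inflates one item's count; the ranker cannot tell
# bot engagement from human engagement, so the junk rises to the top.
engagement["phony-pharma-ad"] += 5000
print(recommend(engagement))  # ['phony-pharma-ad', 'cat-video']
```

Nothing about the items changed; only the counts did. That is the entire attack surface a popularity-only ranker offers.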

There is a particular mindset that says that algorithms are superior to human judgment. The same mindset argues that using technology is always the best strategy. I call this technochauvinism. It’s quite popular. Just think of every person who has ever said “We don’t need school, kids can just learn from the internet” or “Let’s replace human editors with algorithms.” Technochauvinism is rarely good. Technochauvinists have blind spots that allow social ills to metastasize.

This brings us back to a fundamental problem: algorithms are designed by people, and people embed their unconscious biases in algorithms. It’s rarely intentional — but that doesn’t mean we should let computer scientists off the hook. It means we should be critical of, and vigilant about, the things we know can go wrong. If we assume discrimination is the default, then we can design systems that work toward notions of equality.

How can we design better systems? Here are three recommendations for how to not be a technochauvinist:

1. Read the resistance. My book, Artificial Unintelligence: How Computers Misunderstand the World, joins a number of other books that offer a nuanced, well-informed, skeptical view of technology. Try reading Safiya Noble, Algorithms of Oppression; Cathy O’Neil, Weapons of Math Destruction; Marie Hicks, Programmed Inequality; or Virginia Eubanks, Automating Inequality. On the academic side, check out the research produced by Latanya Sweeney or Solon Barocas, and the papers coming out of the Conference on Fairness, Accountability, and Transparency (FAT*). AI Now, run by Kate Crawford and Meredith Whittaker, and Data & Society, run by danah boyd, are think tanks that produce excellent work about AI, data, and their social implications.

2. Stop fetishizing your phone. Many people have a Freudian fixation on their phone, treating it as a precious, totemic object. It’s not; it’s just a machine. If you are one of these people, remember that what you value isn’t the phone itself but the social connections it represents. Read up on phone addiction, and take steps to combat it. Turn off all of your notifications except for one or two, to reduce distractions. Take fewer photos. When you get the urge to check social media or email or whatever your addiction is, take a moment to breathe. Are you feeling lonely? Tired? Anxious? Hungry? Frustrated? Desperate? Whatever the feeling is, take a moment to feel it. Then take action to soothe the feeling without using your digital device to push it away.

3. Take apart the technology to understand it. To learn how a system works, it helps to build it from the ground up. This doesn’t mean you need to manufacture a silicon chip, but it does help to know what is going on inside a computer. Try getting an old computer and taking it apart. This is a great activity to do with a kid. Figure out where the electricity goes in, find the fan that cools the parts that heat up, and follow the wires that run from the circuit board to the display. It’s fascinating! If hardware isn’t your thing, try building software by doing tutorials on a site like Scratch, Codecademy, or Tynker. You’ll see that code isn’t magic; it’s just math. If you could handle math up until about fifth grade, you’ll be able to handle basic coding. After you have taken things apart and built some things, look at a ranking technology you use all the time and figure out how the ranking actually works. Ask yourself: is it trying to measure what is popular, or what is good? If it is merely popularity, can it be changed to make it more inclusive and fair? (For a concrete starting point, see the sketch below.)
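
As a starting point for that last exercise, here is a hypothetical sketch contrasting a popularity-only ranker with one that blends in an explicit quality signal; the items, click counts, quality scores, and the 0.7 weight are all invented assumptions to experiment with, not any real site’s formula.

```python
# Two toy rankers over the same items: one scores popularity alone,
# one blends popularity with an explicit, human-assigned quality score.
items = [
    # (title, clicks, editorial_quality from 0 to 1)
    ("ramen-burger-listicle", 9000, 0.2),
    ("tax-policy-explainer", 400, 0.9),
    ("exercise-guide", 5000, 0.8),
]

def by_popularity(rows):
    """Rank by raw clicks: all a popularity-only system can see."""
    return sorted(rows, key=lambda r: r[1], reverse=True)

def by_blend(rows, quality_weight=0.7):
    """Rank by a weighted mix of normalized clicks and quality."""
    max_clicks = max(r[1] for r in rows)  # normalize clicks to 0..1
    score = lambda r: (1 - quality_weight) * (r[1] / max_clicks) + quality_weight * r[2]
    return sorted(rows, key=score, reverse=True)

print([r[0] for r in by_popularity(items)])  # listicle first
print([r[0] for r in by_blend(items)])       # exercise guide first
```

Notice that the popular-but-not-good listicle drops and the good-but-unpopular tax explainer rises once quality carries any weight at all; tinkering with the weight makes the tradeoff between the two signals visible.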

Meredith Broussard is the author of Artificial Unintelligence: How Computers Misunderstand the World (now in paperback). She is a data journalism professor at the Arthur L. Carter Journalism Institute of New York University.


Visit the MIT Press Reader at https://thereader.mitpress.mit.edu to read thought-provoking excerpts, interviews, and other original works.