An interview with Meredith Broussard
By: Sabrina de Silva
Meredith Broussard is an assistant professor at the Arthur L. Carter Journalism Institute of New York University and an affiliate faculty member at the Moore-Sloan Data Science Environment at the NYU Center for Data Science. Her research focuses on artificial intelligence in investigative reporting, with an emphasis on using data analysis for social good. On July 30th, 2018, Broussard discussed her latest book, “Artificial Unintelligence: How Computers Misunderstand the World,” with Sabrina de Silva, Content Writer at the NYU Center for Data Science. The interview appears below.
- This interview has been lightly edited and condensed for clarity.
What is your general stance on technology?
The first line of the book is “I love technology.” And I do! I love making things, I teach computer science to journalists, and I build AI tools for investigative reporting. I am pro-technology!
I am against “technochauvinism,” the idea that technology is always the best solution. I am against the idea that doing something with a machine is better than doing something with a human. I am against the idea that machines are better than humans.
You talk about how it’s important to have imagination. To what extent can and should we imagine technologies in our future?
We can dream up extraordinary things because the human imagination is absolutely magnificent. But, just because we can imagine something doesn’t mean that we can do it, or that we should do it. Part of being a responsible adult is being realistic about what we can and should do.
What do you think of the notion that entertainment and cinema portend innovation?
I’m so tired of people wanting to invent things they saw in the movies! If one more person tells me they want to invent something out of Star Trek… well, if my eyes rolled any harder, they would fall out of my head.
What about dystopian media? Like Black Mirror?
It’s a terrific show. I’m interested in the way entertainment reflects cultural anxieties, and it’s interesting to look at dystopian fantasies about tech.
Do you think they’re useful?
I think art is useful for many reasons. I mean, I don’t think that it’s healthy to look at fiction and say, “Fiction is a template for real life.”
We should be intentional about where we go with technology, and move on from ridiculous ideas. Let’s [embrace] stuff that’s actually new. Let’s dream up things that will be better for the world.
How should we value new technologies?
I would like to see people stop thinking about technology as something magical. My new research project is about archiving digital journalism. It’s very easy to think about a book being in a library. Because there’s no physical object in digital journalism, it becomes much more abstract to think about its location in a digital repository. Digital items are far more ephemeral than libraries or museums. If we can start talking about digital objects in the same way we talk about physical objects, that would improve the discourse. It would be easier for people to conceptualize what is happening with digital objects.
Recently, Amazon’s facial recognition software incorrectly matched 28 members of Congress with criminal mugshots. This book explores a lot of questionably implemented tech with lofty claims. What are your thoughts on this case? Can we better direct software’s foray into the unexplored?
We have known for years that facial recognition software doesn’t work well, especially for dark-skinned people. It is also much better at identifying men than women. There’s this amazing project at MIT called Gender Shades by Joy Buolamwini. She’s a young woman with dark skin who noticed that facial recognition training data was not diverse enough, and the systems were not learning to process women and people of color as well as light-skinned men.
There are many ways in which technology is failing to adequately serve people. What kind of world would we be living in if soap dispensers that didn’t work for dark-skinned people were more widely distributed? The little things like soap dispensers matter because they’re symptoms of subtler and incredibly damaging larger issues, like facial recognition software. Let’s put the brakes on something that is not a positive social innovation.
In your ideal world, where do humans fit in, and how much technology is there?
There is no appropriate amount of technology. It’s all about using the right tool for the task. What is the value added for introducing a tool that’s not fit for the task? For example, why would you need a computer to teach someone how to scramble an egg?
Infrastructure is key. Unless you’re willing to spend money on infrastructure, high-tech programs in schools are not going to work. People imagine that using technology makes things better, faster, and cheaper, but implementing technology in a school usually ends up being far more expensive. It’s not a good bet for your average, impoverished public school to replace things that work with technology that doesn’t. When you do the math, technology can end up being a lot more expensive than you might think.
How can the general public educate themselves about “technochauvinism” and become tech literate?
Take things apart. I have a lot of hobbies that involve making and breaking things. That’s one of the ways that I keep in touch with the reality of how things are put together in the world. If you knit, cook, garden, or build things from wood, you stay in touch with how things are constructed. That ethos of “making” is something I take with me into the realm of building computer projects.