The Black Box

Algorithmic transparency is a hot topic in tech today. But is transparency the right standard to be setting?

In 2012, Target sent a teenage girl in Minneapolis a mailer full of coupons for products like baby clothes, formula, and cribs. Based on the girl’s purchase history, Target had figured out that she was pregnant before her own parents did.

Listen to the second episode of the Consequential podcast, The Black Box.

When we say that “Target” figured it out, what we mean is that an algorithm did. In a partnership with Target, statistician Andrew Pole pinpointed twenty-five products that, when purchased together, might indicate that a consumer is pregnant. So, unscented lotion — that’s fine on its own. But unscented lotion and mineral supplements? That shopper might be getting ready to buy a crib.
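
Target never published Pole’s model, so the details here are invented, but the core idea (scoring a shopper’s basket against a list of indicator products and flagging anyone above a threshold) fits in a few lines of Python. The weights and the threshold below are hypothetical:

```python
# A minimal sketch of purchase-based prediction, loosely inspired by the
# Target story. The weights and threshold are invented for illustration;
# Target's actual model was never made public.

# Hypothetical weights: how strongly each product signals pregnancy.
INDICATOR_WEIGHTS = {
    "unscented lotion": 0.3,
    "mineral supplements": 0.4,
    "oversized bag of cotton balls": 0.2,
    "washcloths": 0.1,
}

def pregnancy_score(basket: list[str]) -> float:
    """Sum the weights of any indicator products found in the basket."""
    return sum(INDICATOR_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "mineral supplements", "shampoo"]
if pregnancy_score(basket) > 0.5:  # hypothetical threshold
    print("Send the baby-products coupon book.")
```

Even in this toy version you can see where false positives come from: any shopper who happens to buy lotion and supplements in the same trip clears the threshold, pregnant or not.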

We know what Target’s algorithm was taking into account when it jumped to that conclusion. But what happens when we don’t? While Target’s algorithm came to a correct conclusion, what happens when algorithms produce false positives? And what if those algorithms are making bigger decisions than which coupon book to send you, like whether you get hired for a job or whether you get approved for a line of credit?

The fact is, we don’t always know how artificial intelligence makes decisions. If we want to find out, we’re going to have to unpack the black box.

According to Kartik Hosanagar, professor of technology and digital business at the University of Pennsylvania, algorithms are already pervasive and will become even more central to decisions we’ll make going forward.

“Algorithms are all around us. When you go to an e-commerce website, you’ll see recommendations. That’s an algorithm that’s convincing you to buy certain products. Studies show that over a quarter of the choices we make on Amazon are driven by algorithmic decisions,” said Hosanagar. “On Netflix, an algorithm is recommending the media you see. If you use a dating app like Match.com or Tinder, algorithms are matching people and so they’re influencing who we date and marry.”

But algorithms aren’t just responsible for individual decision-making. In addition to making decisions for us, they’re making decisions about us.

“[Algorithms are] making life and death decisions for us. Algorithms are used in courtrooms in the U.S. to guide judges in sentencing and parole decisions. Algorithms are entering hospitals to guide doctors in making treatment decisions and in diagnosis. Really, they’re all around us,” said Hosanagar.

Algorithms can seem like tech magic, but conceptually they’re very simple. An algorithm is a set of instructions to be followed in a specific order to achieve specific results. You can think of a recipe as an algorithm: a set of instructions, specific order, specific results.
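
To make the recipe analogy concrete, here is about the smallest possible example in Python. The steps and ingredients are made up, but the shape is the whole point: ordered instructions in, a specific result out.

```python
# A recipe as an algorithm: fixed steps, fixed order, specific result.
def make_tea(water_ml: int, steep_minutes: int) -> str:
    water = f"{water_ml} ml of water, boiled"          # step 1: boil
    brew = f"tea steeped for {steep_minutes} minutes"  # step 2: steep
    return f"{brew} in {water}"                        # step 3: serve

print(make_tea(250, 3))
# tea steeped for 3 minutes in 250 ml of water, boiled
```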

Algorithms originated in mathematics; they’re how we do things like find prime numbers. But algorithms like Target’s are the kind that turn up in computer science. Essentially, these algorithms are programs set up to solve a problem by turning a specific input into a specific output. If we take a step back in history, this is more or less how computing started: we built machines capable of receiving data and processing it into something we could understand. When it comes to AI, this still holds mostly true. Algorithms use models of how to process data to make predictions about a given outcome. But sometimes, how an algorithm is using data to arrive at a particular prediction is genuinely difficult to explain.
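
The prime-number case is simple enough to write out in full. Here is trial division in Python, just to show the input-to-output shape described above: a specific input (a number) goes through a fixed sequence of steps and produces a specific output (prime or not).

```python
# Trial division: a classic mathematical algorithm. Specific input
# (a number n), fixed steps, specific output (is n prime?).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor, so n is not prime
        d += 1
    return True

print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A machine-learning model works the same way in principle: an input passes through a fixed sequence of steps to an output. But those steps may number in the millions and are learned from data rather than written by hand, which is a big part of why its predictions can be hard to explain.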

Algorithms, after all, just follow orders. They do what they’ve been designed to do. When you feed problematic data or problematic directions into an algorithm, it will follow those directions faithfully, for better or for worse. And we’ve seen first-hand how bad data can lead to disastrous results.

That’s not to say we should rely solely on human judgment, because human judgment is hardly infallible, either. As it stands, human decision-making is affected by significant prejudices, and that can lead to serious negative outcomes in high-stakes areas like hiring, health care, and criminal justice. Human decisions are also subject to human whims, which algorithms aren’t necessarily susceptible to.

When algorithms work well, they can offset or even help to overcome the kinds of human biases that pervade these sensitive areas of decision-making and promote greater equality. But that requires understanding how they’re making those decisions in the first place.

Molly Wright Steenson, professor of ethics and computational technologies at Carnegie Mellon University, believes that expecting an AI or robot to stop at any moment and explain what it’s doing is unrealistic. It’s not as easy as lifting a lid and looking inside.

“It’s not a matter of something explaining itself. It’s a matter of you having the information that you need so you can interpret what’s happened or what it means. And I think that if we’re considering policy ramifications, then this notion of interpretation is really, really important,” said Steenson.

In our latest episode of the Block Center’s Consequential podcast, you’ll hear our interviews with professors Hosanagar and Steenson, and we’ll dig deep into the intersections of algorithmic bias, transparency, and design. Who’s responsible for making sure things go right? And who’s accountable when things go wrong?

The Black Box is available now on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you listen to podcasts!

Related:

Professor Amelia Haviland, Director of the Block Center’s Artificial Intelligence & Analytics for Good Initiative, discusses the AI black box and the ethical utilization of AI in high-stakes decision-making.

CMU’s Block Center for Technology and Society
Consequential Podcast

The Block Center for Technology and Society at Carnegie Mellon University investigates the economic, organizational, and public policy impacts of technology.