Big Ideas: Algorithms and Society

Joel McCance
Pandera Labs
Published Apr 30, 2018 · 5 min read
An algorithm’s rendering of a three-dimensional fractal. (Image credit: By Ondřej Karlík [CC BY-SA 3.0], from Wikimedia Commons)

Every so often, we like to step back from our day-to-day work to focus on the big ideas. We select a topic that excites us, invite the smartest, most insightful people we know, and get together to talk about the big picture.

We call this the “Big Ideas” series. This week, our big idea was “Algorithms and Society”.

“Algorithms” is a pretty big area, since an algorithm is really just a rigorously defined procedure for solving a particular problem. There are algorithms for computer vision, for generating fractals, and for navigating terrain. But for this conversation, we focused on a specific subset of algorithms: those that automate decision-making.
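To make that definition concrete: in the classic sense, an algorithm is just a precise recipe. A textbook example is binary search, a fixed sequence of steps for locating a value in a sorted list:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # check the midpoint of the remaining range
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # target must be in the upper half
        else:
            hi = mid - 1              # target must be in the lower half
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))  # 3
```

Every step is fully specified, which is exactly what makes it an algorithm rather than a judgment call.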

Over the past decade or more, automated decision-making has spread everywhere. Algorithms decide which posts we see on social media and the articles we see in our news apps. They give us recommendations on music, movies, television shows, and books. Companies use them to decide what sort of advertising to show to whom, guide hiring, and even punish people for suspected return fraud.

These sorts of algorithms — and the data that fuel them — are often the “secret sauce” of the companies that develop them. These companies are understandably reluctant to share the details of how they work, which makes it hard for us to judge whether they’re operating fairly and in our best interests.

Below are some of the themes that came up in our lively conversation.

We Kind of Like Having Algorithms At Our Beck and Call

An algorithm in a … kind of tube-ish shape. (Image credit: www.nextdayblinds.com)

While there are definitely cons to the pervasive use of algorithms, the group generally agreed that there’s also a lot to like. With so much media out there, for example, it’s great to have services like Spotify, Netflix, or Goodreads suggesting new things we might like. It’s also very convenient (if perhaps a little creepy) when Google can help us find the name of “that one Japanese restaurant we went to that one time”. And if algorithms can help reduce retail fraud, we can benefit from lower prices.

These are all nice things to have, and it didn’t sound like anyone was eager to give them up. Nobody was under any illusions about the cost, either: we’re knowingly trading data about ourselves in exchange for these services.

We’re Worried That Algorithms Automate Discrimination

The group often returned to issues of bias in algorithmic decision-making. This matches the response that Stack Overflow got in their 2018 Developer Survey: nearly 30% of respondents were concerned about algorithms making important decisions, and data scientists and machine-learning experts were nearly one-and-a-half times as likely as other respondents to be concerned about issues of fairness in AI.

The core issue here is that these sorts of algorithms are ultimately implemented by human beings and often trained on decisions made by human beings. Historically, humans have not had a great track record when it comes to fairness, and algorithms simply learn how to repeat those mistakes.
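A toy sketch can show how this happens even when a protected attribute is never an explicit input. The data and rules below are entirely hypothetical: imagine past reviewers consistently favored candidates from a particular school, and a model is trained on their decisions.

```python
# Hypothetical historical hiring decisions: (years_experience, from_school_x, hired).
# Past reviewers favored candidates from school X regardless of experience.
history = [
    (5, True, True), (2, True, True), (1, True, True),
    (5, False, False), (6, False, False), (2, False, False),
]

def train(rows):
    """'Train' by memorizing the historical hire rate for each group."""
    rates = {}
    for years, school_x, hired in rows:
        stats = rates.setdefault(school_x, [0, 0])
        stats[0] += hired   # count of hires in this group
        stats[1] += 1       # count of candidates in this group
    return {group: hires / total for group, (hires, total) in rates.items()}

def predict(model, school_x):
    """Recommend hiring whenever the learned rate for the group exceeds 50%."""
    return model.get(school_x, 0.0) > 0.5

model = train(history)
# Two candidates with identical experience get opposite outcomes, because
# the model faithfully reproduces the historical pattern.
print(predict(model, True))   # True
print(predict(model, False))  # False
```

Nothing in the code is malicious; the model is simply doing its job of matching the training data, and the training data encodes the bias.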

While everyone agreed there were problems here, opinions were divided about what to do about it. Some wanted to let the market sort things out, at least for lower-stakes scenarios like return policies. For bigger things, like loans or hiring, we talked about the need for some form of auditing.

Holding Algorithms Accountable Is Hard

Modern machine-learning algorithms do not keep rigorous records. (Image credit: U.S. Air Force photo by Airman 1st Class Danielle Conde/Released)

But accountability is complicated by the multiple levels of indirection involved. Company A develops the algorithm as a product and licenses it to Company B, which trains it on data purchased from Company C and then sells it as a service to Company D. When it comes to light that this product is guiding Company D to make discriminatory decisions, who is to blame?

Even worse, with modern machine-learning algorithms we may not even be able to identify why discrimination occurred. At this point we’re writing programs that train other programs to make decisions, and even their implementers couldn’t tell you exactly why the resulting system acts the way it does.

One suggestion that came up was that certain algorithms should have to be deterministic. If something is important, we don’t want to leave the result to chance, and we want to be able to explain exactly why a decision was made. It’s conceivable that in the future we may need legislation about what kinds of algorithms are allowed in which contexts.
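A minimal sketch of what “deterministic and explainable” might look like in practice: fixed rules, no randomness, and every outcome comes with the exact reasons it was reached. The rule set and thresholds below are made up purely for illustration.

```python
def loan_decision(income, debt, credit_score):
    """Return (approved, reasons) under a hypothetical fixed rule set."""
    reasons = []
    if credit_score < 600:
        reasons.append(f"credit score {credit_score} below 600")
    if debt > income * 0.4:
        reasons.append("debt exceeds 40% of income")
    approved = not reasons        # approve only if no rule was violated
    if approved:
        reasons.append("all checks passed")
    return approved, reasons

approved, why = loan_decision(income=50_000, debt=30_000, credit_score=640)
print(approved, why)  # False ['debt exceeds 40% of income']
```

Running the same inputs always yields the same output, and the returned reasons form an audit trail. That auditability is precisely what opaque learned models make difficult.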

The Future of Algorithms Is Kind of Dark

Friend Computer just wants you to be happy. (Image: By Dirk Ingo Franke [CC BY 3.0], from Wikimedia Commons)

For the last part of our conversation, we discussed an excellent Twitter thread by François Chollet, an artificial intelligence researcher at Google. I recommend reading the entire thing, but I’ve extracted some highlights for you below.

Effectively, we’re building self-tuning, personalized Skinner boxes, an idea that made a lot of us nervous. This sort of manipulation has existed in some form for centuries, but this new version is significantly more potent, and it’s definitely something we should be keeping an eye on.

Wrap-up

Algorithms are, ultimately, extremely useful tools: too valuable to stop using, but too powerful to use without scrutiny. Going forward, it’s important that we keep asking questions about how these algorithms factor into our lives and how we allow them to be used in society. Where does the data come from? How do we ensure people are being treated fairly and that these systems act in our best interest?

Many thanks to everyone who came by, and we look forward to seeing you at our next “Big Ideas” meetup!

At Pandera Labs, we’re always exploring new ways to build products and iterate on our engineering processes, and we value sharing our findings with the broader community as our company and our technology evolve together. To reach out directly about the topic of this article or to discuss our offerings, visit us at panderalabs.com.
