When an algorithm isn’t…

Suresh Venkat
Oct 2, 2015 · 5 min read

--

The popular press is full of articles about “algorithms” and “algorithmic fairness” and “algorithms that discriminate (or don’t)”. As a computer scientist (and one who studies algorithms to boot), I find all this attention to my field rather gratifying, and more than a little terrifying.

What’s even more pleasing is that the popular explanation of an algorithm follows the definition we’ve been using since, well, forever:

An algorithm is a sequence of steps (the instructions), each of which is simple and well defined, that stops after a finite number of steps.
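
To make that concrete, here is what such a definition looks like in code. The example is my own choice, not part of the definition: Euclid’s algorithm for the greatest common divisor, one of the oldest algorithms on record.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: simple, well-defined steps that provably stop."""
    while b != 0:          # each pass through the loop is one step
        a, b = b, a % b    # a well-defined operation on the current state
    return a               # b reached zero after finitely many steps

print(gcd(252, 105))  # -> 21, the same answer every time it runs
```

Hold on to that “same answer every time” property; it is exactly what the rest of this piece is about.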

If we want a less intimidating definition of an algorithm, we can turn to the kitchen:

An algorithm is like a recipe. It takes “inputs” (the ingredients), performs a set of simple and (hopefully) well-defined steps, and then terminates after producing an “output” (the meal).

I’ve used this cooking analogy over and over again when explaining what an algorithm is: to non-technical members of my family, to lay people curious about what I do, and to students who come and ask me about algorithms.

And it works! It captures what’s important (well-defined instructions) and ignores irrelevant details (what language, what kind of computer, Mac or Windows, and so on). It’s also a successful analogy, because it has percolated into the popular understanding of algorithms and code.

The only problem is: it’s dead wrong, at least when trying to understand the bewildering universe of algorithms that collectively define machine learning, or deep learning, or Big AI: all the algorithms that are constantly in the news nowadays.

To understand why our common conception of an algorithm doesn’t work for machine learning, we should look at an actual recipe.

I (or at least my ancestors) come from the south of India, and if you’ve ever eaten Indian food you’ll have encountered sambar, a staple of South Indian cuisine that is essentially a spiced lentil soup.

Here’s my mother’s recipe for sambar.

Ingredients:

  • Split peas (1/2 cup)
  • Tamarind (0.5 in piece)
  • Turmeric (1 tsp)
  • Sambar masala (spice) (1 tsp)
  • Chopped vegetables (your choice) (1/2 cup)
  • Salt (to taste)

Steps:

  1. Pressure cook the split peas and turmeric (and the other ingredients separately).
  2. Remove the tamarind and squeeze its juice out.
  3. Boil the cooked split peas, vegetables, tamarind juice, salt and masala together.
  4. Heat a little oil, add 1/4 tsp mustard and cumin to the oil, and then pour it over the sambar when it crackles.

It has a set of inputs and an output. Each instruction is (relatively) well-defined, and the alert programmer will even detect a conditional (if this, then do that).

More importantly, I can make sambar over and over again with this recipe. If someone asks me how I did it, I can explain the procedure. And if it tastes strange, I can look over the ingredients and determine why.
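
In fact, if you squint, the recipe already is a little program. Here’s a playful, runnable sketch in Python (the function and everything inside it are my own invention, purely for illustration): fixed inputs, simple steps, one conditional, one output.

```python
def make_sambar(ingredients: list[str]) -> str:
    """My mother's recipe, loosely transcribed: same inputs -> same sambar."""
    pot = ["pressure-cooked split peas + turmeric",    # step 1
           "tamarind juice, squeezed out"]             # step 2
    pot += [i for i in ingredients
            if i not in ("split peas", "turmeric", "tamarind")]
    sambar = "boiled together: " + ", ".join(pot)      # step 3
    tempering_crackles = True  # step 4's conditional: pour only on the crackle
    if tempering_crackles:
        sambar += " + mustard-and-cumin tempering"
    return sambar

print(make_sambar(["split peas", "turmeric", "tamarind",
                   "sambar masala", "vegetables", "salt"]))
```

Nothing in that function depends on anything but its inputs, and that is precisely what makes it repeatable, explainable, and debuggable.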

But now let’s imagine that I’ve lost my sambar recipe, that I need it urgently for a party, and that I can’t reach my mother (the time difference to India is horrendous). Or maybe I just want to learn my own sambar recipe.

I have this vague memory of the ingredients involved. A crumpled piece of paper lying on the floor of my kitchen has the following ingredients scrawled on it:

  • tamarind
  • <illegible> masala
  • turmeric
  • split peas
  • potato? or is that tomato? or moringa?

I know they have to be combined in some form, but how? As a scientist, I decide to do an experiment. I devise a few different ways of making what I think is sambar, from the ingredients and my hazy memories of how to combine them and in what quantities. I make a few small batches (nano-sambar!), each carefully annotated with the procedure I used, and I present them to my friends when they come over.

Howls of horror! Loud retching sounds! The occasional grudging compliment! By the end of the evening I have a pretty good sense of which recipes appeared to work and which ones didn’t. Some of my friends even helpfully brought along samples of their own sambar for me to try.

So I try this again (I have very patient friends and they all crave sambar).

And again.

And again.

Eventually I have a pretty decent sambar recipe. For some reason I have to twirl around three times while holding the split peas and water before putting them on the stove, and the salt has to be ladled out using a plastic spoon, but the taste is great, so who cares!

But here’s the catch. I want you to imagine my three doppelgängers, one in Tokyo, one in Aarhus, and one in Rio. I want you to imagine the three of them trying to learn to make sambar in exactly the same way, each with their own circle of friends. Do you really think the four of us will end up with the same recipe? Down to the twirls?

I think not.

And that’s how a learning algorithm works. It isn’t a recipe. It’s a procedure for constructing a recipe. It’s a game of roulette on a 50-dimensional wheel that lands on a particular spot (a recipe) based entirely on how it was trained, what examples it saw, and how long it took to search. In each case, the ball lands on an acceptable answer, but these answers are wildly different, and they often make very little sense to the person executing them.
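
Here is a toy sketch of that roulette wheel in Python (entirely my own construction; real learning systems are vastly larger and cleverer). “Training” is a random search over the ten knobs of a deliberately overparameterized model: three differently seeded runs, like my three doppelgängers, all match the same five “tastings”, yet walk away with very different recipes.

```python
import numpy as np

def train(seed: int, steps: int = 20000):
    """'Learn a recipe': randomly jiggle 10 knobs until 5 tastings pass."""
    rng = np.random.default_rng(seed)
    X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # five tastings
    y = X ** 2                                     # the flavor we want
    basis = np.stack([X ** k for k in range(10)])  # 10 knobs, only 5 data points
    w = rng.normal(size=10)                        # a random starting recipe

    def loss(v):  # how far this recipe is from the target flavor
        return float(np.mean((v @ basis - y) ** 2))

    best = loss(w)
    for _ in range(steps):
        candidate = w + rng.normal(scale=0.05, size=10)  # jiggle the recipe
        if loss(candidate) < best:                       # keep it if it tastes better
            w, best = candidate, loss(candidate)
    return w, best

for seed in (0, 1, 2):  # three doppelgängers, three training runs
    w, err = train(seed)
    print(f"run {seed}: error={err:.6f}  first knobs={np.round(w[:3], 2)}")
# The errors all end up small; the learned "recipes" (the weights) do not match.
```

Which recipe you get depends on where the search started (the twirls), which perturbations it happened to try (the dinner parties), and how long it ran. And staring at the final weights tells you almost nothing about why they work.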

Yes, we could just “look at the code”, but what we see is a mysterious alchemy in which each individual step might be comprehensible, but any “explanation” of why the code does what it does requires understanding how it evolved and what “experiences” it had along the way. And even then, you’d be hard-pressed to explain why the algorithm did what it did. If you don’t believe me, try looking at a neural network sometime.

I spend my days thinking and talking about algorithmic fairness, and about when algorithms might discriminate. Most of the time, the reaction I get is “But algorithms are just code! They only do what you tell them.” What this tells me is that there’s a fundamental disconnect between how people think learning algorithms work and how they actually work, and thinking about this disconnect is what led me to write this piece.

Something I said off-the-cuff in an interview seems more and more true the more I think about it.

We’re trying to design algorithms that mimic what humans can do. In the process, we’re designing algorithms that have the same blind spots, unique experiences, and inscrutable behaviors that we do. We can’t just “look at the code” any more than we can unravel our own “code”.

--

Suresh Venkat

CS prof: interested in algorithms, geometry, and theoryCS