Machine Learning Reductions & Mother Algorithms, Part I: An Introduction

Max Pagels
Nov 5, 2018 · 4 min read

In the culinary field, there are five so-called “mother sauces”: béchamel, velouté, espagnole, tomato & hollandaise. According to this categorisation, any other sauce you make is, fundamentally, a derivation of one or more of the mother sauces. Take béarnaise, for example: it’s hollandaise with some additional flavouring (tarragon and shallots).

The five mother sauces. Image credit: Food52

In culinary education, learning to make mother sauces is the first step towards becoming a master saucier. The framework helps you to break down the complexity of making a new sauce by breaking down the process into manageable sub-problems, solving them, and composing the results. The same divide-and-conquer approach is used by humans so often that it’s hard to imagine a domain in which we don’t use it.

In the field of machine learning, we don’t have sauces, but we do have two learning algorithms so fundamental that I’d call them mother algorithms: simple regression and binary classification. As it turns out, lots of non-trivial machine learning problems can be broken down, or reduced, in a way that they are solvable using these mother algorithms. In this blog series, I’ll walk through some of the reductions I think practitioners should be aware of.

Two variants of ML “mother algorithms”: linear regression & logistic regression (for binary classification). Image credit: Machine Learning Plus

Why would you want to use ML reductions?

Reductions in machine learning have long been an interest of mine because I believe them to embody the ideal way we should approach problem-solving. When we tackle new machine learning problems, we have two options:

  1. Design new algorithms

  2. Reuse existing algorithms by reducing the new problem to one they already solve

The second option is the better one whenever it’s available. Much of science entails building on past, peer-reviewed research to answer increasingly difficult questions. This allows us to advance our knowledge rapidly in a way that remains scientifically sound. If we can find a way to solve a new ML problem by reusing existing algorithms, we’re not only saving time & making fewer mistakes: we’re adhering to good scientific practice. Reductions also help us understand how different techniques relate to each other, which in turn gives us more options to tackle new problems.

How ML reductions work

To solve an ML problem using reductions, the general approach is as follows:

  1. Identify the learning problem L (usually either classification or regression)

  2. Transform L, the dataset, or both into a problem L′ that a mother algorithm can solve

  3. Solve L′ using that algorithm

  4. Map the solution of L′ back to a solution of the original problem L

Intuitively, what we want to do is transform our original problem, our dataset, or both in such a way that they are compatible with an existing mother algorithm, and use that as the solution. If the framework seems straightforward, that’s because it, well, is. There’s nothing especially complex going on here.
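To make the transform step concrete, here’s a minimal sketch of probably the best-known reduction of this kind: one-vs-rest, which turns a multiclass problem into one binary problem per class (“is this example class k, or not?”). The toy data and the deliberately simple stand-in binary learner below are my own illustration, not something prescribed by the reduction itself; any off-the-shelf binary classifier could take its place.

```python
import numpy as np

def train_binary(X, y):
    """A deliberately simple binary learner: score by distance to the two
    class means. It stands in for any off-the-shelf binary classifier."""
    mu_pos, mu_neg = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    w = mu_pos - mu_neg
    b = -w @ (mu_pos + mu_neg) / 2.0
    return w, b

def one_vs_rest_fit(X, y, n_classes):
    # The reduction: one binary problem per class ("is it class k?")
    return [train_binary(X, (y == k).astype(int)) for k in range(n_classes)]

def one_vs_rest_predict(models, X):
    # Map the binary solutions back: predict the class with the highest score
    scores = np.stack([X @ w + b for (w, b) in models], axis=1)
    return scores.argmax(axis=1)

# Toy 3-class dataset: three well-separated clusters
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.concatenate([centers[k] + 0.1 * rng.standard_normal((20, 2)) for k in range(3)])
y = np.repeat(np.arange(3), 20)

models = one_vs_rest_fit(X, y, n_classes=3)
accuracy = (one_vs_rest_predict(models, X) == y).mean()
```

Note that the multiclass learner never exists as a monolithic algorithm: it’s entirely assembled out of calls to the binary learner, which is exactly what a reduction is.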

What kind of ML reductions are possible?

ML reductions are an active and relatively new area of research, with new results being published on a regular basis. One of the most prominent researchers in this space is John Langford, whose publications and website I highly recommend.

As it stands now, for machine learning practitioners, it’s good to know that it’s already possible to reduce at least the following:

  • Importance-weighted binary classification to binary classification

  • Multiclass classification to binary classification

  • Cost-sensitive multiclass classification to importance-weighted binary classification
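The first of these gives a flavour of how such reductions work: importance-weighted classification can be reduced to plain classification by rejection sampling, keeping each example with probability proportional to its weight and then training any ordinary, unweighted learner on what survives. Below is a minimal sketch; the synthetic data and the specific weights (positives counting 4× as much as negatives) are illustrative choices of mine.

```python
import numpy as np

def rejection_sample(X, y, weights, rng):
    """Reduce importance-weighted classification to plain classification:
    keep each example with probability w_i / max(w). An unweighted binary
    learner trained on the accepted examples then respects the weights
    in expectation."""
    w = np.asarray(weights, dtype=float)
    keep = rng.random(len(w)) < w / w.max()
    return X[keep], y[keep]

rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 3))
y = (X[:, 0] > 0).astype(int)

# Illustrative weights: positive examples matter 4x as much as negatives
weights = np.where(y == 1, 4.0, 1.0)

Xs, ys = rejection_sample(X, y, weights, rng)

# The sampled dataset is smaller, and its class balance now reflects the
# weights: positives are all kept, negatives survive ~25% of the time
pos_fraction = ys.mean()
```

The elegance here is that the downstream learner needs no notion of weights at all: the reduction pushes that complexity into the dataset transform.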

Some ML reductions are well-known; others aren’t immediately apparent. The composable nature of reductions is precisely what makes them powerful: many problems are solvable using a single reduction, but we can solve even more complex problems by stacking reductions one after the other. In this series, we’ll cover some of the most useful reductions, and touch upon other nifty tricks besides (hashing & the one classifier/regressor trick).

Part II: Multiclass to Binary Classification

A special thanks to John Langford and the “Reduction Approach to Machine Learning” course he held at the University of Helsinki back in 2006, for sparking my interest in this field of ML.

The Hands-on Advisors

Sights and sounds from the Hands-on Advisors at Fourkind.

Max Pagels

Written by

Machine Learning Partner at Fourkind

