An Overview of Pedro Domingos's The Master Algorithm

Kenneth Robinson
Mar 19, 2019


The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, written by Pedro Domingos, aims to give the reader a foundation in machine learning from the perspective of each tribe that has researched the topic over the years. The goal of the book is to introduce each tribe/camp and point the reader toward the one "Master Algorithm": a learner that combines the strengths of every machine learning method while avoiding the weaknesses that come with each. While each tribe has a very good solution for the type of problem it can solve, the master algorithm would solve all types, without exception. The driving thesis of the book is "All knowledge — past, present, and future — can be derived from data by a single, universal learning algorithm" (Domingos 25).

In order to even begin to look for this master algorithm, it is essential to know where humanity's study of machine learning stands today. In The Master Algorithm, Domingos explains that there are five camps of machine learning theorists, each believing that its method is the key starting point for finding the one true solution to all problems. These five camps/tribes are the Symbolists, Connectionists, Evolutionaries, Bayesians, and Analogizers. Each of these camps solves a different type of problem really well; however, each also has very apparent weaknesses that prevent it from being useful in every case that arises in life. The master algorithm we are trying to discover would solve the problems that each camp solves well, without any of the holes and issues of each. As I am writing this article halfway through the book, I'll only be touching on the first three camps: the Symbolists, Connectionists, and Evolutionaries.

The first camp we are introduced to in The Master Algorithm is that of the Symbolists. The Symbolists' current best candidate for the Master Algorithm is inverse deduction: the process of creating sets of rules from data that has already been collected and using that record of the past to make inferences about the future. This strategy, however, has an obvious flaw: it assumes every future occurrence will look exactly like what we have already seen. Generalizing life to this extent will often miss new cases of randomness we haven't encountered before, and sets of rules that follow such a strict if/then flow will barely apply to many real-world problems, because the real world is not that black and white. An example of this weakness given in the book is the story of the inductivist turkey. If a computer sees in the data that a turkey has been fed at 9:00 a.m. every morning of its life, it will conclude that the turkey will be fed at 9:00 the next morning as well. But the next morning is Christmas, so instead of being fed, the turkey is slaughtered and served as Christmas dinner.
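To make the turkey's predicament concrete, here is a minimal sketch of that kind of naive rule induction in Python. It illustrates the general idea only; it is not Domingos's actual inverse-deduction procedure, and every name in it is invented for the example.

```python
# A toy "Symbolist" learner: if one outcome covers every past
# observation, it promotes that outcome to a universal if/then rule.
def induce_rule(history):
    outcomes = {outcome for _, outcome in history}
    if len(outcomes) == 1:
        return outcomes.pop()
    return None  # the data doesn't support a single universal rule

# 364 mornings of perfectly consistent evidence.
history = [(day, "fed at 9:00 a.m.") for day in range(1, 365)]
rule = induce_rule(history)
print(f"Induced rule: every morning, the turkey is {rule}")

# Day 365 is Christmas. The rule still predicts breakfast at 9:00,
# but the turkey is dinner: past regularity is not a future guarantee.
```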

In the second camp we visit in The Master Algorithm, we meet the Connectionists. My personal favorite camp thus far in the text believes that backpropagation is the closest idea we have to the master algorithm. Backpropagation is essentially a method of mimicking how the brain learns. Humans use their brains to store data and learn as we take in more and more information every day, so if we can mimic that process in a machine, we should be able to expedite how quickly it forms memories and learns from them. The brain learns by taking in what it sees and reads and storing it in memory; these memories are given certain weights and are reinforced by firing the same neurons when they are revisited. The weakness of this approach is that the human brain is very complex, and it may be a very long time before we figure out exactly how it works in its entirety. Currently, computer systems that attempt to replicate the brain have a hard time weighing very specific instances of randomness, where millions of parameters may differ only slightly between instances. This means they may fail to recognize different occurrences of something as the same thing. For example, if the computer sees a cat from the front, it may not recognize the cat when its head is turned the other way, or when it is wet, or dried and fluffy after a bath. Backpropagation also runs into the issue of returning "good" solutions rather than the "best" solutions, by finding local maxima rather than global ones. This flaw may be baked into these algorithms should backpropagation be the way to go.
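To show what "propagating error backwards" actually looks like, here is a minimal backpropagation sketch: a two-layer network learning XOR in plain NumPy. The layer sizes, learning rate, and epoch count are arbitrary choices for the example, not anything prescribed by the book.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: activations flow from input to output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through
    # each layer to assign blame to every individual weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge each weight downhill on the error
    # surface. This is the step that can settle into a local optimum.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should land close to [[0], [1], [1], [0]]
```

Depending on the random seed, the same network can settle into solutions of different quality, which is the local-versus-global problem in miniature.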

Finally, the last tribe we read about in the first half of The Master Algorithm is the Evolutionaries. Their current best candidate for the master algorithm is genetic programming. Genetic programming mimics evolution as Darwin observed it, with a similar survival-of-the-fittest/natural-selection mindset. The process runs millions of versions of a solution/algorithm at once (one generation), and the ones that produce the best results are kept for the next generation of solutions. New solutions are then reproduced almost "sexually" using crossover: combining what works from two parent algorithms to create a new algorithm to pass to the next generation. This variation over generations gets us closer and closer to the best algorithm for the problem at hand. The cool thing about generations of an algorithm is that computational time is much faster than a human life span, so generations last seconds rather than decades. One weakness of genetic programming is that a single algorithm, once produced, cannot improve itself; it has to wait for the next generation to produce a better algorithm with the help of another. Also, this type of genetic programming will produce unintended mutations in even the best versions of the solution. To use life once again as an example: humans are by far the most intelligent species on the planet, yet we all have blind spots built into our eyes near the area of our sharpest vision, because the optic nerve fibers in our eyes connect to the front of the retina rather than the back.
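Here is a bare-bones genetic algorithm capturing that select-crossover-mutate loop. Evolving a bit string toward all ones stands in for evolving a real program, and the population size, mutation rate, and generation count are arbitrary choices for the sketch.

```python
import random

random.seed(42)
GENES, POP_SIZE, GENERATIONS = 20, 50, 40

def fitness(individual):
    return sum(individual)  # more 1s == fitter

def crossover(mom, dad):
    cut = random.randrange(1, GENES)  # single-point crossover
    return mom[:cut] + dad[cut:]

def mutate(individual, rate=0.01):
    # Random bit flips: the unintended mutations that creep into
    # even the fittest solutions.
    return [g ^ 1 if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Natural selection: only the fitter half survives to breed.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"Best after {GENERATIONS} generations: {fitness(best)}/{GENES}")
```

Notice that no individual ever improves within its own lifetime in this loop; better solutions only appear when the next generation is bred, exactly the limitation described above.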

Now that we have explored the first three of the five camps of machine learning, I want to share the point in The Master Algorithm that stood out and resonated with me the most. The most important curve in the world — the S curve — shows up all throughout machine learning, and really all throughout life. The S curve is exactly what it sounds like: a curve in the shape of an S. It represents the idea that for nearly any process in life, progress starts slow and steady until a breakthrough occurs, resulting in an exponential boom of improvement, until a threshold is reached and whatever was improving cannot get much better. This mirrors the overarching theme of the book: we are currently looking for the breakthrough in machine learning, and once we find it, the improvement that follows will be unimaginable, booming exponentially until superintelligence/the singularity is achieved. The S curve resonates with me personally because I see it everywhere, from popping popcorn to life itself. In life, you learn at a constant rate throughout childhood, constantly taking in new information, until, from my point of view, you spend four years at college. I feel that I am at the breakthrough point of my life, and once I complete my degree, the exponential improvement I will see throughout my career is hard to imagine.
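The S curve the book celebrates is, in mathematical terms, the logistic function. One standard parameterization (my notation, not the book's) is:

```
f(x) = \frac{L}{1 + e^{-k(x - x_0)}}
```

Here x_0 marks the breakthrough point at the curve's midpoint, k sets how explosive the exponential boom is, and L is the ceiling where improvement finally flattens out.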

Reading The Master Algorithm has also made me think about some personal pain points in my life that progress in machine learning and artificial intelligence could help solve. With dementia/Alzheimer's disease running on both sides of my family, it is very likely I will be diagnosed in the later stages of my life. If advances in AI enlighten humanity as to how neural connections in the brain deteriorate over time, perhaps machines can then learn to better predict the disease, and help treat or prevent that cognitive decline before it begins affecting me. ScienceDaily reports that AI could predict cognitive decline leading to Alzheimer's disease within the next five years, which makes me very hopeful.

The Master Algorithm, when discovered, will change the world as we know it in ways we cannot even imagine. I'm very excited and a little scared for the future, but hopefully humanity will be able to control the superintelligence that Pedro Domingos's The Master Algorithm gives us the tools to invent.
