The Master Algorithm

Tom Connor · Published in 10x Curiosity · 3 min read · May 20, 2020

Understanding the main schools of machine learning algorithms

One of the most useful summaries I have read to introduce the concepts of data analytics is "The Master Algorithm", Pedro Domingos's wonderful primer on machine learning.

Domingos writes:

Rival schools of thought within machine learning have very different answers to these questions… Symbolists view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic. Connectionists reverse engineer the brain and are inspired by neuroscience and physics. Evolutionaries simulate evolution on the computer and draw on genetics and evolutionary biology. Bayesians believe learning is a form of probabilistic inference and have their roots in statistics. Analogizers learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimization


Jesus Rodriguez writes further about each tribe:

  • The Symbolists: This group of machine learning practitioners focuses on the premise of inverse deduction. Instead of the classical model of starting with a premise and looking for the conclusions, inverse deduction starts with a set of premises and conclusions and works backward to fill in the gaps.
  • The Connectionists: This subset of machine learning is one of the most well known, thanks to its focus on reverse engineering the brain. The most famous example of the connectionist approach is what today we call "Deep Learning". At a high level, this approach is based on connecting artificial neurons in a neural network. Connectionist techniques are very effective in areas such as image recognition and machine translation.
  • The Evolutionaries: This machine learning discipline focuses on applying the idea of genomes and DNA in the evolutionary process to data processing. In essence, evolutionary algorithms will constantly evolve and adapt to unknown conditions and processes.
  • The Bayesians: Another well-known group within machine learning, the Bayesians focus on handling uncertainty using techniques like probabilistic inference. Vision learning and spam filtering are some of the classic problems tackled by the Bayesian approach. Typically, Bayesian models will take a hypothesis and apply a type of “a priori” thinking, believing that there will be some outcomes that are more likely. They then update a hypothesis as they see more data.
  • The Analogizers: This machine learning discipline focuses on techniques to match bits of data to each other. The most famous analogizer model is the "nearest neighbor" algorithm, which can give results comparable to neural network models. Probably the most famous example of this type of machine learning is the Amazon or Netflix recommendation: "If you have watched/bought this, you will probably like…"
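To make the evolutionary idea concrete, here is a minimal sketch (not from the book; the problem and all names are illustrative) of a genetic algorithm evolving bit-strings toward all 1s, the classic "OneMax" toy problem. The genome, fitness function, and rates are assumptions chosen for brevity.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def fitness(genome):
    # "Fitter" genomes have more 1 bits
    return sum(genome)

def mutate(genome, rate=0.1):
    # Randomly flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parent genomes together
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Random starting population of 30 genomes, 20 bits each
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Keep the fitter half as parents; breed and mutate replacements
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

print(fitness(max(population, key=fitness)))  # climbs toward the maximum of 20
```

Nothing tells the algorithm what a "good" genome looks like; selection pressure alone adapts the population toward higher fitness, which is the evolutionary tribe's core idea.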
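The Bayesian tribe's "start with a prior, update as data arrives" loop can be sketched with a toy naive Bayes spam filter. This is my own illustration, not Domingos's code, and the word lists are made up:

```python
from collections import Counter
import math

# Hypothetical training data: token lists from "spam" and "ham" messages
spam_docs = [["win", "money", "now"], ["free", "money", "win"]]
ham_docs = [["meeting", "tomorrow"], ["project", "update", "tomorrow"]]

vocab = {w for d in spam_docs + ham_docs for w in d}

def word_probs(docs):
    counts = Counter(w for d in docs for w in d)
    total = sum(counts.values())
    # Laplace smoothing: unseen words get a small, nonzero probability
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

spam_p, ham_p = word_probs(spam_docs), word_probs(ham_docs)
prior_spam = len(spam_docs) / (len(spam_docs) + len(ham_docs))

def spam_score(words):
    # Posterior log-odds: the prior belief, updated by each word's evidence
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in words:
        if w in vocab:
            log_odds += math.log(spam_p[w] / ham_p[w])
    return log_odds

print(spam_score(["free", "money"]))        # positive: leans spam
print(spam_score(["meeting", "tomorrow"]))  # negative: leans ham
```

The "a priori" thinking described above is the `prior_spam` term; every observed word then nudges the hypothesis one way or the other, which is probabilistic inference in miniature.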
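And the analogizers' nearest-neighbor recommendation can be sketched in a few lines. The ratings below are invented for illustration; a real recommender would use far richer similarity measures:

```python
# Hypothetical film ratings: each user is a point in "rating space"
ratings = {
    "alice": {"matrix": 5, "amelie": 1, "inception": 5},
    "bob":   {"matrix": 1, "amelie": 5, "inception": 2},
    "carol": {"matrix": 4, "amelie": 2, "inception": 4},
}

def distance(a, b):
    # Euclidean distance over the films both users rated
    shared = set(a) & set(b)
    return sum((a[f] - b[f]) ** 2 for f in shared) ** 0.5

def nearest_neighbor(target, others):
    # The most similar user is the closest point in rating space
    return min(others, key=lambda u: distance(ratings[target], ratings[u]))

print(nearest_neighbor("carol", ["alice", "bob"]))  # → alice
```

Carol's ratings sit much closer to Alice's than to Bob's, so a "people like you also liked…" recommendation would draw on Alice's watch history. That similarity judgment, extrapolated to new items, is the analogizer approach in its simplest form.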

Domingos suggests some excellent resources for learning more:

Online Courses

Texts

Let me know what you think; I’d love your feedback. If you haven’t already, sign up for a weekly dose just like this.

Get in touch… — linktr.ee/Tomconnor




Always curious - curating knowledge to solve problems and create change