Algorithms to Live By — Brian Christian & Tom Griffiths [Book 3/52]

Samia Haimoura
Feb 16 · 5 min read

Genre: Science/Strategy

Disclaimer: This book caused me to 1. miss many train connections, hence come home late or fail to attend meetings on time on multiple occasions (with a valid excuse that triggers yet another 10 minutes of elaborating on the book’s amazing concepts if the other person’s curiosity is sparked), and 2. become ever more obsessed with computational neuroscience.

What can I say about this book, except that I can already predict it will rank amongst my favourite books of all time? When you hit such a record early in the year, it sets you up for fierce competition to choose wisely afterwards; otherwise you run the risk of literary and intellectual disappointment.

This book is essentially about computer science algorithms and how we intuitively, and most often unknowingly, use them to make decisions in our lives: choosing your next apartment or wife, scheduling your day, deciding whether to take a date to a restaurant you’ve already been to or to try a new one at your own risk, how to optimize memory, why we ‘forget’ certain things as we age and not others, why it’s best not to weigh every possible factor and instead penalize complexity, why randomness is sometimes the optimal solution, and how to deal with situations that lend themselves to paradoxical incentives.

The authors break 11 computer science concepts down into intuitive explanations of how they work, where they came from, and some historical (and at times apocryphal) anecdotes behind them.

1 — Optimal Stopping
Suppose you’re on the hunt for an apartment to rent or buy. You have a number of viewings lined up: you visit the first one and you love it, but you decide not to settle and keep looking in the hope that you’ll find a better one. You iterate this over the following apartments, but as time goes by you realize you have passed on great opportunities (which are no longer available) and start stressing that the golden ones may already be behind you. Moving forward, you are taking a risk that will either work in your favor or against you.

So how do you maximize your chances of renting/buying the best apartment you can get? Optimal stopping says to look at (and pass on) the first 37% of your options, then commit to the next one that beats everything you’ve seen so far.
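To make this concrete, here is a quick Python sketch of my own (not from the book) that simulates the 37% rule on randomly ordered apartments; the number of apartments and the scoring are purely illustrative.

```python
import random

def secretary_rule(candidates, look_fraction=0.37):
    """Reject the first 37% of candidates, then take the first one
    that beats the best seen so far (falling back to the last one)."""
    n = len(candidates)
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score
    return candidates[-1]  # forced to take the final option

# Rough check: how often does the rule land on the single best apartment?
trials = 10_000
wins = 0
for _ in range(trials):
    scores = random.sample(range(1000), 20)  # 20 apartments, random quality
    wins += secretary_rule(scores) == max(scores)
print(f"picked the best option {wins / trials:.0%} of the time")  # roughly 37%
```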

2 — Sorting
It’s crazy how this chapter explained in simple terms (starting from sorting my own closet) how this principle formed the basis of the internet (e.g. Google Search), and the algorithmic processes that defined the best strategies for it. Topics discussed here include various mathematical ways of sorting, from Big O notation to the classic sorting algorithms (bubble, insertion, merge and quick).
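For a flavour of what these algorithms look like, here is a minimal merge sort, one of the O(n log n) algorithms the book mentions (my own sketch, not the authors’ code):

```python
def merge_sort(items):
    """Divide-and-conquer sort: split the list, sort each half, merge them."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```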

3 — Explore/Exploit
This chapter discusses the two approaches of exploring (gathering information) versus exploiting (using information), the tradeoffs made in each situation, and the balance between returning to favourite experiences and trying new ones, keeping in mind the length of the interval over which we plan to enjoy them.
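The classic formalization of this tradeoff is the multi-armed bandit. Below is a small epsilon-greedy sketch of my own (a simple strategy in this space, not necessarily the one the book focuses on), where the “arms” could be restaurants with unknown hit rates:

```python
import random

def epsilon_greedy(payout_probs, pulls=10_000, epsilon=0.1):
    """With probability epsilon explore a random arm; otherwise exploit
    the arm with the best observed average payout so far."""
    counts = [0] * len(payout_probs)
    totals = [0.0] * len(payout_probs)
    reward = 0.0
    for _ in range(pulls):
        if random.random() < epsilon or not any(counts):
            arm = random.randrange(len(payout_probs))  # explore
        else:
            arm = max(range(len(payout_probs)),
                      key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)  # exploit
        payout = 1.0 if random.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payout
        reward += payout
    return reward

# Three "restaurants" with unknown hit rates; the strategy mostly finds the best one.
print(epsilon_greedy([0.2, 0.5, 0.7]))
```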

4 — Caching
This chapter was about memory hierarchy: how computers retrieve data from memory, and how this affects the speed at which they can operate (arguably this applies to humans too). The authors cover three cache eviction policies (Random, FIFO, LRU), with the last one arguably the most effective.
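Here is a tiny LRU cache sketch of my own (the household example is made up) to show the idea of evicting whatever has gone untouched the longest:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evict the item untouched for the longest time."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("keys", "hallway"); cache.put("wallet", "desk")
cache.get("keys")                  # touching "keys" keeps it warm
cache.put("passport", "drawer")    # "wallet" is evicted, not "keys"
print(list(cache.items))           # ['keys', 'passport']
```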

5 — Scheduling
This chapter made me rethink all the productivity apps I ever used, which at a certain point I suspected were just additional noise I had introduced into my daily routine. The truth is, scheduling is also a science. This chapter is about how to optimize focus time to get the most things done, reduce outside noise, and measure output using the sum of completion times. Some really interesting concepts are introduced here, such as thrashing, interrupt coalescing, preemption and uncertainty.
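A quick illustration of the “sum of completion times” idea (my own numbers): doing the shortest tasks first minimizes that sum, which is why the to-do-list ordering matters at all.

```python
def total_completion_time(durations):
    """Sum of completion times: each task's finish time, added up."""
    elapsed, total = 0, 0
    for d in durations:
        elapsed += d
        total += elapsed
    return total

tasks = [8, 1, 3]                              # task lengths in hours (illustrative)
fifo = total_completion_time(tasks)            # do them in the order they arrived
spt  = total_completion_time(sorted(tasks))    # Shortest Processing Time first
print(fifo, spt)  # 29 vs 17: shortest-first minimizes the sum of completion times
```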

6 — Bayes’ Rule
I only learned about Bayes’ rule in my statistics class, and apart from the formula, I didn’t think it through any further. This chapter explains in a beautiful way the importance of setting good priors to predict future events, the use of the Copernican Principle in predicting how long something is likely to last, and the three basic prediction rules (multiplicative, average, additive), each tied to a different family of prior distributions, and knowing which to use for which type of prediction.
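A rough sketch of the three rules as I understood them (the specific constants here are illustrative placeholders, not the book’s numbers):

```python
def multiplicative_rule(observed_so_far, factor=2.0):
    """Power-law-style prior (Copernican Principle): predict a constant
    multiple of what you have observed so far."""
    return observed_so_far * factor

def average_rule(observed_so_far, population_mean=76):
    """Normal-style prior (e.g. lifespans): predict roughly the population
    average, nudged upward once the observation passes it."""
    return max(observed_so_far, population_mean)

def additive_rule(observed_so_far, constant=5):
    """Memoryless-style prior: predict a constant amount more,
    no matter how long it has already been."""
    return observed_so_far + constant

# A show has run 4 years so far; the Copernican guess for its total run:
print(multiplicative_rule(4))  # about 8 years
```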

7 — Overfitting
Machine learning practitioners will love this chapter. It makes a case against complexity, explaining that it is usually wiser not to include all available information, and that incomplete information can actually work in our favor. Fitting the data too closely mostly ends up modelling noise, and a simple baseline, sometimes even a random prediction, can be more effective than one into which a great deal of thought has been put.
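Here is a small sketch of my own showing the effect: fit the same noisy points with a straight line and with a wiggly degree-9 polynomial, then test both on fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2 * x + rng.normal(scale=0.15, size=x.size)    # true signal is a line plus noise

# Fit a simple line and an overly flexible degree-9 polynomial to the same points.
simple_fit  = np.poly1d(np.polyfit(x, y, deg=1))
complex_fit = np.poly1d(np.polyfit(x, y, deg=9))

# Evaluate both on fresh data drawn from the same underlying line.
x_new = np.linspace(0, 1, 100)
y_new = 2 * x_new
print("simple model error: ", np.mean((simple_fit(x_new) - y_new) ** 2))
print("complex model error:", np.mean((complex_fit(x_new) - y_new) ** 2))
# The degree-9 fit hugs the training noise and usually does worse out of sample.
```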

8 — Relaxation
This chapter echoes the former one in that there is wisdom in actually thinking less. Sometimes the possibilities are so endless that we have neither the computational capacity nor the time to solve for the optimal solution. When the exact problem is intractable, it makes sense to relax the rules and solve an easier version of it first, then work back towards the real one by reintroducing penalties and constraints as we go.

9 — Randomness
Or when to leave it to chance. Some very important concepts and methods are discussed here: the Monte Carlo method, Rabin’s algorithm (randomized primality testing), the Metropolis algorithm, simulated annealing, etc. This chapter really makes the case for when randomness can lead to a better outcome than calculated optimization.
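The Monte Carlo method is easy to show in a few lines; here is the standard dart-throwing estimate of pi (my own sketch, not an example from the book):

```python
import random

def estimate_pi(samples=1_000_000):
    """Monte Carlo method: throw random darts at the unit square and count
    how many land inside the quarter circle of radius 1."""
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # ~3.14, and more samples buy more accuracy
```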

10 — Networking
This chapter starts with a great introduction to how network protocols came about, from phone companies to the World Wide Web to voice, and their unintended consequences for social interactions and social anxiety. Really interesting computer science concepts are explained intuitively: how to avoid congestion (additive increase, multiplicative decrease), how to establish backchannels, and how to avoid bufferbloat.
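Additive increase, multiplicative decrease is simple enough to sketch; the toy simulation below (my own, with made-up parameters) shows the familiar sawtooth of a sending window growing gently and halving on a loss:

```python
def aimd(events, increase=1, decrease=0.5, start=1):
    """Additive Increase, Multiplicative Decrease: grow the sending window
    by a fixed step on success, cut it sharply when a packet is dropped."""
    window = start
    history = []
    for dropped in events:
        window = window * decrease if dropped else window + increase
        history.append(round(window, 1))
    return history

# False = packet acknowledged, True = packet dropped (congestion signal).
print(aimd([False, False, False, False, True, False, False]))
# [2, 3, 4, 5, 2.5, 3.5, 4.5] -- the sawtooth pattern
```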

11 — Game Theory
Finally, and a great way to end. This chapter starts with the famous Prisoner’s Dilemma, in which two individuals acting in their own self-interest may fail to reach the optimal outcome. Insightful mathematical concepts are introduced here (Nash equilibrium, dominant strategies, the tragedy of the commons, information cascades, mechanism design and infinite recursion), and the authors make the point that no matter how well intentioned some of our decisions may be, paradoxical outcomes will call for changing the rules of the game instead of trying to win the race to the bottom.
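A tiny sketch of the dilemma itself (illustrative payoffs, not the book’s exact numbers) makes the paradox visible: defection is each prisoner’s best response to anything the other does, yet mutual defection is worse for both than mutual cooperation.

```python
# Years in prison for (my_move, their_move); lower is better for me.
YEARS = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    10,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    5,
}

def best_response(their_move):
    """Whatever the other prisoner does, pick the move that costs me fewer years."""
    return min(("cooperate", "defect"), key=lambda mine: YEARS[(mine, their_move)])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect -> defection is the dominant strategy
# Yet mutual defection (5 years each) is worse than mutual cooperation (1 year each):
# the equilibrium is not the best collective outcome, hence mechanism design.
```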

This article is part of my 52 Books in 2020 Challenge. If you found this useful and would like to stay up-to-date with what I’m reading this year, don’t hesitate to follow my channel.

Samia Haimoura

Written by

Tech Entrepreneur @SEON, Data Science/AI Consultant, Duke University - Fuqua grad ‘19. Blogging @ gradientdissent.org.
