Why the “Best” are not as good as you think

A Monte-Carlo Simulation

Nuwan I. Senaratna
On Economics
Jan 31, 2023


An Experiment

Suppose you want to read a book. You have a choice of 10, and you want to pick one.

Now suppose a group of experts have read all these books and rank them according to some objective measure. Hence, the best book gets a score of 10; the worst gets 1.

Suppose you probabilistically pick a book, giving books with higher expert scores a higher probability of being picked, and books with lower scores a lower probability.

If you were to repeat this “experiment” many times, you would pick the book scored 10 a lot more often than the one scored 1. In fact, you are most likely to pick 10, followed by 9, then 8, and so on.
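The post does not include its code, so here is a minimal Python sketch of a single pick, under the assumption that the ten books are simply scored 1 to 10:

```python
import numpy as np

# Ten books scored 1 (worst) to 10 (best); pick one with probability
# proportional to its expert score, so book 10 is ten times as likely
# to be picked as book 1.
rng = np.random.default_rng()
scores = np.arange(1, 11)
probabilities = scores / scores.sum()
pick = rng.choice(scores, p=probabilities)
print(pick)
```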

Now suppose we repeat this whole batch of picks several times (a sort of “experiment of experiments”). Each time, we rank the books by how often they were picked, and we look at the average score of the book at the top of the list, the next one, and so on. As you might expect, the average score for the top rank (1) would be the highest, followed by the next, and so on, in strictly descending order.

I simulated this experiment of experiments 1,000 times on a computer and got the following average scores; no surprises here.
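Here is a minimal sketch of how such a simulation might look. The details are my assumptions, since the post does not show its code: each “experiment” is 100 independent score-weighted picks, books are ranked by how often they were picked, and the expert score at each rank is averaged over 1,000 repetitions.

```python
import numpy as np

N_BOOKS, N_PICKS, N_EXPERIMENTS = 10, 100, 1_000

rng = np.random.default_rng(0)
scores = np.arange(1, N_BOOKS + 1)
probabilities = scores / scores.sum()

totals = np.zeros(N_BOOKS)
for _ in range(N_EXPERIMENTS):
    # One experiment: many independent, score-weighted picks.
    picks = rng.choice(N_BOOKS, size=N_PICKS, p=probabilities)
    counts = np.bincount(picks, minlength=N_BOOKS)
    # Expert score of the book at each popularity rank (most-picked first).
    totals += scores[np.argsort(-counts)]

print(np.round(totals / N_EXPERIMENTS, 2))
# Averages fall in strictly descending order by rank, as the post reports.
```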

An Experiment of Experiments

Now, suppose a friend of yours also wants to read a book. Like you, he considers the expert scores above. Unlike you, he also considers your choice. In other words, if you have picked a certain book, he will give that book a higher probability.

To be concrete, let’s suppose your friend gives a 20% weight to the expert scores, and an 80% weight to your choice.

Now, suppose a second friend wants to read a book. Suppose he makes his choice based on the expert score, your choice, and your first friend’s choice.

Suppose we repeat this process until you and 99 friends have made 100 choices. Let’s assume all your friends use the 20/80 rule to weigh the expert scores against the choices made before theirs.

Now suppose we simulate the experiment of experiments as we did before, and look at the average scores of books by rank.
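Again, the code below is a minimal sketch under my own assumptions, since the post does not spell out the exact procedure: each friend blends the expert probabilities (20%) with the popularity of the choices made so far (80%), and books are then ranked by how often they were chosen across all 100 people.

```python
import numpy as np

def average_scores_by_rank(n_books=10, n_people=100, expert_weight=0.2,
                           n_experiments=1_000, seed=0):
    """Average expert score of the book at each popularity rank, over
    many repetitions of the sequential 100-person choice process."""
    rng = np.random.default_rng(seed)
    scores = np.arange(1, n_books + 1)
    expert_probs = scores / scores.sum()
    totals = np.zeros(n_books)
    for _ in range(n_experiments):
        counts = np.zeros(n_books)
        for person in range(n_people):
            if person == 0:
                probs = expert_probs                  # you: expert scores only
            else:
                social_probs = counts / counts.sum()  # popularity of earlier choices
                probs = (expert_weight * expert_probs
                         + (1 - expert_weight) * social_probs)
            counts[rng.choice(n_books, p=probs)] += 1
        # Rank books by how often they were chosen (most-chosen first).
        totals += scores[np.argsort(-counts)]
    return totals / n_experiments

print(np.round(average_scores_by_rank(), 2))
# Per the post: the top few ranks cluster together, closer to 7 than to
# 9 or 10, and rank 2 or 3 can edge out rank 1.
```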

What we see is interesting, perhaps surprising. The top 4 ranks are very similar. In fact, the average scores of the books ranked 2nd and 3rd are higher than that of the top (1st) ranked book.

Explaining the Experiment of Experiments

Why did this happen?

The cause is “luck”, or randomness, which is an innate part of the selection process.

Your choice has the lowest “luck” element. But since you don’t directly pick the book with the highest score, and instead use the scores as probabilistic weights, even your choice has a luck element. In theory, you could pick the lowest scored book.

The “luck” element from your choice propagates to your first friend’s choice, and accumulates with each subsequent friend’s choice.

Do our results change if we change the numbers: the number of friends (100 in our case), the number of books (10), or the weight given to the expert scores (20/80)? They do, quantitatively. But not qualitatively.
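One way to check this, reusing the average_scores_by_rank sketch above, is to vary the weight given to the experts and compare the top ranks:

```python
# Vary the weight given to the experts (0.2 in the post; 0.01 is roughly
# the "1/99 rule" mentioned at the end) and compare the top four ranks.
# The post reports quantitative, but not qualitative, changes.
for expert_weight in (0.5, 0.2, 0.01):
    top4 = average_scores_by_rank(expert_weight=expert_weight)[:4]
    print(expert_weight, np.round(top4, 2))
```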

So let me end with some qualitative conclusions:

Qualitative Conclusions

  1. The “Best” are not as good as you think. Often the 2nd or 3rd best might be better than the (1st) best.
  2. The “Best” are not the “best” (1st), not only relatively (i.e., compared to the 2nd and 3rd), but also absolutely. We saw that the average score for the top rank is more like 7 than 9 or 10.
  3. In other words, there is little difference between the people at the top. We shouldn’t obsess about any perceived differences. These are likely to be noise.
  4. It is better to pick 4 candidates out of 10 (say, in an interview) than to pick 1 out of 10. The latter is likely to be unfair. At best, it is a lottery among several people who are roughly the same. At worst, you might be picking a worse candidate over several better ones.
  5. There should be no “Top Student”, “Top Candidate” or “Top Book” prizes, unless the selection is completely objective (i.e., based on an objective score). For example, an Olympic Gold Medal for the 100m is objective, since it is based on a simple objective score: the time to complete the race. The Oscars, on the other hand, are not objective at all. There should be 10 best actor awards, not just one.
  6. If you have scored ten “candidates” (books, people, movies etc.) with scores from 1 (worst) to 10 (best), the actual scores of the top 4 are likely to be around 7, as opposed to 7, 8, 9 and 10. This applies in any situation where the score is not objective; even (especially??) if you are an expert.

Finally, are there examples of this type of “experiment” in the real world? Do we follow 20/80 rules? In fact, almost every example of choice in the real world follows this pattern.

If anything, the real world is even more extreme than this experiment.

For example, it is common to choose books based on “best seller” lists (i.e., what your friends bought), as opposed to objective expert scores.

If anything, the real world’s 20/80 rule is more like a 1/99 rule, making my conclusions even stronger than they might appear at first.
