The Loop of Loops

Amogh Mahapatra
3 min read · May 7, 2023


We discussed the Bayesian loop last time; today we shall dive into one of its superpowers in action.

Did you just buy “The Old Man and The Sea”?

Let’s take a trip down memory lane to the mid-90s, back when shopping websites like Amazon were just starting out. Their recommendation systems were simple: if you purchased a Hemingway book, for instance, the system would recommend “Animal Farm,” because “The Old Man and The Sea” and “Animal Farm” were bought together over 90% of the time. This technique is called item-item collaborative filtering and is loosely based on item-to-item co-occurrence.
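A minimal sketch of co-occurrence-based item-item filtering, using made-up purchase baskets (the titles and counts are illustrative, not real Amazon data):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase baskets; each inner list is one user's order.
baskets = [
    ["The Old Man and The Sea", "Animal Farm"],
    ["The Old Man and The Sea", "Animal Farm", "Dune"],
    ["The Old Man and The Sea", "Animal Farm"],
    ["Dune", "Neuromancer"],
]

# Count how often each pair of items is bought together.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1

def recommend(item):
    """Return items most often co-purchased with `item`, best first."""
    scores = defaultdict(int)
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("The Old Man and The Sea"))  # "Animal Farm" ranks first
```

Real systems normalize these counts (e.g., by item popularity) so that bestsellers don’t dominate every list, but the core idea is just this pair counting.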

As the industry progressed and more data was collected on items, users, and behavior patterns, more sophisticated methods emerged. At their core, though, these methods still relied on the basic concept of capturing item-item, user-item, user-user, user-theme, theme-theme, and similar kinds of similarity. Many of the more sophisticated methods capture third- and fourth-order similarities, such as user-to-user similarity in the latent space of a few items. Example: capturing the similarity of two users in the space of fantasy novellas.
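One simple way to realize that fantasy-novella example is cosine similarity over each user’s ratings restricted to that genre. The titles and scores below are made up for illustration:

```python
import math

# Hypothetical ratings restricted to fantasy novellas: this restriction
# is the "space of a few items" in which we compare the two users.
fantasy_ratings = {
    "user_a": {"novella_1": 5, "novella_2": 4, "novella_3": 1},
    "user_b": {"novella_1": 4, "novella_2": 5, "novella_3": 2},
}

def cosine_similarity(u, v):
    """Cosine of the angle between two users' rating vectors."""
    items = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in items)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in items))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in items))
    return dot / (norm_u * norm_v)

sim = cosine_similarity(fantasy_ratings["user_a"], fantasy_ratings["user_b"])
# Close to 1.0: these two users have very similar fantasy-novella taste.
```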

The first loop

There are numerous methods for capturing these similarities, but we will now explore one of the most widely used techniques in practice, and you’ll soon see why.

Let’s start by writing a basic loop for a book recommendation system. Suppose we want to measure the likability of books displayed to adults in Los Angeles. At time t=0, the initial belief can be based on a highly complex machine learning algorithm, or simply an average. However, the beauty of this loop is that even with a moderate-sized user base, it will typically converge to the true value after only a few thousand iterations, assuming no anomalous events or days.

belief(0) = books_purchased_by_adults_in_LA / books_shown_to_adults_in_LA
while the conversion rate keeps changing:
    belief(t+1) = belief(t) + learning_rate * (conversion_rate(t) - belief(t))

Now, let’s take things up a notch

Imagine designing thousands of these loops, all updated relentlessly in real-time. Here are a few examples:

  • The purchase rate of classic books by women in New York
  • The sharing rate of cookbooks by men on weekends
  • The rate of adding sweet treats to the cart by men with a high spending rate

As you can see, these loops capture collaborative estimates at various semantic levels. The only limit here is the creativity of your engineers.
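In code, maintaining thousands of these loops can be as simple as a table of running counters keyed by segment and action. The segment names below are hypothetical stand-ins for the examples above:

```python
from collections import defaultdict

# One running estimate per (segment, action) pair,
# stored as [successes, trials].
counters = defaultdict(lambda: [0, 0])

def record(segment, action, success):
    """Update the loop for one segment/action with a single observation."""
    counts = counters[(segment, action)]
    counts[0] += int(success)
    counts[1] += 1

def estimate(segment, action):
    """Current collaborative estimate for this segment/action."""
    successes, trials = counters[(segment, action)]
    return successes / trials if trials else 0.0

record("women_in_ny_classics", "purchase", True)
record("women_in_ny_classics", "purchase", False)
record("men_weekend_cookbooks", "share", True)
```

At production scale these counters typically live in a real-time store and are decayed over time so stale behavior fades out, but the bookkeeping is the same.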

The mega loop

These estimates are typically plugged into one large learning algorithm, to predict something very simple — would you read/buy this book? Industry-scale models are often a combination of thousands of these basic models, making them incredibly powerful and dynamic.
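A sketch of what that plugging-in might look like: the loop estimates become features of a simple logistic model predicting purchase. The feature names and weights below are invented for illustration (in practice the weights would be learned):

```python
import math

# Hypothetical loop estimates for one (user, book) pair, each produced
# by a running loop like the ones above.
features = {
    "segment_purchase_rate": 0.12,
    "theme_share_rate": 0.30,
    "author_repeat_rate": 0.45,
}

# Hand-picked weights standing in for a trained model.
weights = {
    "segment_purchase_rate": 2.0,
    "theme_share_rate": 1.0,
    "author_repeat_rate": 1.5,
    "bias": -1.0,
}

def predict_purchase_probability(feats, w):
    """Logistic model over loop estimates: P(user buys this book)."""
    z = w["bias"] + sum(w[name] * value for name, value in feats.items())
    return 1.0 / (1.0 + math.exp(-z))

p = predict_purchase_probability(features, weights)
```

Industry models swap the logistic regression for far larger learners, but the pattern of "many small running estimates feeding one big predictor" is the same.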

Habits eat motivation for lunch

Habits. Good habits. Bad habits. Healthy habits and unhealthy habits. Some you have worked hard to build, and some you want to get rid of. There are plenty of excellent books to convince you of the efficacy and wisdom of forming good habits. Habits are, analogically speaking, naive feedback loops. The good ones, the ones you’re proud of, lead you to a state of relaxation and equilibrium, just like the well-designed loops we discussed earlier. So the next time you’re struggling with a habit, maybe consider questioning the state it leads you to, rather than the habit itself.

