How We Tripled Our Notification CTR With Personalisation

Dhruv Mathur
From LBB
Jan 21, 2019

At LBB, we reach out to millions of users every month, and help them find businesses, brands and products across 300+ categories & 22,000+ localities. To re-engage such a diverse consumer base on our mobile apps, it’s important for us to develop a highly targeted and scalable notification platform so that we can give timely, personalised information.

P.S. There’s a lot of great reading out there by some of the best in the biz, but very little from earlier-stage companies doing this with small teams, so we thought we’d share our learnings with you.

Why We Built This

Why did we build our own platform v/s relying on the many external tools out there? We started out sending notifications manually, through handpicked content & copy specific to certain customer segments and cities. Though this worked in the early days of the product, our CTR did not hold up as our audiences & businesses grew.

After using some of the best notification platforms (we still use CleverTap & Firebase extensively), we ran into two key challenges: 1) the way customer segments are defined, usually based on specific actions taken or preferences selected by a user v/s a broader consumption pattern across different use cases; and 2) these systems typically don’t understand our content, so they couldn’t pick which notification to send to which user.

To solve these problems and scale personalised notifications, we set out to build a machine-learning-powered platform that identified what to send to whom, and when.

“Defining” Personalisation

With a multi-category/ use-case platform like LBB, it was tricky to “define” personalisation based on user actions or customer segments alone. And given our mission is to introduce you to a new business, it was also not as straightforward as reinforcing what you had already consumed or done, as many social/ content platforms do.

Mapping out use cases early in the process to figure out different parameters that applied to each

We started by outlining how people made decisions: what parameters they cared about when looking for businesses on LBB across different use cases, e.g. proximity, time of day/ week/ year, budget, and so on. Then, we looked at our search data to figure out correlations between these parameters, and the use cases for which consumers would “bend their own rules”; e.g. a person might shop at a boutique far from their location if there was great value, or if it was for a specific occasion.
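To make this concrete, here’s a minimal sketch of what scoring a candidate notification against parameters like these could look like. The parameter names, weights, and cutoffs below are illustrative assumptions, not our actual model:

```python
# Sketch: score one candidate item for one user as a weighted sum of
# per-parameter match scores in [0, 1]. All numbers are made up.

def relevance_score(user, item, weights):
    scores = {
        # Closer places score higher; beyond ~10 km the score bottoms out.
        "proximity": max(0.0, 1 - item["distance_km"] / 10),
        # Within budget scores 1; otherwise the score decays with the overshoot.
        "budget": 1.0 if item["price"] <= user["budget"] else user["budget"] / item["price"],
        # Crude match between the item's typical time and the user's current time.
        "time_of_day": 1.0 if item["best_time"] == user["current_time"] else 0.3,
    }
    return sum(weights[p] * scores[p] for p in weights)

user = {"budget": 1500, "current_time": "evening"}
item = {"distance_km": 4, "price": 1200, "best_time": "evening"}
weights = {"proximity": 0.5, "budget": 0.3, "time_of_day": 0.2}
print(round(relevance_score(user, item, weights), 2))  # → 0.8
```

The “bend their own rules” observation would show up here as context-dependent weights, e.g. down-weighting proximity when the occasion parameter is strong.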

This gave us a great framework to start figuring out what to send to whom, and the variables that needed to improve over time through machine learning.

Another important (unforeseen) outcome was that this framework created a common lens for different teams (engineering, product, marketing, content) to understand personalisation.

Personality & Scalability

LBB as a product is supposed to feel like a cool friend, the one who knows about what’s going on and gives you ideas for what to do (and FOMO!). It’s tricky to marry this feeling with a scalable solution at times — but not impossible.

The key was to identify the copy & content that felt personal and send it for the right context / use case.

For the copy, we looked at historical engagement data to figure out the tonality, emojis, and words that made people click, and used those in our notifications. We also had the benefit of a high-quality community of content creators on the platform, whose natural language in posts gave us a large volume of copy to test across categories.

For the content, we used the parameters we had identified, and looked at data to identify when people wanted a set of options (e.g. something close by) v/s something specific (e.g. a place trending in their neighbourhood).

Both of the above formed the selection criteria for picking out what to send and when to send it.
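As a rough illustration of that selection step, you could imagine scoring every (content, copy) pair and picking the best one. Everything here, the names, the scores, and the multiplicative combination, is a hypothetical sketch rather than our production logic:

```python
# Sketch: pick the best (content, copy) pair by combining a content-relevance
# score with a copy-engagement score. Candidates and scores are made up.

def pick_notification(candidates, copies):
    """candidates: [(content_id, score)], copies: [(copy_id, score)]."""
    return max(
        ((c_id, t_id, c_score * t_score)
         for c_id, c_score in candidates
         for t_id, t_score in copies),
        key=lambda triple: triple[2],
    )

candidates = [("trending_cafe", 0.9), ("nearby_boutiques", 0.7)]
copies = [("emoji_fomo", 0.8), ("plain_info", 0.5)]
content_id, copy_id, score = pick_notification(candidates, copies)
print(content_id, copy_id)  # → trending_cafe emoji_fomo
```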

Success Criteria & Outcomes

With the parameters set, our engineering teams set out to build an MVP that would demonstrate outcomes and set a foundation to improve over time. To identify the “data MVP”, we ended up following a lot of the advice in this post.

The main success criterion for our efforts (and for our machine learning model) was of course CTR on notifications. We identified another important criterion later on: the reach of personalised notifications (see Things To Improve). CTR was fed back into our learning model to improve its understanding of different variables for different users & contexts, and to improve the content selection and filtering criteria over time.
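A toy version of that feedback loop might nudge parameter weights after each click (or non-click). This is a crude online update for illustration only; the actual model behind our platform isn’t specified here:

```python
# Sketch: nudge parameter weights toward parameters that were strong in
# clicked notifications, then renormalise. All values are illustrative.

def update_weights(weights, param_scores, clicked, lr=0.05):
    target = 1.0 if clicked else 0.0
    predicted = sum(weights[p] * param_scores[p] for p in weights)
    error = target - predicted
    new_weights = {p: weights[p] + lr * error * param_scores[p] for p in weights}
    # Renormalise so weights stay comparable across updates.
    total = sum(new_weights.values())
    return {p: w / total for p, w in new_weights.items()}

weights = {"proximity": 0.5, "budget": 0.3, "time_of_day": 0.2}
weights = update_weights(
    weights, {"proximity": 0.9, "budget": 1.0, "time_of_day": 0.2}, clicked=True
)
print(round(sum(weights.values()), 2))  # weights stay normalised → 1.0
```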

In the first 6 weeks of implementing the system, we were able to 3x–5x our CTR when compared head to head with handpicked/ human sent notifications. Today, 70% of the notifications sent on LBB are powered by our platform, and the best part is — CTR has stayed at the same levels even as our audience has grown by 50%.

Things To Improve

Cold Start — Our personalisation only starts getting effective after a certain level of engagement from users, and hence limits the reach and effectiveness of notifications. We are looking to solve this problem through more fine tuned on-boarding & activation, as well as leveraging collaborative filtering to match new users with existing ones.
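The collaborative-filtering idea can be sketched like this: match a new user’s onboarding picks against existing users’ category engagement, and borrow from the closest match. The data and the similarity choice (cosine) are illustrative assumptions:

```python
# Sketch: find the existing user most similar to a new user's onboarding
# category picks, using cosine similarity. All profiles are made up.
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

existing_users = {
    "u1": {"food": 5, "shopping": 1, "travel": 2},
    "u2": {"food": 1, "shopping": 5, "travel": 1},
}
new_user = {"food": 4, "shopping": 1}  # from onboarding picks

best_match = max(existing_users, key=lambda u: cosine(new_user, existing_users[u]))
print(best_match)  # → u1
```

The new user would then inherit the matched user’s engagement profile until they build up enough history of their own.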

Gender Relevance — One of the variables that has been tricky to account for is our users’ gender. Although there are differences in what men and women prefer to engage with, it’s not at all binary. Understanding this better is important for us to improve relevance overall.

A/B Testing — We’ve been making changes and improvements to our model over time v/s trying multiple models at the same time. To shorten improvement cycles, we’re trying to test different models in the same period of time.
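One common way to run multiple models in the same period is to deterministically bucket users by hashing their ID, so each user consistently sees one variant. A minimal sketch (the variant names are made up):

```python
# Sketch: stable A/B bucketing by hashing the user id, so assignment
# needs no stored state and never flips between sends.
import hashlib

VARIANTS = ["model_a", "model_b", "control"]

def assign_variant(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same user always lands in the same bucket.
print(assign_variant("user-123") == assign_variant("user-123"))  # → True
```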

Your Feedback

We’d love your feedback on our approach — leave a response below. If you like working on problems like these, consider joining our team: drop us an email at careers@lbb.in
