Real-time context based smart type-ahead suggestions

The journey towards incorporating a short-term context into type-ahead suggestions for an e-commerce platform

Hoosein
Myntra Engineering
Nov 16, 2018


Chapter I: The Motivation

TL;DR We want to provide smart type-ahead suggestions, based on short-term user activity context. We aim to reduce the friction of finding the right product for a shopper at Myntra.

Our contextual type-ahead in progress. Here we glance over the regular suggestions, set Cushion Covers as the context, and then revisit.

By and large, suggestions are personalized using a user’s long-term history of events. But we must also incorporate the recent activities that build up a user’s short-term context. In real time, this can compensate for knowledge acquired through factors outside the system, so we should act on the short-term context while it is still alive and relevant.

Google is a good example to support this claim. Try searching for game of thrones, followed by another partial query like sea… You land on Sean Bean and Season 7 as the top suggestions. However, try searching for Donald and watch the context disappear! We at Myntra want to build this understanding into our type-ahead suggestions. We understand that context differs from business to business and hence requires its own custom tuning and understanding. Also, we must incorporate actions beyond just search queries, factoring in the complete user journey.

Measuring Impact

As an important engineering aspect, we must measure the impact in terms of CTR, click depth and quality of suggestions. We want to improve the user experience, leading to better conversion rates. For Myntra, it also means better discoverability of its products.

Chapter II: A First Cut Model

TL;DR We have an evolving model that incorporates a short-term context that is dynamic and available in real time.

Well, let’s sift through the requirements for the model. First, tinkering further with the Google example above, we can conclude that:

Context grows with every user signal, or else decays over time

Second, surfing through the Myntra-specific user patterns, we observe the sporadic, ephemeral nature of context. We conclude that:

Context dies and sometimes resurrects

Third, at a huge scale, we require a real-time contextual model that is efficient.

The Model complexity should be constant in space & time

We came up with a model based on these findings (more on it later) and set out to engineer the solution.
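To make those three properties concrete, here is a minimal sketch of such a model as an exponential-decay update. The decay rate, weight floor, and size cap are illustrative values, not the ones we use in production:

```python
import math

DECAY_RATE = 0.001      # per-second decay; illustrative, not a production value
MAX_ANNOTATIONS = 50    # hard cap keeps per-user space constant
MIN_WEIGHT = 0.01       # annotations below this floor "die"

def decayed_weight(weight, last_ts, now_ts):
    """Context decays exponentially with elapsed time (in seconds)."""
    return weight * math.exp(-DECAY_RATE * (now_ts - last_ts))

def update_context(context, event_annotations, now_ts):
    """Grow context on every signal, decay the rest, prune the dead."""
    weights = {
        key: decayed_weight(w, context["lastTimestamp"], now_ts)
        for key, w in context["weights"].items()
    }
    for key, w in event_annotations.items():
        # context grows with every user signal; a "dead" annotation
        # can resurrect here on a fresh signal
        weights[key] = min(1.0, weights.get(key, 0.0) + w)
    # prune dead annotations and enforce the size cap (constant space & time)
    alive = {k: w for k, w in weights.items() if w >= MIN_WEIGHT}
    top = dict(sorted(alive.items(), key=lambda kv: kv[1], reverse=True)[:MAX_ANNOTATIONS])
    return {"weights": top, "lastTimestamp": now_ts}
```

Every signal grows its annotation’s weight, everything else decays with elapsed time, annotations falling below the floor die, and the cap keeps the per-user state bounded.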

Chapter III: The Engineering

TL;DR <Not this one>

Let’s now translate these requirements into a real-time system.

Main Components and Subsequent Flow

Note that we are operating at a scale of thousands of requests per second. Let us glance at the high-level architecture that takes the functional and performance requirements into account.

Our final architecture. We will discuss the details as we move forward. But feel free to speculate from here.

I. Events & Pipeline

An Event can be, for example, what the user searched for, its underlying semantic understanding, or the user’s subsequent interactions with the search results. We need to convert these into a generic payload containing a set of annotations. Annotations are just an array of key-value pairs with associated weights. An annotation’s weight, a number between 0 and 1, determines its importance within the event.

An Event

{
  "userid": "john.doe@gmail.com",
  "annotations": [
    {
      "field": "category",
      "value": "shoes",
      "weight": 1
    },
    {
      "field": "brand",
      "value": "nike",
      "weight": 1
    }
  ],
  "timestamp": 1539863345763,
  "source": "search"
}

So a search like nike shoes converts to the annotations brand=nike, category=shoes, while a filmography page can translate into annotations like celebrity=Tom Hardy, title=The Revenant. If the annotations are equally important, they take the same weight, such as 1.
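As a rough illustration of this conversion, here is a toy annotator; the tiny lexicons are made up and stand in for our real query-understanding layer:

```python
import time

# Hypothetical lexicons standing in for the real query-understanding layer.
KNOWN_BRANDS = {"nike", "gucci", "puma"}
KNOWN_CATEGORIES = {"shoes", "jeans", "kurtas"}

def query_to_event(userid, query, source="search"):
    """Map each recognised token to a weighted annotation; unknown tokens are dropped."""
    annotations = []
    for token in query.lower().split():
        if token in KNOWN_BRANDS:
            annotations.append({"field": "brand", "value": token, "weight": 1})
        elif token in KNOWN_CATEGORIES:
            annotations.append({"field": "category", "value": token, "weight": 1})
    return {
        "userid": userid,
        "annotations": annotations,
        "timestamp": int(time.time() * 1000),  # epoch millis, as in the payload above
        "source": source,
    }
```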

Events generated from multiple sources are converted to an Event payload and pushed to the Event store.

Our stack: Kafka is a scalable choice to handle event streams. We use standard Kafka functionalities like message batching and intelligent partitioning for efficient processing.
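For illustration, keying events by userid is one way to get such intelligent partitioning: a user’s events hash to the same partition and therefore stay ordered. A sketch, with a made-up partition count (with kafka-python, passing key=userid to the producer achieves the same effect via its default partitioner):

```python
import hashlib

NUM_PARTITIONS = 12  # illustrative; real topic sizing isn't disclosed here

def partition_for(userid, num_partitions=NUM_PARTITIONS):
    """Hash the userid so one user's event stream lands on a single partition,
    preserving per-user ordering for downstream context processing."""
    digest = hashlib.md5(userid.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions
```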

II. Processing Events with the User-context Model

Next, once the pipeline is up, we need to process this payload in near real-time. We need a processing engine, which can help us smartly batch the events, process them in parallel and pivot into a User-level Context. Let’s now come up with a contract for the Context of a user.

User Context

{
  "userid": "john.doe@gmail.com",
  "annotations": [
    {
      "field": "category",
      "value": "shoes",
      "weight": 0.7
    },
    {
      "field": "brand",
      "value": "nike",
      "weight": 0.7
    },
    {
      "field": "brand",
      "value": "gucci",
      "weight": 0.02
    }
  ],
  "lastTimestamp": 1539863345763
}

The annotations in the definition of both Event and Context, are representative of the feature space we use in the Model. Our converters transform the events to vectors needed by the Model. As newer events arrive, older, less relevant context annotations age out.
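The event-to-vector conversion can be sketched as a projection onto a fixed feature space; the tiny feature list here is, of course, hypothetical:

```python
# A fixed, hypothetical feature space; the real one is far larger.
FEATURE_SPACE = ["brand=nike", "brand=gucci", "category=shoes", "category=jeans"]
INDEX = {f: i for i, f in enumerate(FEATURE_SPACE)}

def event_to_vector(event):
    """Project an Event's annotations onto the model's feature space."""
    vec = [0.0] * len(FEATURE_SPACE)
    for a in event["annotations"]:
        key = f'{a["field"]}={a["value"]}'
        if key in INDEX:               # features outside the space are ignored
            vec[INDEX[key]] = float(a["weight"])
    return vec
```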

Our stack: We use Apache Storm for event processing. A topology with a Kafka Spout, a Batching Bolt — using tick.tuple with a custom batch size to emit batches — and a Context Bolt invokes the Model and calculates User Context.

Our Context Data Store is a key-value store (Redis) with userid as the key.
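A sketch of that store, assuming a redis-py-style client; the TTL value is illustrative, and the in-memory stand-in exists only so the snippet runs without a Redis server:

```python
import json

CONTEXT_TTL_SECONDS = 1800  # illustrative: an idle user's context expires on its own

def save_context(client, context):
    # SETEX writes the value with a TTL, so a stale context simply "dies";
    # any redis-py-compatible client (setex/get) works here.
    client.setex(f'context:{context["userid"]}', CONTEXT_TTL_SECONDS,
                 json.dumps(context))

def load_context(client, userid):
    raw = client.get(f"context:{userid}")
    return json.loads(raw) if raw else None

class FakeRedis:
    """In-memory stand-in (TTL ignored) used here instead of a real Redis."""
    def __init__(self):
        self.store = {}
    def setex(self, key, ttl, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)
```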

III. Integrating context with existing type-ahead suggestions

A Logic to integrate the context into Type-Ahead Suggestions

NOTE: Diversification makes sure contextual suggestions don’t dominate. It gives the user some room to deviate from the context.

Our stack: At Myntra we use Solr to serve type-ahead suggestions from a corpus of top query terms. Solr provides a boostQuery functionality that adds a boost score (annotation[weight]) to a type-ahead suggestion if it is present in the user context annotations (annotation[value]). The boost score gets added to the existing relevance score of a document.
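As a rough sketch, such a boost query can be assembled from the context annotations using Lucene field:value^boost syntax, assuming the suggestion corpus indexes fields named after the annotation fields:

```python
def context_boost_query(context):
    """Turn context annotations into a Solr bq parameter: field:value^weight."""
    clauses = []
    for a in context["annotations"]:
        value = a["value"]
        if " " in value:                   # multi-word values need phrase quoting
            value = f'"{value}"'
        clauses.append(f'{a["field"]}:{value}^{a["weight"]}')
    return " ".join(clauses)
```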

Second, to diversify, we fetch both contextual and regular type-ahead suggestions and blend them into predefined slots.
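A minimal sketch of such slot-based blending, with made-up slot positions and list length:

```python
def blend(contextual, regular, slots=(0, 2), total=6):
    """Place contextual suggestions in predefined slots, then fill the
    remaining slots with regular suggestions, skipping duplicates."""
    result = [None] * total
    ctx = iter(contextual)
    for i in slots:
        result[i] = next(ctx, None)
    # lazily filter regular suggestions already placed from context
    reg = (s for s in regular if s not in result)
    for i in range(total):
        if result[i] is None:
            result[i] = next(reg, None)
    return [s for s in result if s is not None]
```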

Chapter IV: Where we are…

TL;DR It’s a good start. We have set up the pipeline that allows us to experiment with different algorithms. First version has shown an improved CTR.

Architectural Benefits

  • The Model plugs in and out of the system; helps in rapid experimentation.
  • The Event(s) contract allows better configurability across event sources.
  • The Context contract allows reusability across use cases. For example, we can now recommend similar products based on context.
  • We can also ship the model with the app itself, though that has its own challenges.

What’s next?

We rolled this out as an A/B test and found an increase of 3% in CTR for contextual type-ahead users. This has encouraged us to roll it out to all the users. Do give it a shot!

We are studying the user behaviour and the business metrics to understand the impact. We are also working to onboard more events that contribute to the context. We are looking to enhance the model with concepts such as (L)STMs. On the type-ahead front, we are looking to have diversity in suggestions, leveraging concepts such as MMR.

We shall keep you posted on the progress. Stay tuned! Thanks for the read. Comments welcome!
