Rocket Speed on 2G!

Published in redbus India Blog · 4 min read · Jul 24, 2017

As with any other e-commerce company, mobile is a big growth channel for us at redBus. On the app’s home screen, when a customer searches for bus services from a source city to a destination city on a given date, we present a list of search results (henceforth SRP, short for Search Results Page). Needless to say, the SRP and its backing APIs are critical to the customer’s decision to book her trip with redBus. We recently optimised this flow, and how!

Fig 1 : SRP on Android app

The Problem

Let’s consider a popular search, Bangalore to Hyderabad for a Friday: this SRP API request returns 111 bus services in a response of 104.4 KB (gzipped, of course!). The response contains everything needed to display the search results, everything required to filter them against the user’s criteria, everything to group the results of an operator with many services (like APSRTC in the above image), and so forth. This is not too heavy on good wifi or a 4G connection, and tolerable on 3G, but it is painfully slow on 2G.
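The 104.4 KB figure is the size after gzip, i.e. the bytes that actually travel over the wire. A quick way to see what a JSON payload compresses to is shown below; this is a generic sketch with made-up data, not redBus’s actual payload or tooling.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Utility to measure the gzipped size of a payload — the number that
// matters when the server compresses responses on the wire.
public class GzipSize {
    public static int gzippedBytes(String payload) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            return buf.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
    }

    public static void main(String[] args) {
        // Repetitive JSON compresses very well, which is why a bulky
        // search response can still be "only" ~100 KB once gzipped.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 111; i++) {
            sb.append("{\"service\":").append(i).append(",\"operator\":\"APSRTC\"},");
        }
        sb.setCharAt(sb.length() - 1, ']');
        String json = sb.toString();
        System.out.println("raw bytes:     " + json.length());
        System.out.println("gzipped bytes: " + GzipSize.gzippedBytes(json));
    }
}
```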

And then, we achieved this:

Fig 2 : Comparison between response times of Search API on different network conditions

Notice that the response time dropped by 3.5x on wifi and 4G, 4x on 3G, and a whopping 5.3x on 2G. This was possible because, among other things, we cut the SRP response from 104.4 KB to a mere 2.3 KB on the first load of the SRP.

Note: response time here is the time from when the user taps “Search” until she sees the first interactive result on the SRP.

How we achieved it:

Meta-data to our rescue

We realised that our SRP API response was filled with data that is not strictly required to show the results, but rather caters to other needs such as sorting, filtering, grouping and sectioning them. So we took the following steps:

  • Moved sort and filter functionality to the server. After all, fewer than 25% of our users filter the results! In addition, we introduced a lightweight, cached Filterable API that responds with the data required for the Filters screen; for the Bangalore-to-Hyderabad route above, its response size is 9.2 KB. It doesn’t hold up the SRP and can be fetched independently. One drawback of this approach is that we lost the ability to show instant feedback on the number of results as the user taps different filters.
  • Introduced a meta-data object in the SRP API response. It contains all the information required to construct the SRP: the number of results overall, in each group and section, and so on. Meta-data is part of only the first request made on the SRP; yes, we now paginate the response.
  • Cut all unused properties from the SRP API response, making it light on size.
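The scheme above can be sketched as follows. This is a minimal illustration with hypothetical names (`Response`, `page`, the meta fields), not redBus’s actual API: only the first page carries the meta-data block needed to construct the screen; later pages carry results alone.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a paginated SRP response: only page 0 carries the meta
// block needed to construct the screen; later pages are results only.
// All names here are hypothetical, not redBus's actual API.
public class SrpPage {
    static class Response {
        final Map<String, Object> meta;   // null on every page after the first
        final List<String> results;
        Response(Map<String, Object> meta, List<String> results) {
            this.meta = meta;
            this.results = results;
        }
    }

    static final int PAGE_SIZE = 10;

    static Response page(List<String> allServices, int pageNo) {
        int from = pageNo * PAGE_SIZE;
        int to = Math.min(from + PAGE_SIZE, allServices.size());
        List<String> slice = new ArrayList<>(allServices.subList(from, to));
        Map<String, Object> meta = pageNo == 0
                ? Map.of("totalResults", allServices.size(),
                         "pageSize", PAGE_SIZE)
                : null;                   // meta is sent only once
        return new Response(meta, slice);
    }

    public static void main(String[] args) {
        List<String> services = new ArrayList<>();
        for (int i = 0; i < 111; i++) services.add("service-" + i);

        Response first = page(services, 0);
        Response later = page(services, 3);
        System.out.println("page 0 meta:    " + first.meta);
        System.out.println("page 0 results: " + first.results.size()); // 10
        System.out.println("page 3 meta:    " + later.meta);           // null
    }
}
```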

Lazy load the results

We realised that all the user really wants is for the first screenful of results to appear as early as possible. So we paginated the result set and included the meta-data along with the first set of results in the first SRP API call. With a page size of 10, the response size came down to 2.3 KB for the first request and 1.9 KB for each subsequent one. As soon as the scroll reaches the 7th result, we go ahead and request the next page, further cutting the user’s wait.
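The “prefetch at the 7th result” rule boils down to a small threshold check. The sketch below is a plain-Java stand-in for what an Android scroll listener would do; the names and the exact trigger formula are illustrative assumptions, not the production code.

```java
// Sketch of the client-side prefetch rule: with a page size of 10,
// request the next page once the user scrolls to the 7th item of the
// last loaded page, so new results arrive before the list runs out.
// (Plain-Java stand-in for an Android RecyclerView scroll listener.)
public class Prefetcher {
    static final int PAGE_SIZE = 10;
    static final int PREFETCH_AT = 7;   // the 7th result of a page

    /** True if seeing this (0-based) item position should trigger loading the next page. */
    static boolean shouldFetchNext(int lastVisiblePosition, int loadedPages) {
        int loadedCount = loadedPages * PAGE_SIZE;
        // Trigger when the user reaches the 7th item of the last loaded page.
        return lastVisiblePosition >= loadedCount - PAGE_SIZE + PREFETCH_AT - 1;
    }

    public static void main(String[] args) {
        // One page loaded (items 0..9): index 6 is the 7th result.
        System.out.println(Prefetcher.shouldFetchNext(5, 1));  // false
        System.out.println(Prefetcher.shouldFetchNext(6, 1));  // true
        // Two pages loaded (items 0..19): triggers at index 16.
        System.out.println(Prefetcher.shouldFetchNext(16, 2)); // true
    }
}
```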

Any filter applied, or a tap on a group tuple, results in a fresh API call whose results are, again, lazy loaded.

Plain Vanilla, please

More often than not, we reach for the library that seems to do more than we could ever build ourselves. Instead, we cut the slack by moving back to the plain Android RecyclerView, which works just fine for us. We even convinced our designers to keep the SRP clean and transition to a new list only when required.

Fig 3 : SRP with Grouping

Numbers say it all

We ran an A/B experiment to measure the impact of the new implementation (V2) against the old one (V1), using a simple metric: throughput from the SRP screen to the next screen. The overall results are encouraging: we achieved a 4.8 percentage-point increase (a 9% relative increase) in throughput!
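Quoting both an absolute gain (+4.8 percentage points) and a relative one (+9%) pins down the baseline; the check below is an inference from the quoted figures, not a number reported in the post.

```java
// If a +4.8 percentage-point gain equals a +9% relative increase,
// the baseline throughput must be 4.8 / 0.09 ≈ 53.3%.
// This is derived from the quoted figures, not stated in the post.
public class ThroughputMath {
    public static void main(String[] args) {
        double points = 4.8;     // absolute gain, in percentage points
        double relative = 0.09;  // the same gain, as a relative increase
        double baseline = points / relative;  // ≈ 53.3%
        double after = baseline + points;     // ≈ 58.1%
        System.out.printf("baseline ≈ %.1f%%, after ≈ %.1f%%%n", baseline, after);
    }
}
```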
