How to influence users’ decision using customer reviews

Kishore Sai
redbus India Blog
7 min read · Nov 3, 2021


One of the most important product goals of any e-commerce company is to convert as many users as possible. Product teams are in constant pursuit of optimizing the funnel, removing friction, making the experience seamless and recommending relevant results so that users convert on their app rather than on a competitor's. At redBus, we are no different. Moreover, since redBus is a purely transactional app, it makes even more sense to help users decide faster and complete their transactions. As the product lead for the User Generated Content (UGC) charter, one of the objectives I took up was to leverage UGC to help users decide better.

Like any e-commerce shopper, redBus users do a lot of exploration and comparison. In this process, there is a segment of users whose decision to proceed depends on feedback from other travelers. Feedback is an important component of decision making in any shopping journey: even after users have identified a product that meets their needs and preferences, they use feedback as a final checkpoint before deciding. While in most applications this feedback surfaces as the product's average rating, detailed reviews are also available. As per this survey by BrightLocal, users read an average of 10 reviews before making a purchase.

Before we dive into the analysis of how ratings and reviews affect users' decision making, let me quickly explain how ratings are presented on redBus. The color of the rating block on the search results page reflects the route's average rating, giving users a quick impression of the overall feedback. This is how the colors are categorized:

Green (Average rating greater than 4)

Amber (Average rating between 3 and 4)

Red (Average rating less than 3)

Users can view the detailed ratings and reviews (R&R) by clicking the “Reviews” tab on the tuple.
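The color buckets above can be sketched as a small function. This is only an illustration; how redBus treats ratings of exactly 3.0 or 4.0 is an assumption here, not something the article specifies.

```python
def rating_color(avg_rating: float) -> str:
    """Map a route's average rating to the color shown in search results.

    Thresholds follow the buckets described above; the handling of the
    boundary values 3.0 and 4.0 is an assumption.
    """
    if avg_rating > 4:
        return "green"
    if avg_rating >= 3:
        return "amber"
    return "red"
```

For example, `rating_color(4.5)` returns `"green"` and `rating_color(2.5)` returns `"red"`.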

Note: For all the discussions and analyses that follow, I will restrict the context to the redBus desktop channel as this is where we implemented the solution.

Approximately 10% of searched users view the detailed R&R on the redBus website. On average, 34% of all users who search on our website go on to select a seat (the search-to-seat-selection throughput). But if you split searched users into two segments, those who viewed detailed R&R (10%) and those who didn't (90%), the throughputs are 11% and 36% respectively. The 10% of searched users who viewed detailed R&R have significantly lower throughput than those who didn't even bother to look, and that pulls the overall throughput down by 2 percentage points (from 36% to 34%). This is counter-intuitive: how can users who are better informed through R&R have lower throughput?
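The numbers above reconcile as a simple weighted average of the two segments:

```python
# Blended search-to-seat-selection throughput as a weighted average
# of the two segments (shares and throughputs from the article).
viewed_share, viewed_tp = 0.10, 0.11      # viewed detailed R&R
skipped_share, skipped_tp = 0.90, 0.36    # did not view detailed R&R

overall = viewed_share * viewed_tp + skipped_share * skipped_tp
print(round(overall, 3))  # ≈ 0.335, i.e. the ~34% overall throughput
```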

One possible explanation is that users who view the detailed R&R (10% of searched users) of amber and red buses decide against them and drop off because no green buses are available on the route they are interested in. The inventory distribution also shows that the majority of buses are amber or red.

However, this distribution is the same for the segment that doesn't look at the detailed R&R (90% of searched users), who still see the color coding on the search page, and their throughput is 36% as mentioned earlier. So perhaps it isn't the lack of green buses that makes users drop off, but the reviews they see. The problem with the current implementation is that users see the most recent reviews, which may or may not be in line with the bus's average rating. When they aren't, they confuse users.

For example, this is how the recent reviews look for one of the green buses:

A recent 1-star rating with a detailed description of a bad experience puts the user on the fence, making it difficult to decide. The 5-star review, on the other hand, is so short that a mere “Good” does not help the user decide in any way, so the detailed 1-star review ends up having the greater influence.

A low-rated bus produces a similar kind of experience, as shown below:

This study shows that affect-rich reviews influence users' decision making significantly. Putting these findings together, we should ideally show the “right” reviews at the top to help users decide better and solve the problem above. And that is what we did.

“The reviews seen by the users have a high influence on their decision making process”

We classified the reviews into different buckets based on the sentiment and verbosity of the review.

The parameters evaluated were structural complexity, sentiment and valence. I will not get into the details of these parameters here, as that would require another blog post of its own. For now, a few examples should give enough intuition.

Once the reviews were classified into these categories, they were sorted using the logic below, so that the top reviews are in line with the bus's average rating.
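A minimal sketch of such a sort might look like the following. The `Review` type, the 3.5 cut-off for deciding whether positive or negative reviews should lead, and the tie-breaking by richness bucket are all assumptions for illustration; the article only states that top reviews should agree with the average rating and that reviews within a section are ordered by recency.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Review:
    rating: int    # 1-5 stars
    text: str
    posted: date
    richness: str  # "affect-rich", "moderate" or "baseline"

RICHNESS_ORDER = {"affect-rich": 0, "moderate": 1, "baseline": 2}

def sort_reviews(reviews, avg_rating):
    """Sort so the top reviews agree with the bus's average rating.

    Hypothetical logic: for a highly rated bus, positive reviews lead;
    for a poorly rated one, negative reviews lead. Within each bucket,
    richer reviews come first, then newer ones.
    """
    positive_first = avg_rating >= 3.5  # assumed cut-off, not redBus's
    def key(r):
        agrees = (r.rating >= 4) == positive_first
        return (0 if agrees else 1,
                RICHNESS_ORDER[r.richness],
                -r.posted.toordinal())  # newest first within a bucket
    return sorted(reviews, key=key)
```

For a green bus, this pushes descriptive positive reviews to the top while keeping every review, including the negative ones, visible further down the list.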

Within each section under each category, reviews were sorted by recency. We then set up an A/B experiment to study the results: 33% of the traffic was shown the modified sort order, while the remaining 67% was split into two groups that were shown the original sorting.
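A common way to implement such a split is deterministic hash-based bucketing, sketched below. The hashing scheme and variant names are illustrative assumptions, not redBus's actual implementation.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically split traffic three ways: ~33% modified sort
    and two ~33% control groups shown the original sort.

    Hashing the user id keeps each user in the same variant across
    sessions. Scheme and names are hypothetical.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 3
    return ("modified", "control_a", "control_b")[bucket]
```

Keeping two identical control groups is a useful sanity check: if the experiment setup is sound, both controls should report roughly the same throughput.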

Modified Sort

Original

Notice that the version with the modified sort shows slightly more descriptive reviews than the original, and recent 5-star reviews appear first.

At this point, some of you, like our internal stakeholders, might wonder: by changing the sort order, is redBus manipulating users and hiding bad feedback? Isn't this unfair? Absolutely not! It is only fair to both bus operators and users to showcase the perspective of the majority of passengers who gave feedback, rather than to highlight the outliers; the underlying principle of an average rating is the same. And redBus isn't hiding bad feedback, since users can always view all available reviews. All the modified sort order does is remove confusion. Moreover, any change in a bus's overall performance automatically reflects in its average rating, moving the bus from one category to another and in turn changing the review sort order as per the logic described above. So we collectively reached a consensus that we were on the right track.

The following is the output of the experiment:

The throughput of the two segments with the original version settled at around 11%, while the segment with the modified sort settled at 15%: an increase of 4 percentage points! To go one step further and understand the impact of the modified sort order across bus categories, let's look at the throughput of red, amber and green buses separately.

While the throughput of amber and red buses doesn't change much from the original version, we can see a significant improvement for green buses: 7 percentage points!

Now that we have proven that showing the “right” reviews significantly impacts users' decision making, the next steps toward improving the overall funnel throughput are to:

  1. Improve the visibility of R&R so that more searched users consume the detailed R&R (grow the segment beyond 10%)
  2. Improve the classification algorithms so that reviews are better categorized into affect-rich, moderate and baseline
  3. Add an element of personalization to the sort so that reviews discussing a user's preferences surface at the top
  4. Work with bus operators to move more buses to green by improving the overall travel experience

Kudos to our talented engineering team, who pulled off this seemingly simple yet complex feature by implementing review categorization (through sentiment analysis) and dynamic sorting: Vikas Pandey, Bhairavi Shah, ajith anburaj.
