How to get more and higher ratings for your products? Just ask.

Yorgos Askalidis
3 min read · Feb 22, 2016

We’ve all received emails from retailers (or establishments) asking us to submit a review for a product (or service) we recently purchased. eBay, Airbnb, OpenTable, Amazon sellers. You name it.

Would reviews change without email prompts?

And, what’s the effect of these ‘email reviews’ on future buyers?

The answers are “Yes”, and “a lot”.

Through my involvement with the Spiegel Research Center and our collaboration with PowerReviews, I gained access to a unique dataset: the entire history of submitted reviews for four major online retailers. My collaborator, Prof. Ed Malthouse, and I uncovered insights on many questions, but in this post I want to discuss one in particular: the effect of email promptings on the reviews a platform receives.

For some of the retailers in our dataset, email promptings are a relatively recent phenomenon. For many years after their launch, these retailers relied only on reviews submitted by self-motivated users directly on the site, exactly how Amazon and Yelp operate even today. So we approached the introduction of email prompting as a ‘natural experiment’ and sought to understand its effects. There are two types of effects we think are important, and we briefly discuss both below.

What was the effect of the email promptings on the review platform overall?

As one would probably expect, we found a substantial and statistically significant increase in the volume of submitted reviews.

The promptings did indeed work, increasing both the volume of reviews and the ratings.

But of greater impact: products not only got more reviews, they also got higher ratings. The increase in ratings can be anywhere between half a star and a full star on a 5-star scale.

The explanation for the rating increase is something marketers have known for quite a while: dissatisfied customers are more likely to be vocal about their dissatisfaction, and hence more likely to submit a review without a prompting. Without the email promptings, there is no force to balance these negative experiences out.
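This selection effect is easy to see in a toy simulation. The sketch below is purely illustrative — the satisfaction distribution and the review probabilities are made-up numbers, not figures from our study. If dissatisfied buyers review at a much higher rate, the unprompted average skews low; prompting a uniform slice of all buyers pulls the average back toward the true population mean.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 buyers with true satisfaction 1-5.
# The weights below are invented for illustration only.
ratings = random.choices([1, 2, 3, 4, 5], weights=[5, 5, 10, 30, 50], k=100_000)

# Self-motivated reviewing: assume dissatisfied buyers (1-2 stars) are far
# more likely to write a review than satisfied ones.
def reviews_unprompted(r):
    return random.random() < (0.30 if r <= 2 else 0.02)

unprompted = [r for r in ratings if reviews_unprompted(r)]

# Email prompting: assume it reaches all buyers uniformly and converts 5%.
prompted = [r for r in ratings if random.random() < 0.05]

def avg(xs):
    return sum(xs) / len(xs)

print(f"unprompted: n={len(unprompted)}, mean={avg(unprompted):.2f}")
print(f"prompted:   n={len(prompted)}, mean={avg(prompted):.2f}")
```

Under these made-up parameters the prompted sample's mean rating lands well above the unprompted one, mirroring the half-to-full-star gap we observed, even though both samples are drawn from the same underlying customer population.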

What was the effect of the email promptings on the existing population of reviewers and their submitted reviews?

Did the email promptings ‘cannibalize’ self-motivated reviews? That is, did they just redirect users who would have submitted a review anyway? And did the new, more positive ratings push the existing reviewing population to lower their ratings in reaction?

To address these questions, we used a control set of reviews to estimate what the existing reviews should look like had the email promptings not been introduced.
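One simple way to frame such a check — a sketch under my own assumptions, not the actual method from our paper — is a difference-in-differences-style comparison: track how self-motivated review volume changes around the prompt launch, and compare that change against a control series where no prompts were introduced. All numbers below are hypothetical.

```python
# Hedged sketch: did self-motivated review volume drop when prompts launched?
# Compare the before/after change for the treated retailer against a control.
# The monthly counts and the ratio test are illustrative, not study data.

def volume_change(before, after):
    """Ratio of average monthly review volume after vs. before a cutoff."""
    return (sum(after) / len(after)) / (sum(before) / len(before))

# Monthly counts of self-motivated reviews (hypothetical).
treated_before = [120, 130, 125, 135]   # retailer that later launched prompts
treated_after  = [128, 132, 126, 131]   # same retailer, prompts now live
control_before = [200, 210, 205, 195]   # comparable series, no prompts either period
control_after  = [204, 208, 199, 207]

treated_ratio = volume_change(treated_before, treated_after)
control_ratio = volume_change(control_before, control_after)

# If prompts cannibalized self-motivated reviews, the treated ratio would
# fall well below the control ratio. Here both hover around 1.0.
print(f"treated: {treated_ratio:.3f}, control: {control_ratio:.3f}")
```

In this toy setup the two ratios are nearly identical, which is the pattern consistent with our finding of no cannibalization.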

We found no evidence of any effect of the email prompting on the existing reviewers or their submitted reviews.

More specifically, there was no decrease in the volume of self-motivated reviews or any change in the submitted star-ratings.

What does it mean for marketers?

By prompting your customers to write a review, you incentivize an entirely new segment of customers to submit one, without affecting the segment that would have submitted a review anyway. Hence, the reviews overall become more representative. Since the emails are sent only to verified customers, all the resulting submissions come from verified buyers. Hence, the reviews overall become more credible. Finally, since the ratings submitted in response to an email prompting are consistently higher, the reviews overall become more positive. Really, a WIN-WIN-WIN.

So should a platform prompt and prompt again?

More research is needed here. It’s not clear whether insisting beyond one email prompting is a good idea: while we expect that an even larger segment of the population would submit a review, we need a better understanding of the effects of possibly annoying customers. It’s also not clear whether monetary incentives would drive the same quantity and quality of reviews we studied.


The quality and quantity of reviews impact sales, and our work suggests a simple way to get both more reviews and higher ratings. So, marketers… take note! You can improve your business results by running your own experiments to measure the impact on your business.

Yorgos Askalidis is a PhD candidate at Northwestern University graduating this summer. He plans to use the insights from his research on user reviews to nudge his thesis committee into giving him higher ratings.




Data Scientist at Instagram NYC. Previously at Spotify. Ask me about data, soccer, or data about soccer (or anything else).