The Brokenness of the 5 Star Rating System
As I was formulating the ideas below I had hoped to call this post "Fixing the 5 Star Rating System," but I think it might just be plain broken beyond repair.
The system is broken in such a way that the information it provides is not merely of dubious quality; one can argue it provides no information whatsoever.
Contextualization of Ratings
If you are staring down the barrel of a 4.5 star rating, the root question you cannot answer is why any one critic chose their particular rating. The first issue we run into in answering that question is context. If you order Lou's at dinner time and it shows up half an hour late, you might shrug it off. If you order Sarpino's drunk at 2am and it shows up 5 minutes late, you probably proffer a 1 star review before it even arrives at your doorstep.
And so the review is dripping with contextually driven subjectivity.
A Universal Measure (Lack Thereof)
Even if we were able to distill out context, we still lack a universal standard of measure. What one person defines as five stars another may call two. I posit that the customers who frequent Sarpino's are far more likely to give it a high rating than those who frequent Lou's. The Venn diagram between these two groups may in fact have minimal overlap, indicating that a 5 star review for Sarpino's means something like "people who regularly eat at Sarpino's like Sarpino's."
Even that imputes considerably more humanity to the rating than it deserves since “likes” is a powerfully subjective indicator. “People who regularly eat at Sarpino’s are likely to rate it highly” may be more accurate and certainly more objective, while considerably less interesting as a guide to my dinner ordering process.
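The self-selection effect above can be made concrete with a toy simulation. Everything here is assumed for illustration: the taste distribution, the 1–5 scale, and the threshold at which someone becomes a regular (and therefore a rater) are invented parameters, not measured data.

```python
import random

random.seed(1)

# Toy model of self-selection: each person has a private "taste" score for a
# restaurant on a rough 1-5 scale, but only people who already like the place
# ever order from it, and therefore only they leave ratings.
population = [random.gauss(2.5, 1.0) for _ in range(100_000)]

everyone_avg = sum(population) / len(population)

# Assumed threshold: only people whose taste clears 3.5 become regulars.
regulars = [t for t in population if t >= 3.5]
regulars_avg = sum(regulars) / len(regulars)

print(f"whole population: {everyone_avg:.2f}, raters only: {regulars_avg:.2f}")
```

Under these made-up parameters the broad population averages around the middle of the scale, while the self-selected raters average well above it: the published star count measures the enthusiasm of the fans, not the restaurant.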
The Stigma of Good
Let's further presume that we can arrive at a universally accepted standard of measure. We must still contend with socioeconomic dynamics influencing a rating. Any rating that's not 4+ is pretty damning. This stigma of good is prevalent across most crowd-sourced review contexts. The video game review scale is particularly affected: if a rating drops below the 90th percentile, you're probably looking at a full-blown Sunday Funday scenario.
Personally I am hesitant to ever leave a bad review online, because I know some people look at these stars to inform their buying decisions and I don't want to be a burden on someone else's bottom line. This was also the feeling of many of the people I talked to about the topic in my extremely scientific study (I asked two people) leading up to this post. Either people won't leave a rating at all, or they will rate average food and service a 5.
The stigma of good leaves us with massively over-inflated ratings where the difference between good and great is 0.1 star.
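The compression effect can be sketched the same way. This is a toy model, not a claim about any real platform: the error spread, the "give it a 5 unless something went wrong" rule, and the share of unhappy customers who bother to rate are all assumptions chosen for illustration.

```python
import random

random.seed(0)

def simulate_ratings(true_quality, n=10_000):
    """Simulate n raters for a restaurant of a given 'true' quality (1-5 scale).

    Assumed behavior (the stigma of good): any average-or-better experience
    earns a 5, and only half the unhappy customers rate at all, venting a 1.
    """
    ratings = []
    for _ in range(n):
        experience = random.gauss(true_quality, 0.7)
        if experience >= 3.0:
            ratings.append(5)      # average or better still earns a 5
        elif random.random() < 0.5:
            ratings.append(1)      # half the unhappy customers vent a 1
        # the other half never leave a rating
    return sum(ratings) / len(ratings)

decent = simulate_ratings(true_quality=3.5)     # merely decent food
excellent = simulate_ratings(true_quality=4.5)  # genuinely excellent food
print(f"decent: {decent:.2f}, excellent: {excellent:.2f}")
```

Under these assumptions, a merely decent restaurant and a genuinely excellent one both land in the high fours, squeezing the distance between good and great into a sliver at the top of the scale.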
Closing Statements
Based on this, we can say that if we could do away with contextual influence, if everyone judged the same attributes of the food on the same scale (and ideally had the same sense of taste), and if everyone could give an honest rating without irreparably damaging the entity reviewed, we might have some chance of gleaning insight from these stars.
Now… Does any of this sound similar to the traditional annual review process?