Hey Andrew, it’s really great to see you have given some serious thought to my article. Thanks a lot for taking the time to share your thoughts!
I will try to address your points one by one:
I. The demographics of those who rate movies
The demographics of those who rate movies are irrelevant inside the framework of my reasoning. Let me summarize it.
I made these two assumptions:
- Movie ratings should reflect movie quality.
- Most people think that most movies are of an average quality.
These two assumptions led me to infer that movie ratings should be normally distributed.
So the demographics are irrelevant, as long as the distribution is normal. That’s the main condition that has to be satisfied.
An interesting statistical angle would be to see if there’s any relation between the demographics of those who vote and the distribution of ratings.
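The expectation above can be sketched as a quick simulation. All the numbers here are hypothetical (an "average" movie score of 5.5/10 and a spread of 1.5 are my own illustrative choices, not figures from the article):

```python
import random

random.seed(0)

# Hypothetical: 10,000 ratings drawn from a normal distribution
# centered on an "average" score of 5.5/10, clamped to the 1-10 scale.
ratings = [min(10, max(1, random.gauss(5.5, 1.5))) for _ in range(10_000)]

mean = sum(ratings) / len(ratings)
# Under the two assumptions, most ratings cluster near the middle of the scale.
middle_share = sum(1 for r in ratings if 4 <= r <= 7) / len(ratings)
print(round(mean, 2), round(middle_share, 2))
```

Roughly two thirds of the simulated ratings land between 4 and 7, which is the shape I am arguing the real rating distributions should resemble.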
II. Reporting bias on IMDB
I love how you’re trying to explain the IMDB distribution. It’s a really good hypothesis.
III. IMDB users are more likely to watch average and above-average movies
It may be so, but the objection does not apply to my analysis, because the ratings analyzed corresponded to the same set of movies for all four websites.
There were 214 movies for the 2017 dataset, and each movie has ratings from the four websites.
In other words, I have not analyzed some movies for Metacritic, and some others for IMDB. They were all the same movies, for all four websites.
IV. Your two beliefs
I tried to clarify in previous responses the nature of belief, the status of my assumptions, and how it is possible to construct a different line of reasoning.
If you are interested, please take a look at this response.
Also, I don’t think you can explain the IMDB distribution by saying that bad movies rarely make it to the voters. It’s not as if IMDB has a different catalog of movies than the other websites do. Pretty much the same set of movies reaches voters on every site. You have to explain the IMDB distribution in relation to the distributions of the other websites’ ratings.
Your previous hypothesis on the IMDB distribution was far better.
V. Should the distribution of the tomatometer be normal?
Yes, I have argued in the article that it should be:
“Anyway, I guess it should still boil down to the same normal distribution, with most of the movies having a moderate difference between the number of positive reviews and the negative ones (rendering many ratings of 30% — 70% positive reviews), and a few movies having a significantly bigger difference, in one way or the other.”
I guess the only way to normalize this positive-negative rating system is to do what the Rotten Tomatoes team does, which is to take percentages of these ratings.
But I think there’s a huge risk of misrepresenting the actual rating this way.
If you have 10 critics who each voted 6/10, the average rating is 6. However, if you normalize by taking percentages, you get a 100% “rating” for that movie.
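The arithmetic above can be sketched like this. I am assuming a review counts as positive when it is 6/10 or above (roughly what Rotten Tomatoes does); the function names are mine:

```python
def average_rating(reviews):
    # Plain mean of the individual critic scores.
    return sum(reviews) / len(reviews)

def percent_positive(reviews, threshold=6):
    # Tomatometer-style score: share of reviews at or above the threshold.
    positive = sum(1 for r in reviews if r >= threshold)
    return 100 * positive / len(reviews)

reviews = [6] * 10  # ten critics, each rating the movie 6/10
print(average_rating(reviews))    # 6.0 — a middling movie
print(percent_positive(reviews))  # 100.0 — yet a "perfect" tomatometer
```

The same set of reviews yields a mediocre 6/10 average but a perfect-looking 100% positive score, which is exactly the distortion I am worried about.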