What my ratings mean, and why I use the five-star system
Those of you who read my book reviews or see my ratings on Goodreads might sometimes wonder what it means when I give a book a certain number of stars, why I rate things at all, or why I use this particular system.
Here’s the system and what the ratings mean.
- ★☆☆☆☆ (1 star): Unforgettably bad. So bad that it stands as a lifelong example of the sort of work you would prefer never to waste your time on or endure the displeasure of again, and of the qualities you should strive to avoid in any such work that you produced yourself.
- ★★☆☆☆ (2 stars): Memorably below average. Bad enough that you would recommend others seek out something else instead. Provides some noteworthy examples of the qualities that good creative work should not display.
- ★★★☆☆ (3 stars): Good. An average, competent work, perfectly enjoyable, but offering no reason to continue thinking of it afterward, either well or badly, and no reason to recommend that others either seek it out or avoid it.
- ★★★★☆ (4 stars): Memorably above average. Good enough that you would recommend others seek it out instead of something else. Provides some noteworthy examples of the qualities that good creative work should display. Additionally or alternatively, provides important knowledge or fuel for reflection.
- ★★★★★ (5 stars): Unforgettably good. So good that it stands as a lifelong example of the sort of work you would prefer only to spend your time on or to experience the pleasure of, and of the qualities you should strive to emulate in any such work that you produced yourself. Additionally or alternatively, offers potentially life-changing insights or knowledge, or long-term cause for continuing reflection.
The system is symmetrical on purpose, and its central principle is that works should be rated on how strongly and memorably they diverge from the merely competent.
Why I use this system
You could guess that I use the five-star system because it is already common for reviews of books, films, music, and other artistic works. You’ll have seen it in magazines and on websites. Goodreads uses it for books, which means I’d have to choose a star rating there anyway, but I was using it long before Goodreads existed, and not merely because it’s popular.
During the 2000s, I read some discussions of ratings in the videogame review magazine Edge, which uses a ten-point integer system, whereas many other gaming mags rate on a percentage scale or an equivalent decimal scale, often with an overall mark derived from an average of scores in categories such as graphics and gameplay. Edge observed that many such magazines don’t use the full range of their rating system, typically rating all games between 70% and the high nineties, with anything notably above average scoring 90% or more.
As far as I can recall, Edge suggested that other magazines rated this way partly out of fear that a sub-70% mark could be taken as an insult by a publisher or developer, and result in reprisals. That is only an explanation, though, not a justification: quite apart from its being dishonest, it doesn’t tell us why a percentage rating system is a bad idea, or why something else would be better. Edge maintained that in a ten-point system, average games should get a five, not a seven.
I don’t recall whether Edge raised the issue of granularity, but for me it’s the key to the five-star system’s excellence. A five-star system offers just a small number of potential ratings, especially if you refuse to give half-stars. With just five options, you can be clear about what each rating means, and choose easily between them. By contrast, a percentage system offers so many choices that it is impossible to make meaningful distinctions between them all. Just what is the difference between 97% and 98%? Even if it were possible to distinguish those ratings clearly, would the difference be useful enough to readers that it’s worth making at all? (Answers: practically nothing, and no.)
Excessive granularity is arguably why videogame magazines that use percentage ratings give most scores between 70% and 100%: it reduces the number of choices from a hundred to about thirty. But a thirty-level rating system is still unmanageable. If you try to write down what the ratings mean, you will end up splitting your thirty levels into ranges, and then conceding that your choice of, say, 74% indicates only that a work falls on the high side of “merely acceptable”, on the basis of your subjective feeling. That a work rates 74% rather than 73% will still communicate little to a reader.
In contrast, the five-star system that I use offers few enough choices that clear distinctions can be made and explained reliably. Edge believed that a ten-point system fulfilled this criterion, but I still find it more granular than necessary. A five-point system tells you whether a work is merely indifferent, or diverges from the average enough that it should be recommended or avoided. In the difference between two stars and one, and between four stars and five, it additionally shows the strength of that recommendation: whether it’s a “man, you have to see this” or an “it stinks; don’t read/watch/listen if you value your sanity”, as distinct from an “it was pretty good, you should go” or a “you probably shouldn’t waste hours on it while the clock ticks towards your death”.
What the five-star system doesn’t tell you
A five-star system lets you sort works into five groups based on their “goodness” according to some criteria. My version of the system indicates how “good” or recommendable a work is, but not what makes some works good and others bad.
There are hints in my descriptions: good creative work (for me, chiefly books) must be thought-provoking (not merely entertaining or diverting), pleasurable to experience, and wise or well-informed. That last criterion is arguably not relevant to abstract art or instrumental music.
But you don’t have to agree with me about what makes good art to use the system. You can strip this bit away and focus on the degree to which a work is memorable and exemplifies good creative practice, or on something else. One author friend of mine, in his reviews, treats whether a work succeeds at whatever it set out to do (to depict something, to communicate an idea, to evoke a feeling or create an experience, and so on) as a central element of its quality. I agree with him, but it’s not as central to my system.
Finally, a rating alone tells you nothing about how a reviewer arrived at their evaluation: nothing about the content, nothing about its particular qualities, no reflection on the work’s social and historical context, or on its personal significance for the reviewer and its potential significance for the reader. It gives you little to think or talk about. That’s what the reviews themselves are for.
Why you won’t see many one- or two-star reviews here
Review-focused publications should be expected to use the full range of their rating system because they’re often obliged to review everything that comes out if it’s significant in some way: commercially, historically, or aesthetically. Magazine film reviewers, for instance, might have to review as many formulaic superhero movies (a source of ones and twos, for me) as they do inspired and aesthetically accomplished dramas (where I would get most of my fours and fives).
A blogger or Goodreads reviewer is unlikely to feel under such an obligation: as one person, they can’t review as many books/films/games/records as a magazine can, and in any case, they wouldn’t want to be that indiscriminate. As individuals, we tend to consume what we feel is necessary or pleasurable, with some forays into what is recommended, popular, or classic. So it is natural that an individual reviewer will give more threes, fours, and fives than ones and twos. Threes and fours should be relatively common, fives very rare, and ones rarer still (since we will often have advance warning to avoid them). So it is here, and because twos and threes leave so little to think or say about them, written reviews of them will seldom appear at all, and that is very much for the better.
Originally published at benhourigan.com on January 24, 2016.