What’s in a rating?
This month, U.S. News & World Report released their rankings of diets. Rankings were published for “Best Overall Diet” as well as subcategories such as “Best Diabetic Diet” or “Easiest Diet to Follow”.
There’s something comforting about rankings. They’re easy to consume. They take a lot of the hard thinking out of the equation. They seem legitimate and “scientific” because… numbers.
However, when you read how the rankings were created, the story starts to unravel. A panel of 41 experts was assembled and asked to rate diets on a collection of factors on a scale of 1–5, “based on available evidence”. US News then “…converted the experts’ ratings to scores and stars from 5 (highest) to 1 (lowest)…”
It doesn’t actually matter what these factors were.
Here’s the thing about ratings and scores: It’s really hard to tell what they mean when you don’t know anything about the story behind them.
1) If an expert gave one rating on one day, how likely were they to give the same rating on a different day? (i.e. do they reliably agree with themselves?)
2) What inherent biases existed in these ratings?
3) How were those ratings converted to scores? Average? Highest? Lowest? Most frequent? Whim?
4) How much did the experts agree or disagree in their ratings?
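To see why questions 3 and 4 matter, here is a minimal sketch using made-up ratings (the real panel data isn’t public, so these numbers are purely illustrative): the same set of expert ratings produces noticeably different “scores” depending on the aggregation rule chosen, and a simple spread measure hints at how much the raters disagree.

```python
from itertools import combinations
from statistics import mean, median, mode

# Hypothetical ratings (1-5) from five experts for one diet -- illustrative only.
ratings = [2, 3, 3, 4, 5]

# The same ratings yield different "scores" depending on the aggregation rule.
print("average:", mean(ratings))    # 3.4
print("median: ", median(ratings))  # 3
print("mode:   ", mode(ratings))    # 3
print("highest:", max(ratings))     # 5
print("lowest: ", min(ratings))     # 2

# A crude disagreement measure: the mean absolute difference
# between every pair of experts' ratings.
spread = mean(abs(a - b) for a, b in combinations(ratings, 2))
print("mean pairwise disagreement:", spread)  # 1.4
```

A published ranking that reports only a single star count hides all of this: which rule was used, and whether the experts were broadly in agreement or split across the whole scale.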
When an information source pretends to be scientific but isn’t, it benefits from the aura of scientific rigour without doing the work. It’s entertainment disguised as useful information.
Question the story. Look behind the words. They’re always hiding something, and sometimes it’s important.