Ben Redan
2 min read · Nov 20, 2018


Great article, Jez! I definitely agree this may relate to political polarisation, where much new- and old-media engagement seems driven by outrage at the Other and solidarity with one's own 'side'. It's similar to Nick's point about great and terrible experiences being over-represented in rating systems at the expense of less emotive, more moderate experiences that may well constitute the hidden, boring middle of the bell curve.

Weighting ratings and other approaches may well help with trust in ratings, as Nick suggests, and keep these systems from becoming absurd or non-informative.
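Nick's weighting scheme isn't spelled out here, but one common approach along these lines is a Bayesian average, which pulls items with few ratings toward a prior mean so that a single 5-star review can't produce a "perfect" score. A minimal sketch, with illustrative (not sourced) values for the prior:

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Weighted rating: blend observed ratings with a prior mean.

    prior_mean and prior_weight are hypothetical tuning choices,
    not values from the article -- a real platform would fit them
    to its own rating distribution.
    """
    n = len(ratings)
    if n == 0:
        return prior_mean  # no evidence: fall back to the prior
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# One 5-star review barely moves the score off the prior...
print(round(bayesian_average([5]), 2))        # 3.18
# ...while many consistent reviews dominate it.
print(round(bayesian_average([5] * 200), 2))  # 4.9
```

The effect is exactly the kind of dampening discussed above: extreme scores only emerge once enough moderate evidence accumulates, which blunts the over-representation of outrage and delight.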

I'm not sure how such metrics could help with social media and polarisation, unfortunately, especially as any measures might be seen as censoring one side or favouring another at the algorithmic level (regardless of intent or reality)!

I wonder if a lot of this relates to the anchoring effect and the need for simplicity in making decisions – the need to quickly understand a particular case by framing its relation to some norm or reference point. I remember reading that a hurricane rating system had been problematic when an extreme category was downgraded to a less extreme but still extremely dangerous one. Many residents anchored their interpretation of the second category in terms of the first – taking the downgrade to mean the risk was fading – even though they were still in great danger, and tragically did not evacuate.

The five-star system's 'normalising' of five-star ratings seems similar: anything less than a 5 or a high 4 is often anchored as an abnormal, substandard deviation from a 5.0 norm, which is ridiculous and brings the whole system into disrepute by making perfection the baseline.

But maybe it happens partly because of the cognitive advantage of simplicity. A 5-star rating implies fantastic overall – that nothing in particular substantially detracted from the whole experience. If (overall) rating systems normalised in the 2–3 range with outliers on either side, what a rating 'means' might seem far more ambiguous to the reader, both for individual ratings and their aggregation. Is 2 good? Is good good enough? What does 3 refer to specifically? And so on. A 5, by contrast, seems very definite. Making ratings more granular or using multiple ratings could help here, but the more effort involved, the less likely they are to be used.

As absurd as 5 as the norm may be, it seems to give a platform's community some norm or anchor that simplifies things. The challenge might be how to keep a system simple enough to orient users, and easy enough to actually be used, without this orientation becoming ultimately counterproductive in all the ways your article highlights. Fascinating and topical issue!
