Let's talk about rankings

Dasapta Erwin Irawan
Good Science Indonesia
5 min read · Feb 6, 2021

This article is a copy of this thread: https://twitter.com/jelena3121/status/1356210115406397443?s=20. Now my picture has a meaning. :)

[Image: available as CC0 on Wikimedia]

We talk a lot about rankings here: what we think & how we feel about them, why we (dis)like them, what’s wrong with them, and so on. In this thread, I will try to disentangle the phenomenon. So buckle up, y’all, for a sociological ride through the world of rankings.

  1. Rankings are *comparisons of performances*. Think of rankings as a stage on which actors perform, in front of a jury (ranker) and an audience. Like in figure skating. Only here, it’s the jury who *creates the stage* by putting actors on it and having them play by its rules.
  2. In a ranking, two or more performers are never considered equal. That’s because rankings are *zero-sum* comparisons of performances. No matter how well you perform in absolute terms, there will always be the same number of possible positions. An occasional tie is not a counterargument. The overall picture still suggests a *hierarchy* based on *relative* evaluations of individual performers. (See the sketch after this list.)
  3. Most rankings aim to be inclusive. As long as you are a country, you are in principle eligible for a country ranking. Same for universities or restaurants. You *only* need to *perform* on their criteria.
  4. In the reality fostered by rankings, you can only perform better at the expense of others. You can only go up if someone else goes down. You can’t improve unless someone else is worse off. That’s the world rankings create, although rankers may insist they “only measure” it.
  5. Rankings do not measure quality “out there.” Rather, they produce a specific kind of performance-based reputation. The Human Development Index says, look at Norway — that’s what it means to be a developed country. Uni rankings say, Harvard — that’s excellence in higher ed.
  6. Some rankers say that there is an opaque reputation system in place known only to “insiders.” Because we don’t want students or policy makers to spend money unwisely, rankers claim they increase transparency and “level the playing field.” This is misleading.
  7. Rankings also suggest that reputation is scarce. Their zero-sum tables tell us there’s not enough of it going around. The only way to get some of it is by competing with others. In a ranking. In reality, however, reputation for performance is, in and of itself, not scarce.
  8. Rankers suggest that status competition is natural. They may say it’s always been there, but only the “insiders” could see it, whereas the rest (“stakeholders”) were left in the dark. Status competition is not natural. It is socially constructed by, among others, rankings.
  9. The fact that status competition is socially constructed does not mean it is not real. “If men define situations as real, they are real in their consequences.” (Thomas theorem). To be sure, status competition exists beyond rankings. But the competition rankings produce is of a different kind and it’s *specific to rankings*.
  10. Rankings want you to believe that competition will improve quality. There are many ways one can improve. But there is little to no evidence that climbing a ranking is one of them. Besides: “When a measure becomes a target, it ceases to be a good measure” (Goodhart’s law).
  11. Rankers typically claim that they are just neutral arbiters presenting “objective reality” with “hard numbers.” Not really. Rankings produce a social reality of their own. But they try very hard to wash their hands of this reality. If you confront a ranker with adverse effects, you may come across something like: “We don’t want this to happen! Mind you, we are merely observers who present their observations to the public. Look, we even publish guidelines on how to be cautious when consulting our tables.” There are many ways reality can be presented. There are many different social realities one can produce. Let’s ask them, again and again, in tweets, publications, at conferences, and workshops: why, of all things, a ranking?
  12. Rankers admit their methodology is not perfect. They often know its flaws better than anyone else. And they may say, well, that’s the best we’ve got. Any better ideas? Watch out! It’s a trap. Rankers listen to the critics because they like to say they listen to the critics. Rankers often honestly believe that all their issues come down to the methodology and the quality of the data. Often, too, rankers are wrong. Many critics honestly believe that rankings are “fixable.” Many of these critics are scholars who typically think something like “rankings are here to stay, so we’d better try to improve them.” It is exactly this type of critic that is the rankers’ most important ally.
  13. In view of 1–12, fixing a ranking’s methodology won’t fix the adverse effects. Making rankings super transparent, accountable, and well-governed could even aggravate them. A transparent and methodologically rigorous ranking based on sound data can lead to effects just as adverse as those of a non-transparent ranking with a shaky methodology and incomplete data. In fact, it might make things worse, because it *appears* more legitimate and makes criticism look “unreasonable” or “nonsensical.” Whenever rankers use terms like these, you might be onto something. This is why they try to steer you back into a line of criticism they are more comfortable with.
  14. There are basically two things rankers care about when it comes to the “how” of a ranking: a) rankings have to be PLAUSIBLE; b) rankings need to *look* SCIENCY. Plausibility of rankings: regardless of the numbers, if the top 10 or so of the world’s “best” doesn’t include the usual suspects, people won’t buy it. So rankers have to do it in such a way that it resonates with what is widely believed to be “the truth.” “Scienciness” of rankings: rankers need to convince us that their rankings are based on rigorous calculations and are done the way science is done. The more complicated, “robust,” and lengthy the methodology sections are, the more credible they seem. But: sciency ≠ scientific. Scholars engaging in methodological debates with rankers (see pt. 12) contribute to the much-needed scientific legitimacy of rankings. The same goes for scholars taking part in their surveys, sitting on their panels, attending their events, etc.
  15. Critics usually target the “how” of rankings: the motives, the calculations, the transparency, and so on. Although not unfounded, this distracts from the fact that rankings’ effects also have much to do with the nature of these devices, regardless of the motives or methods.
  16. We often overlook that rankings are a *practice* whose legitimacy and plausibility go beyond what some rankers do or don’t do. Rankings feed on this legitimacy. We tend to obsess over “successful” cases, while routinely disregarding the legitimacy of the practice itself.
  17. The points made in 1–16 hold, more or less, for most rankings. But not all rankings are created equal. Some are made to, say, influence policy. Others to make money. Some seem so “natural” that no one questions them (sports). Some are highly contested (states, universities, arts).
  18. Remember the stage metaphor? A “successful” ranking is one that has managed to convince the audience that its show is reality itself. Rankers work very hard so that the audience keeps buying into this illusion and keeps giving them the much-needed attention.
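To make the zero-sum claim in point 2 concrete, here is a minimal Python sketch. The universities and scores are hypothetical, purely for illustration: even when every performer improves in absolute terms, the ranking table cannot show it, because the number of positions is fixed and only the relative order survives.

```python
# Minimal sketch of point 2: rankings are zero-sum.
# The names and scores below are hypothetical, not real data.

def rank(scores):
    """Return 1-based positions, highest score first."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {name: pos for pos, name in enumerate(order, start=1)}

year_1 = {"Uni A": 62.0, "Uni B": 58.5, "Uni C": 55.0}
year_2 = {"Uni A": 70.0, "Uni B": 69.0, "Uni C": 68.5}  # everyone improved

print(rank(year_1))  # {'Uni A': 1, 'Uni B': 2, 'Uni C': 3}
print(rank(year_2))  # identical table: the absolute gains are invisible
```

Uni C improved by 13.5 points, yet its position is unchanged; in a ranking, it can only move up if Uni A or Uni B moves down.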

/ to be continued /


Dasapta Erwin Irawan
Good Science Indonesia

A lecturer who wants to be a teacher | Hydrogeologist | Indonesian | Institut Teknologi Bandung | Writer wannabe | openscience | R user