Thanks for the data and analysis, Leon. It offers some nice insight into the limitations of the h-index.
More generally, I’m not sure though that one can just take at face value the assumption that all citations are equal, since citations form the “atomic unit” on which these bibliometrics are based. How often do we see survey papers that nobody seems to have read, yet that have hundreds of citations? Is such a paper the pinnacle of achievement in research? We ought to be quite careful about how we design these metrics. With public opinion of research and “experts” quite low, the last thing the scientific community needs is to actively promote a way of ranking researchers and publication venues that is demonstrably easy to game.
I wonder how differently these metrics might come out if we counted only the “highly influential citations” reported by Semantic Scholar. At the very least, we ought to try some ideas for distinguishing citations that indicate a paper was directly influenced by another paper from those used purely for rhetorical purposes.
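To make that concrete, here is a minimal sketch of the comparison. The per-paper counts below are entirely made up for illustration; in practice one might pull real numbers from Semantic Scholar's Graph API, which I believe exposes an `influentialCitationCount` field per paper.

```python
def h_index(citation_counts):
    """Standard h-index: the largest h such that h papers each
    have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical counts for one researcher's seven papers.
total_citations = [100, 40, 12, 9, 8, 5, 3]
# Hypothetical "highly influential" subsets of those same citations.
influential_citations = [4, 10, 1, 6, 2, 0, 1]

print(h_index(total_citations))        # h-index over all citations
print(h_index(influential_citations))  # h-index over influential ones only
```

With these made-up numbers the conventional h-index is 5 but the "influential-only" h-index is 3, and the heavily cited first paper (a survey, say) contributes little to the latter, which is exactly the kind of divergence I would want to measure on real data.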