Another victim of the replicant police

Jd Eveland
Socio-techtonic Change
4 min read · Oct 19, 2017

There’s a very interesting article in the New York Times describing the troubles that have beset a social psychologist named Amy Cuddy, who has become the latest target of the social psychology replication police. The article discusses her research and the questions raised about it, as well as the social dynamics in the field in response to this confrontation. It’s another example of the kind of “sociology of science” analysis that combines discussion of science with interpersonal soap opera. Enlightening, but not particularly rewarding.

I have no horse in this particular race. But I have done a lot of social research over the years, and as I see it, there are two fairly separable issues involved here.

First, there’s the issue of “replication” itself. I’ve written on this on previous occasions (e.g., here and here), and I find it to be in many respects a red herring. Replication is a concept that works well in small closed systems, particularly in the physical and biological sciences. But by its very nature, research in the behavioral sciences is heavily context-dependent, and findings are inevitably a function of time and place as well as of research procedures. In fact, it’s not really possible to replicate any major finding in the behavioral sciences; inevitably, the results will be different because the circumstances are different. The whole replication movement smacks to me of intellectual opportunism — a path to the big leagues for a few guys who picked up on this as a way to establish their own niche in a complicated and crowded field. It played right into the hands of those already crippled by the kind of “physics envy” that periodically overcomes psychology.

I should note that I’m a big fan of making data available to others, but Eveland’s First Law of Data — “Treat your data as you would wish to be treated, if you were a datum” — has to be respected. That is, those who propose to take your data and reanalyze them have to understand the circumstances under which they were collected, including the researchers’ interactions with their participants. The data aren’t just numbers that can be tossed into a data-analysis hopper and puréed; they must be interpreted within the context of their collection.

I’ve had my own experience with a failure to replicate. When I was teaching statistics, I liked to present students with real data alongside the articles based on those data; in other words, to approach data analysis with context in mind, not just as an exercise in mathematics. For one major study of Internet use that got a lot of publicity back in the 90s, I prevailed on the project PI (an old friend of mine) to let me have his data for teaching purposes. Working together, the students and I found some significant problems with his analysis that essentially negated his original findings. I wrote him a careful note explaining these difficulties and inquiring about possible explanations. Not only did I get no reply; the guy never spoke to me again. I never publicized this or tried to make a big deal of it, although I could have. It certainly never dawned on me to accuse him or his team of bad faith in research.

The second separable part of the issue is how the field has responded to replication issues in general and to Cuddy in particular. There is a definite pile-on mentality in social psychology and related fields — a tendency to go chasing after the latest big thing, and to kick back at those who climbed on board a little later than you did. And there has always been a tendency in the field to kick those who are down for one reason or another. It’s not a particularly collegial field, perhaps because it is so crowded and so hard to stand out. Cuddy is simply the latest example of someone who seemed to be getting well ahead of the field, which would have made her a target for tear-down in any event. It turned out that replicability was a convenient rubber chicken with which to whack someone rising above the pack, but if it hadn’t been that, it would have been something else. Right now, the replicability police are riding high, but it’s pretty much inevitable that they in turn will be whacked down for something sooner or later, as people come to realize that replication is really a kind of phony issue in this kind of research.

I can’t really comment on the merits of Cuddy’s research or the attempts to replicate it. Perhaps the original claims made for it were exaggerated. If so, that would place it in the company of probably 95% of what gets published in the field, which is rampantly prone to overgeneralization. I think there’s enormous room for improvement in how we structure and conduct research in our field, so that a measure of credibility can be restored to our findings. But I don’t see the current replicability movement as contributing substantially to that improvement.
