Whither Science Journalism? A Rant
Thoughts on “How Journalists Can Help Hold Scientists Accountable,” a recent Pacific Standard article that caused a bit of a stir, and which contained a few valuable ideas … embedded in a lot of sloppy thinking and misleading examples.
a) Plenty of journalists write regularly about science but don’t fall into the science writing genre or the community about which the PS piece says, “science journalists tend to position themselves as translators.”
This reflects the weird separation of the field into stories about what’s generally regarded as science — nominally science-y things like physics or animal biology or astronomy — and stories about matters of immediate public importance, which even when they’re inextricable from science are classified differently.
This separation is part of the problem with journalism about science (1). The PS article ignores it, then conflates the mainstream science journalists who are its targets with the sort of non-science generalists responsible for things like the idiotic This American Life piece…
… and that piece wasn’t problematic so much because it was based on fraudulent research, but because it was a 23-minute nationally broadcast story, purporting to explain human nature and show the way across a profound social divide, all based on a flimsy one-study foundation. Which even a cheerleading science translator wouldn’t do, and using it to illustrate the problems addressed in the article is misleading if not intellectually dishonest.
b) For that matter, the Bohannon pseudo-study didn’t actually entrap science journalists, but harried/incompetent daily-publication generalists. Which the PS story notes, before quoting Bohannon acknowledging this fact yet insisting it revealed something about mainstream science journalism.
To me that reads like … well … someone cramming the results they got into the conclusion they hoped to find, regardless of whether the results actually support it. But wait a second — isn’t that what careless, ambitious scientists and their oh-so-uncritical science journalist enablers are being accused of doing in the first place? Oh the irony!
c) Speaking of harried journalists, the elephant in the room here is the economics of the industry. If your job is to crank out articles and/or you’re not being paid a reasonable wage, then it’s hard to go deep and (if cranking) almost inevitable that you fall back on “a new study”/embargo-released stories.
And I don’t think study-based stories are a bad thing at all: it’s rare that a study is some standalone breakthrough; usually it’s just the latest manifestation of a line of inquiry, and publication makes for a convenient news hook into a rich lode of thought. But they need to be done right, and sure, it’d be great if the assembly line workers at LiveScience could go deep, but the fact they don’t has far more to do with their work parameters than science journalism’s uncritical roots.
(And, seriously, putting that on the post-World War I Scripps Science Service? One of the nice things about covering science is that it gives you a healthy skepticism for untestable hypotheses presented as causal connections between historical events and complex contemporary phenomena. And, you know, cherry-picked just-so stories.)
d) Lastly but most importantly … the reproducibility crisis isn’t about the entirety of science; it’s mostly about areas of social science (especially psychology) and biomedical research (basic/preclinical research and molecular biology). And within the latter field, the problems are not evenly distributed, but most manifest in a) all-star journals like Science and Nature, which reward novelty to a problematic degree and are so prestigious that the competition to publish there is destructively fierce and b) low-end journals that anyone with expertise takes with appropriate grains of salt, for obvious reasons.
Pick up, say, Brain or Current Biology or NEJM, and “a substantial portion” of those findings are not going to be false — much less Conservation Biology or Mycology or Journal of Applied Physics. It’s just a bullshit notion. A more nuanced point could be made about published findings reflecting prevailing biases and the limitations of current knowledge, but it’s not made. And getting this wrong to the degree the PS article did — to say something like, “A substantial portion — maybe the majority — of published scientific assertions are false” — is beyond careless. It’s inexcusable, even retraction-worthy. It’s about as accurate and useful as saying that economic activity leads to weapons manufacture.
Which isn’t to say the reproducibility crisis, and more broadly the confidence one can have in certain types of findings, isn’t a problem. Especially when findings involve big, complex models, logistically complicated and sensitive methods, or hard-to-gather data, people need to be careful. (I sometimes wonder if ecologists aren’t in for some painful surprises in the future.) And when biomedical science proceeds poorly, people get hurt …
… though arguably there’s disproportionate emphasis on biomedical science as the primary source of health progress in the first place — which is precisely the sort of deep critical systemic question that articles like this never seem to deal with, preferring instead to score easy points with fake studies about chocolate and make self-congratulatory calls for reform.
And reform might indeed be necessary: there are flaws and tensions within science journalism, including (in my opinion) in some quarters the techno-progressivist bent identified by the PS article; the shunting of journalism with active economic/social/political sensibilities from what’s usually regarded as science journalism; and a tendency to present complex issues (e.g., agricultural biotechnology) as simplistic binaries or elide perspectives that merit consideration (2). There are entire fields of study — genomics, social science — about which coverage probably ought to be read with rock-sized grains of salt, and conclusions treated as hypotheses by default. But the PS piece is uncritical thinking about uncritical thinking. It obscures as much as it reveals.
(1) Although I understand why the separation happens, and it makes some thematic sense. And lord knows it’s a wonderful thing that the more esoteric fruits of humankind’s curiosity get column inches and pageviews alongside national security and News You Can Use.
Also, I’m eternally grateful to Betsy Mason, my former editor at Wired, who encouraged me to go after things with economic/social/political aspects while also doing science-y stuff, and let me define science journalism as broadly involving anything to which the methods of science were applied. Mantis shrimp eyes? Poverty and child development? Chimpanzee research ethics? The origins of consciousness? Republican attacks on the EPA? It’s all science.
(2) That said: there are so many excellent science journalists and journalists writing about science right now. The AP’s redoubtable Seth Borenstein names a few off the top of his head here; there are scores if not hundreds more, and dozens of excellent publications or science sections, and many have been influenced by the very critiques the PS article offers — which, after all, have been around for years, to the point where they’ve changed the science journalism landscape. That landscape is not what it was five or ten years ago.