Social science is utterly unconvincing. Like the claims of fad diets, social science’s conclusions are often anecdotal, rather cursory, partial to the implications of their studies, and non-reproducible.
It’s not hard to see why: social situations are complex systems with thousands of independent variables. Even condensed matter physics is simpler. Ideally, one would identify behavioral variables that consistently associate with one another and use their shared covariance to unveil an underlying psychological factor.
But instead of factor analysis, the stingy researcher mired in publish-or-perish would much rather observe a one-dimensional relation between a pair of these variables and call it a day. Once published, these simple studies can’t easily be sifted out of the existing literature.
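To make the contrast concrete, here is a minimal numpy sketch with simulated data (the loadings and noise level are made up for illustration). Several behavioral variables covary because they share one latent cause, and it is that shared covariance structure, not any single pairwise correlation, that reveals the underlying factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one latent psychological factor driving four observed
# behavioral variables (loadings and noise scale are invented).
n = 5000
latent = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.7, 0.6])
observed = np.outer(latent, loadings) + 0.4 * rng.normal(size=(n, 4))

# The observed variables covary because they share one cause; the top
# eigenvalue of their correlation matrix captures that common structure.
corr = np.corrcoef(observed, rowvar=False)
eigvals, _ = np.linalg.eigh(corr)          # ascending order
top_share = eigvals[-1] / eigvals.sum()

print(f"variance share explained by one factor: {top_share:.2f}")

# A pairwise study, by contrast, inspects a single correlation
# and never sees the common structure at all.
print(f"pairwise corr of variables 0 and 1: {corr[0, 1]:.2f}")
```

With four variables driven by one factor, a single component explains the large majority of the shared variance; a pairwise study would report one correlation and stop there.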
In STEM, one verifies a paper by rebuilding the experiment, modifying some key variables, and seeing if the damn thing works. It’s imperfect, but this replication process regulates reproducibility in STEM to an extent. Without a similar scientific standard in social science, it’s significantly more difficult to see which studies are even accurate, especially when the studies directly contradict one another.
This opens the floodgates for third-party interests like administrators, school boards, unions, and corporations — after all, if you can fund social studies that, by design, can support any conclusion you want, why not tilt the leading social paradigm in your favor? Coca-Cola did it in fad diets to artificially downplay the role of sugar in heart disease; why not the social sciences?
And thus is born the paradigmatic behavior of sociology itself. Instead of generating conclusions from behavioral studies, social scientists do the opposite: they begin their study with a fabricated conclusion and cherry-pick behavioral data to support that conclusion, often at the behest of the institutions they’re associated with.
No scientific field is immune from external influence. But social science is the least insulated from confounding incentives. If social scientists stray from predetermined conclusions, they get the boot. Even if their work was sound in its reasoned conclusions, social science is ultimately guided not by rigor and merit, but by paradigms and politics.
The most recent example came in an exchange between the Wall Street Journal and a paper published in PNAS. To say that the WSJ’s coverage of a factually sound but unpopular sociology paper raised eyebrows is an understatement: the publicity led to firings and compelled the original authors to retract their paper. The story follows a tragic — albeit hilarious — turn of events that sees social scientists contradict themselves, academic administrators pushed over the ledge by unions, and a general microcosm of the greater erosion of meritocracy and rigor in research as a whole.
In 2019, David J. Johnson of the University of Maryland and Joseph Cesario of Michigan State University, among others, published “Officer characteristics and racial disparities in fatal officer-involved shootings.” The paper was published in PNAS and was reviewed by Kenneth W. Wachter of UC Berkeley.
Their paper had three key findings: A) as the proportion of white officers in a fatal officer-involved shooting increases, the victim is not more likely to be of a racial minority; B) race-specific county-level violent crime strongly predicts the race of the civilian shot; C) while the available data is not comprehensive, there is thus far no overall evidence of anti-Black or anti-Hispanic disparities in fatal officer-involved shootings. They continue by making the contrasting point that in non-lethal incidents, there is a clear racial bias against Black and Hispanic individuals.
Their findings are backed by Ross (2015), Fryer (2016), Winterhalder (2016), and several others. But they do not easily fit the public narrative surrounding police brutality. Their results are disputed by a world of freelance journalists who, ironically, refer to actual data scientists as armchair statisticians. Their criticism arises from the mistaken idea that what’s being measured in these studies is deaths per police encounter by race. What’s actually being measured is the probability that the victim was of a particular race given that there was a fatal shooting.
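The difference between the two measures can be made concrete with toy counts. The numbers below are invented purely for illustration; they are not the paper’s data:

```python
# Hypothetical counts, invented for illustration -- not the paper's data.
encounters = {"white": 40_000, "black": 10_000}   # police encounters by race
fatal      = {"white": 28,     "black": 12}       # fatal shootings by race

total_fatal = sum(fatal.values())

# What the paper measures: P(victim's race | a fatal shooting occurred)
p_race_given_shot = {r: fatal[r] / total_fatal for r in fatal}

# What the critics assume is measured: deaths per encounter, by race
p_shot_given_encounter = {r: fatal[r] / encounters[r] for r in fatal}

print(p_race_given_shot)        # conditioned on a shooting having happened
print(p_shot_given_encounter)   # conditioned on an encounter having happened
```

The two dictionaries answer different questions, and one cannot be recovered from the other without the encounter counts, which is exactly the confusion at issue.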
A minor clarifying correction to the paper was issued in April 2020, but the authors stood by their original conclusions and demonstrated that, thus far, the critiques against their conclusions had no evidence to back them up.
The paper lay mostly unnoticed until June 2020, when Heather Mac Donald wrote an op-ed in the Wall Street Journal citing it. Interestingly, she wrote about the same paper in a similar capacity in September 2019, but that piece went unnoticed, likely because police brutality wasn’t yet in popular online discourse. Mac Donald, by the way, has been described by the Claremont Colleges as a “fascist, a white supremacist, a warhawk, a transphobe, a queerphobe, a classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.” Man, she must be evil.
According to the Boston Globe, these were Heather’s most controversial points in the op-ed:
“Crime and suspect behavior, not race, determine most police actions.”
“The [MSU and UMaryland] researchers found that the more frequently officers encounter violent suspects from any given racial group, the greater the chance that a member of that group will be fatally shot by a police officer. There is ‘no significant evidence of antiblack disparity in the likelihood of being fatally shot by police,’ they concluded.”
“Research by Harvard economist Roland G. Fryer Jr. also found no evidence of racial discrimination in shootings. Any evidence to the contrary fails to take into account crime rates and civilian behavior before and during interactions with police.”
“Hold officers accountable who use excessive force. The Minneapolis officers who arrested George Floyd must be held accountable for their excessive use of force and callous indifference to his distress. Police training needs to double down on de-escalation tactics.”
This is fascist rhetoric? These points are backed by a mountain of evidence that continues to elude social scientists who try to invalidate it. Now, it doesn’t undercut the police brutality argument — it’s clear that racial biases exist in the exertion of nonlethal force, and that the probability of a shooting given the victim’s race is currently unknown — but it clearly doesn’t cohere with the current police brutality movement, which predicates its existence on the idea that racial biases are most salient in police-encounter shootings.
The WSJ op-ed kicked off a firestorm at Michigan State University. The Graduate Employees Union compelled the MSU press office to apologize for the “harm it caused” by mentioning Mac Donald’s article in their newsletter. Physicist Steve Hsu, who had approved funding for Johnson and Cesario’s paper, was sacked under pressure from the Graduate Employees Union. PNAS, under fire from MSU administrators, editorialized that the paper had been “poorly framed” — even though it got through their own three levels of editorial, peer, and factual review.
PNAS’s public statement on the “poor framing” of the paper has nothing to do with its evidence or conclusions. Instead, it spends some time elucidating the minor correction, which had long since been issued by the time the op-ed was written, and then justifies the paper’s “poor framing” by observing that the op-ed “prompted renewed attention to the original [paper] and the ensuing debate.”
Unlike Mac Donald, a “responsible reader would see that the paper does not speak to differing rates of White or Black officers killing Black civilians,” according to PNAS’s editorial. But Mac Donald never attributed anything like that to Johnson and Cesario’s paper.
When Johnson and Cesario submitted their original retraction request, they noted that their paper had been misused by Mac Donald to “support the position that the probability of being shot by police did not differ between Black and White Americans.” There is clearly a racial difference in the per-capita probability of being shot by police: just divide the number of people of a given race who were shot by the total number of people of that race living in the United States.
But where did Mac Donald even imply this? All she’s noted is that there’s no clear evidence for racial discrimination in the present data once you control for crime rates and civilian behavior — the same conclusion as Johnson and Cesario’s paper. She never supported the idea that the probability of being shot by police didn’t differ between Black and White Americans; these are two fundamentally different measures. That’s not a misinterpretation on Mac Donald’s part; that’s a lack of reading comprehension on the paper authors’ part.
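A toy calculation, again with invented numbers, shows why these measures are fundamentally different: per-capita shooting rates can differ by race purely because encounter rates differ, even when the per-encounter rate is set identical across groups by construction.

```python
# Invented numbers for illustration only.
population = {"white": 200_000, "black": 50_000}
encounters = {"white": 8_000,   "black": 4_000}   # encounter rates differ

# Assume an identical per-encounter shooting rate for both groups.
P_SHOT_PER_ENCOUNTER = 0.001
shot = {r: encounters[r] * P_SHOT_PER_ENCOUNTER for r in population}

# Per-capita probability of being shot still differs by race...
per_capita = {r: shot[r] / population[r] for r in population}
# ...even though the per-encounter probability was equal by construction.
print(per_capita)
```

In this construction the per-capita rate for one group is double the other’s while the per-encounter rate is identical, so a claim about one measure neither supports nor contradicts a claim about the other.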
Beyond Mac Donald’s op-ed, the retraction provides no further evidence or reasoning why they retracted their paper. Instead, they imply that their conclusions are inappropriate in the context of the current public debate over police brutality. They didn’t describe how those conclusions were inappropriate, or how they even overstepped their bounds. It flies in the face of their correction statement in April where they stood by their conclusions due to their factual veracity.
Cesario directly contradicted his original retraction when he wrote a letter to the editor of the WSJ stating that the “reason for the retraction had nothing to do with the claims made by Ms. Mac Donald.” He directly cites her in his original retraction. He also mentions two emails he wrote to Mac Donald about the op-ed, neither of which could be produced by him or Mac Donald (she never received anything from him).
But let’s assume that the results of the study were indeed misused by the press. The precedent set by retracting a paper in response to public reaction is unnervingly dangerous. If politics can directly dictate which conclusions can be published, the entire basis for academia flies out the window.
This is not an issue localized to social science. Administrative boards are beginning to disregard faculty input in faculty matters. California State University recently built a new ethnic studies curriculum and made it a graduation requirement without consulting the ethnic studies faculty. Through a vote by its board of trustees, the University of California system dropped SAT and ACT scores from admissions consideration until 2024, despite outcry from the liberal arts faculty.
As in the case of Johnson and Cesario’s paper, morality is increasingly being used as justification for disregarding data and healthy academic debate. Papers on the male variability hypothesis, criticism of Beijing, and several other topics that don’t conform to contemporary moral narratives were quickly pulled after public backlash, while the hoax papers of the Grievance Studies affair, which did conform to those narratives but were purposely and obviously fake, were quickly published in top-tier humanities journals.
Increasingly, it’s not the findings themselves that pose a threat, but the hypothetical possibility that others will use these facts to justify discrimination. But it’s important to distinguish between an idea, the researcher positing that idea, and the real potential for negative behavior. It’s also important to recognize that morality is a fluid construction and should not dictate what gets censored in academia. The problem exists in the same vein as the discussion surrounding cancel culture and the right to free speech versus the right not to be offended (aka the push to legitimize ad hominem attacks).
Factual evidence, no matter how unnerving or nauseating, should never be censored or retracted in an academic setting purely because of its moral optics.
In the 1960s-80s, the prevailing morality of the time obstructed an accurate characterization of civil rights issues. In the 1920s, you couldn’t even publish a paper that went against the narrative that race and IQ shared a causal relation. Why would we want a repeat of that?
Conor Friedersdorf, who disagrees with Mac Donald in many areas, wrote an awesome Atlantic article recognizing the need for open debate and disagreement in academia. He highlights the troubling trend that academic institutions tend to see themselves as moral authorities on social and economic matters, often (ironically) regardless of what the data may actually suggest.
I haven’t seen many oxymorons as moronic as “moral authority.”
Sweeping declarations of one’s racism, xenophobia, bigotry, and whatnot don’t fuel academic debate. Instead, they betray a deeper anxiety in the minds of university students, faculty, and to a greater extent their administrators: they might be wrong, or worse, they may partially be at fault for some of society’s issues.
But no social progress was ever inspired or sponsored by an academic institution. Progress was driven by individuals who stuck their necks out for legislative reform and showed society, person by person, that their alternate social paradigm was legitimate and preferable to the prevailing sentiment.
As Stephen Fry paraphrases Yevgeny Zamyatin: “Progress isn’t achieved by preachers or guardians of morality, but by madmen, hermits, heretics, dreamers, rebels and skeptics.”
Academia has always aspired toward simply being the engine for describing data-driven ideas. By design, its objective goals ought to insulate it from the morals and prejudices of its time and reveal information (mostly) untouched by human emotion. The Kinsey Reports are the best example of this.
The issue of police brutality, like all important issues, is nuanced and complex. So let us solve this problem expediently and comprehensively — and not interrupt research for any reason other than rigor and merit.
Politics? Morality? Leave academia alone.