Viral Videos and Research Methods May Be Exciting, But Zeynep Got It All Wrong

--

Oof. Science, amirite?

By now you’ve likely all seen the controversial catcalling video made by Hollaback. In light of the piece going viral, Zeynep Tufekci nobly wrote an article about the importance of research methods (yes!), asserting that because the methods of this catcalling “study” were flawed, the study itself was as well (okay, still with you, Zeynep…). Specifically, she postulates that because the video did not feature as many white men as men of color, allegedly owing to uncaptured or poor-quality footage of the former, the data is invalid (er…wait…). She concludes:

“And that right there, is indefensible, methodologically or substantively. The only neutral explanation is that there is a lot of construction, ambulances and sirens going on in more white parts of New York, and somehow they just cannot catch a catcalling white guy. That sounds implausible to me, but if — a big if — that were the case, an ethical researcher would redo their study because the data is no longer valid because of the confounding variable — noise at non-minority neighborhoods.”

Except that this “study’s” hypothesis was never about whether men of color harass more than white men. It was looking at the rate of harassment. Contrary to what this author asserts, the data absolutely is valid, because it measures frequency, not the race of the offender. It’s looking at a binary variable, not a nominal variable. There is no need to redo the “study.” This author is confusing the validity of the data itself (a count that included all perpetrators, regardless of race or the quality of the footage) with the validity of the visual aids representing that data, which, indeed, may not have shown an accurate sample. The latter, however, is no reason to throw out the “study” altogether.

To continue with the author’s analogy in her piece, let’s say I had a study about how often ice cream falls off the cone per hour, and I am presenting at a conference. As a visual aid on my board, I’ve displayed photos of ice cream on the floor after it has fallen. The floor upon which I am conducting my study happens to be white tile, so the vanilla ice cream doesn’t show up as well on the white background when photographed. The chocolate ice cream, on the other hand, is picked up better by my camera, so I predominantly use those photos. However, the data presented on my board still reflects the actual rate of falling, regardless of whether each fall was captured on camera.

Is my entire study flawed now? Would it be rejected on those grounds alone? No. Because it was a preliminary study, in which not all variables were controlled. And more notably, no, because the visual aid has no bearing on the data itself. Which flavor of ice cream I show on the ground does not affect how often the ice cream fell.
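
If it helps to make that concrete, here’s a minimal sketch of the arithmetic, using toy numbers and hypothetical field names (nothing from the actual video or the Hollaback data): the rate is tallied from every observed event, while the photos, like the footage in the video, only sample from the events that happened to film well.

```python
from collections import Counter

# Toy log of ice-cream falls over a five-hour observation window.
# Each event records the flavor and whether the photo came out clearly
# (hypothetical fields, for illustration only).
events = [
    {"flavor": "vanilla",    "photo_usable": False},
    {"flavor": "chocolate",  "photo_usable": True},
    {"flavor": "vanilla",    "photo_usable": False},
    {"flavor": "chocolate",  "photo_usable": True},
    {"flavor": "strawberry", "photo_usable": True},
    {"flavor": "vanilla",    "photo_usable": False},
]
hours_observed = 5

# The outcome of interest is a simple count per hour, tallied over every
# event -- not the breakdown by flavor.
fall_rate = len(events) / hours_observed
print(f"Falls per hour: {fall_rate:.1f}")  # 1.2, using all six events

# The visual aid can only draw from usable photos, which skews toward
# chocolate...
photos = [e for e in events if e["photo_usable"]]
print("Flavors shown on the board:", Counter(e["flavor"] for e in photos))

# ...but dropping the vanilla photos does not change the measured rate above.
```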

In the discussion section of my paper, I may, however, note the observation that coffee ice cream seemed to fall more often than strawberry ice cream, and then suggest that a follow-up study controlling for these variables and exploring the discrepancies would be of interest. Sure. Because that is how most studies work. That’s in large part what the discussion section of any research publication is for: to note any shortcomings in your study, flag interesting or unexpected observations, and suggest areas for further inquiry. (It’s worth noting that the makers of the video seem to have attempted this with their note that perpetrators came from all backgrounds, and their later clarification that many of the white men unfortunately didn’t have quality footage associated with them.)

Perhaps it was socially irresponsible not to represent the sample more accurately in the visual aid; I’m not arguing otherwise. But that does not mean the data is invalid or that the study should be thrown out in full. I would argue, however, that if we’re going to advocate for science, let’s at least be scientific about it. CEREBELLUM’D!
