This is a quick note on altered data in Lewandowsky, Oberauer, and Gignac (2013). This was a false, garbage study published by the journal Psychological Science. Take a look at these columns below. They’re survey item responses on a 1–4 scale, from the dataset Lewandowsky posted at Bristol:
What do you notice? Is there anything strange about any of them, anything missing?
You might have noticed that there are no 4s under FMThreatEnv, and no 1s under CO2TempUp. (If you scroll down, this holds for the entire dataset.) That’s very odd given the data Lewandowsky posted at the University of Western Australia:
This is supposed to be the same data, with the same subset of 1145 participants, yet there are some 4s under FMThreatEnv and some 1s under CO2TempUp. (You’ll notice that FMUnsustain looks the same in both datasets.)
I noticed this when I first looked at the Bristol data a couple of years ago. I stayed silent at first because I was waiting for Steve McIntyre's team to publish their paper; I'm not sure they're still working on it. Then I waited because my own health limited me for a time. In any case, this is rich, because in the meantime Bristol's psychology chair sent a letter about me to my department chair at ASU, complaining about my public debunkings of Lewandowsky and trying to shut me down…
As of February 20, 2019, it looks like both datasets are now hosted by Bristol — that is, Bristol is hosting two datasets that contradict each other, at least one of them altered. On the Bristol page, click on PSCI 2013 and PSCI 2013 Extended — each has a CSV file. (The Extended version has a few extra variables/columns and is missing the 4s and 1s; both files have the same 1145 participants.) Act fast, because Lewandowsky may try another switch.
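Anyone can check this kind of discrepancy for themselves. Here is a minimal Python sketch that lists the distinct values appearing in a column of each CSV. The column names below match the actual dataset, but the filenames and rows are invented stand-ins; point the same function at the two downloaded files to run the real check.

```python
import csv
import io

def distinct_values(csv_text, column):
    """Return the sorted set of distinct values found in one CSV column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sorted({row[column] for row in reader})

# Invented stand-ins for the two posted files. In the real check, read
# each downloaded CSV with open(...).read() and pass the text in here.
uwa_csv = "FMThreatEnv,CO2TempUp\n4,1\n3,2\n2,3\n1,4\n"
bristol_csv = "FMThreatEnv,CO2TempUp\n3,2\n3,2\n2,3\n1,4\n"

for col in ("FMThreatEnv", "CO2TempUp"):
    print(col, distinct_values(uwa_csv, col), distinct_values(bristol_csv, col))
```

If one file returns ['1', '2', '3', '4'] for a column and the other returns only three of those values across all 1145 rows, the files cannot both be the same raw responses.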
What do these data pertain to? FMThreatEnv refers to the item that reads:
“Free and unregulated markets pose important threats to sustainable development.”
CO2TempUp refers to the item that reads:
“I believe that burning fossil fuels increases atmospheric temperature to some measurable degree.”
Lewandowsky used a crude 1–4 point scale for each item, where 1 meant Strongly Disagree, 2 meant Disagree, 3 meant Agree, and 4 meant Strongly Agree.
FMThreatEnv was reverse-scored, so not having any 4s in the released dataset actually implies that no one answered 1 (Strongly Disagree) to that item.
CO2TempUp was not reverse-scored. Not having any 1s implies that no one answered Strongly Disagree to that item.
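To make the reversal concrete: on a 1–4 scale, reverse-scoring maps a raw response x to 5 - x, so a raw 1 (Strongly Disagree) becomes a stored 4. A one-line sketch (the function name is mine):

```python
def reverse_score(raw, scale_max=4):
    """Reverse-score a response on a 1..scale_max Likert scale."""
    return scale_max + 1 - raw

# A raw 1 (Strongly Disagree) is stored as a 4 after reversal, so a
# reverse-scored column with no 4s implies no one strongly disagreed.
print(reverse_score(1))  # 4
print(reverse_score(4))  # 1
```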
These are raw participant responses, and we should never see these sorts of differences between datasets — this data shouldn't be touched (except for something like reverse-scoring, which wouldn't explain these differences). Obviously, someone altered this data in one or more files. This casts doubt on the dataset as a whole, since we don't know which data are real. I have no idea which file is the altered one; they could both be altered. I don't trust — and I don't recommend that you trust — any of Lewandowsky's data, for any study. For example, read about his 32,757-year-old and his 5-year-old. I'll be contacting Bristol's research ethics department about this matter.
José Duarte, PhD, is a social psychologist and data scientist. You can reach him at firstname.lastname@example.org.