How We Are Making Sure The Science We Share Is Good

Annie Neimand, Ph.D.
Science of Story Building
6 min read · Jul 20, 2018


By Kelly Chernin and Annie Neimand

“Insights from the social sciences have the potential to be useful, but not all research is created equal. Many areas of study suffer from systematic flaws that limit the usefulness of the research they produce. Before trusting what a study has to say, it’s important to critically evaluate the assumptions and conclusions the researchers have made along the way.” - T. Frank Waddell, assistant professor of journalism, University of Florida College of Journalism and Communications

Findings that have long adorned psychology textbooks and found their way into the popular press, like the Stanford prison experiment, have come under scrutiny when critically re-examined. We must therefore accept that not all research is created equal.

To separate strong research from weak, we gathered researchers from the University of Florida College of Journalism and Communications with expertise in qualitative and quantitative methodologies to build a guide for assessing the research uncovered for this project.

We want to ensure that this guide reflects the best thinking from scientists interested in this endeavor. Below is our first draft of the Good Research Guide. We will use this guide to decide which research is strong enough to draw insight from for this project and others at the Center for Public Interest Communications at the University of Florida College of Journalism and Communications.

If you are a researcher, please use this Google document to add suggestions, recommendations and other questions that should be included. We will continue to vet and update ideas as they are shared with us. And we hope this guide will be useful to you in your work.

Good Research Guide:

  1. Has this work been published in an academic, peer-reviewed journal?
  2. Have you consulted with an expert in the field? Before you apply an insight from a specific study or area of research, run your ideas by a researcher with expertise in that area. Ask the expert about any potential vulnerabilities in the research you should be aware of.
  3. Have you explored this body of literature as a whole, especially checking for recent replications, as opposed to basing your insights on a single research finding? It is becoming more common for researchers to attempt to replicate the findings of existing studies. For example, seventeen labs attempted to replicate the 1988 study that led to the “facial feedback hypothesis” and eventually concluded that the study’s original findings did not hold up. Basing your understanding of an idea on a single study can lead to overconfidence in the insights a study or field has to offer.
  4. Has the paper been retracted, or has other work by the study author been retracted? Check here to find out whether a study has been retracted.
  5. Has the researcher explained how the sample size was determined? If a quantitative study has a complex factorial design (an experiment with more than one independent variable), it will need more participants. Sample sizes are usually noted by the symbol N. For example, if you see N = 527, that means 527 participants took part in the study. Usually, the larger the N, the easier it is to make a generalizable claim. In general, a qualitative study will have a smaller sample, since these studies are not attempting to generalize their findings to a larger public. For a sense of how researchers can justify a sample size before collecting data, see the power-analysis sketch after this list.
  6. Are the sample demographics (characteristics of the people included in the study) appropriate for the research questions? Studies from universities often rely on student participants, since this is an easy demographic to access. For some research questions, this sample is inappropriate: if you are interested in memory loss in older adults, a study that uses undergraduate participants will not be useful. Avoid studies that are not transparent about their sample demographics.
  7. Are the claims made exaggerated considering the methodology and sample? Possible signs of exaggeration might include findings that are too good to be true; scholars who make definitive claims; and studies that seek to apply their findings to people/groups beyond those who took part in the study.
  8. Is there evidence of p-hacking? P-hacking “occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant,” according to Megan L. Head, evolutionary biologist at the Australian National University, and her colleagues. P-hacking is difficult to detect unless you have access to the original data. However, if a study’s reported p-values cluster just below the 0.05 significance threshold (for example, between 0.04 and 0.05), there is cause to be suspicious: the researcher may have manipulated the data or analyses until they produced the desired result. To understand how easy it is to manipulate findings, see Christie Aschwanden’s political survey example and the toy simulation after this list.
  9. What is the source’s reputation (journal and researcher)? Does this study come from a predatory journal (check here and here)? “A predatory publisher is an opportunistic publishing venue that exploits the academic need to publish but offers little reward for those using their services,” says Megan O’Donnell, Data Services Librarian at Iowa State University.
  10. Is the study registered (when possible)? Registering studies is a new practice being introduced by some journals and research organizations to provide a check against exaggerated findings. Elisabeth Pain, contributing editor to Science, explains, “Before researchers even begin the experiments, they submit a manuscript presenting a clear hypothesis that they plan to test and their proposed experimental methods and analyses. In a first peer-review phase, the journal evaluates the research question’s importance and the proposed experimental methods’ rigor and soundness. Upon acceptance, authors are invited to conduct the experiments, with the guarantee that their results will yield a publication regardless of whether they are positive or even statistically significant… Once the experiments are complete and the researchers are ready to publish the results, the completed manuscript goes through a second-phase peer review that aims to check that two conditions have been fulfilled.”
  11. Is the effect size too good to be true? When studying subtle effects, like the influence of media in our everyday lives, researchers need a large number of participants to be confident in the results a study observes. A researcher may say that eating a banana a day is good for your health, but the overall effect on your health would be relatively small. Saying that eating a banana a day is good for your health is not a misrepresentation of the effect size, but saying that a banana a day is all you need to be healthy would be too good to be true. The effect-size sketch after this list shows the flip side: with a large enough sample, even a negligible effect can be statistically significant.
  12. Has the researcher published multiple articles in different journals using the same dataset? This practice is also commonly referred to as salami slicing. Researchers Vikas Menon and Aparna Muraleedharan argue, “what we would get is merely incremental or repetitive findings that are at best, of limited value and worse still may end up distorting scientific literature.”
    Vesna Šupak Smolčić, medical biologist at the University of Rijeka writes, “Even though there are no objective ways to detect this sort of redundant publication, manuscripts suspected of being salami publications often report on identical or similar sample size, hypothesis, research methodology and results, and very often have the same authors.” Brian Wansink’s research provides a recent example of this problematic practice. We recommend checking out the author’s other publications and examining their methodology.
  13. Has the study received funding from an outside source? If yes, does the funding source have a possible conflict of interest? There have been a number of cases where industry-funded research produced faulty or incomplete findings. In the past, cigarette companies funded research that claimed smoking was good for your health; it was even common to see advertisements featuring doctors who endorsed smoking. More recently, the alcohol industry has funded similar research to promote drinking as a healthy habit. It is best to be wary of industry-funded studies in which the funder may have compromised the results.

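To make item 5 concrete, here is a minimal sketch, in Python with statsmodels, of the kind of a priori power analysis a researcher might report to justify a sample size. The effect size, alpha and power values below are illustrative assumptions, not taken from any particular study.

```python
# A priori power analysis: how many participants per group are needed
# to reliably detect an assumed effect? (Illustrative values only.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # conventional significance threshold
    power=0.8,        # 80% chance of detecting a true effect of this size
)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```

A paper that reports a calculation like this is telling you its sample size was planned in advance, not improvised along the way.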
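For item 8, the toy simulation below shows one common p-hacking tactic: measuring many outcomes and reporting only the ones that cross p < 0.05. Both groups are drawn from the same distribution, so any “significant” result is a false positive. The sample size and number of outcomes are made-up values for illustration.

```python
# P-hacking by outcome shopping: run 20 t-tests on pure noise and
# count how many come out "significant" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants, n_outcomes = 50, 20

# Both groups come from the same distribution: no real effect exists.
group_a = rng.normal(size=(n_participants, n_outcomes))
group_b = rng.normal(size=(n_participants, n_outcomes))

# A separate t-test for each of the 20 outcome variables.
p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
            for i in range(n_outcomes)]

false_positives = sum(p < 0.05 for p in p_values)
print(f"'Significant' results from pure noise: {false_positives} of {n_outcomes}")
```

With 20 tests at a 0.05 threshold, roughly one false positive is expected by chance alone; a paper that reports only the winning test will look far more convincing than the data warrant.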
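And for item 11, a minimal sketch, with made-up numbers, of why effect size matters as much as statistical significance: with a large enough sample, even a trivially small effect yields an impressive-looking p-value.

```python
# Statistical vs. practical significance: a tiny true effect
# (0.05 standard deviations) tested on a very large sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000  # participants per group

control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.05, scale=1.0, size=n)  # tiny true difference

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p_value:.2g}, Cohen's d = {cohens_d:.2f}")
# The p-value looks "significant," but d is around 0.05: real, yet far
# too small to matter in practice, like the banana example above.
```

When a press release trumpets a significant result, the effect size is what tells you whether the finding is worth acting on.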
While this is not an exhaustive list, nor one that seeks to solve any of the major issues currently being discussed in academia, we believe it will serve as a helpful tool for identifying and assessing studies and as a safety net against sharing questionable work.

Many of these problems can be hard to spot or difficult to interpret. The most important thing to remember is that research should always be examined with a healthy level of skepticism. Science is a process, and our understanding of how things work is constantly changing.

---

Kelly Chernin, Ph.D., is a research associate in the Center for Public Interest Communications, manager of the Journal of Public Interest Communications, and an adjunct professor in the College of Journalism and Communications at the University of Florida.

Annie Neimand is the research director for the Center for Public Interest Communications and frank gathering, and a Ph.D. candidate in the Department of Sociology at the University of Florida.
