Avenues of Improving Statistical Research Practices

Verica Buchanan
Published in Human Systems Data

Mar 15, 2017

One of the most valuable elements of this week’s readings (Greenland et al., 2016; Gelman, 2016; Gelman, 2017) was the authors’ detailed account of the numerous misconceptions surrounding p-values, confidence intervals, and power. They also stressed that researchers across the board should move away from restrictive, ‘simplistic’ significance reporting and instead report a richer set of values: confidence intervals and power, the assumptions made while generating and testing hypotheses, the criteria for including and excluding participants, and so on. In short, research findings should provide a precise and detailed account of everything that could affect the research outcome. But why is this advocated?
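To make the recommendation concrete, here is a minimal sketch (mine, not the authors’) of what such reporting might look like in Python, using simulated data and hypothetical group labels. It reports the estimated effect with a 95% confidence interval and a power figure alongside the p-value, via scipy and statsmodels; computing power from the observed effect size is only one reading of the advice, shown here purely for illustration.

```python
# Minimal sketch: report an effect estimate, a 95% CI, and power,
# not just "p < .05". Data are simulated for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=50)    # hypothetical control group
treatment = rng.normal(loc=0.5, scale=1.0, size=50)  # hypothetical treatment group

# Two-sample t-test: keep the p-value, but do not stop there.
t_stat, p_value = stats.ttest_ind(treatment, control)

# Mean difference with a 95% confidence interval (pooled-variance formula).
diff = treatment.mean() - control.mean()
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
se = pooled_sd * np.sqrt(1 / len(treatment) + 1 / len(control))
df = len(treatment) + len(control) - 2
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

# Power of the test for the observed standardized effect size (Cohen's d).
d = diff / pooled_sd
power = TTestIndPower().power(effect_size=d, nobs1=len(treatment),
                              ratio=1.0, alpha=0.05)

print(f"mean difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}, power = {power:.2f}")
```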

As it turns out, there are numerous reports of sloppy research practices and misconduct. In fact, some researchers claim that the majority of published findings are false (Ioannidis, 2005). This concern is reinforced by recent findings that many studies cannot be replicated. For example, the American biotechnology firm Amgen could replicate only six of 53 landmark cancer studies, while the German pharmaceutical company Bayer reproduced only about 25% of 67 landmark studies it examined (Economist, 2013). This replication problem is not unique to the medical domain; it exists across numerous fields, including psychology (Cumming, 2014).

To curtail sloppy practices, researchers are encouraged to apply a wider variety of statistical analyses and, at the same time, to scrutinize both their own work and that of their fellow researchers. Although these guidelines are helpful, I believe that more is needed. Table 1 provides a list of problem statements along with possible approaches to addressing them. For instance, colleges and universities need to better prepare their students for research careers by teaching them ‘good’ statistical methods. Additionally, universities could provide better resources to students and faculty by employing statisticians as consultants. Table 1 lists numerous other suggestions, all aimed at improving research as a whole.

REFERENCES

Cumming, G. (2014). The New Statistics: Why and How. Psychological Science, 25(1), 7–29.

Gelman, A. (2017). Measurement error and the replication crisis. Statistical Modeling, Causal Inference, and Social Science.

Gelman, A. (2016). What has happened down here is the winds have changed. Statistical Modeling, Causal Inference, and Social Science. Retrieved from http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–350.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e128.

The Economist (2013). How science goes wrong. The Economist, 409(8858), 3 & 26–30.
