How good management leads to better science with Daniel Lakens

Arjan Haring
I love experiments
7 min read · May 1, 2016


Daniel Lakens is Assistant Professor in Applied Cognitive Psychology in the Human-Technology Interaction group at Eindhoven University of Technology. His main lines of empirical research focus on conceptual thought and meaning, behavioral synchrony, and color psychology.

Professor Lakens also publishes on research methods, (meta-)statistics, and reward structures in science. He loves to teach, especially research methods to young scholars.

He prioritises review requests based on how well the articles adhere to Open Science principles.

He believes science is a collaborative enterprise.

Be sure to follow Professor Lakens on Twitter.

To start off: you are an experimental psychologist. That means you know a lot about experiments, right? And you know a lot about psychology. But in my experience, experimental psychologists often also know a lot about statistics, and you are no exception. Could you explain the connection?

The goal of experimental psychology is to make causal statements about human behavior. A strength of experiments is randomization: you can vary the one thing you are interested in and keep everything else constant. Statistics and methods are an important part of learning to become an experimental psychologist. So on the one hand, you are taught a lot of statistics. On the other hand, the level of understanding often leaves a lot to be desired.

Psychologists often have a rather superficial understanding of the statistics they calculate. They commonly misunderstand the meaning of p-values, for example, even though they largely rely on p-values to make statistical inferences about their data. A deeper understanding of statistics is now acknowledged to be an important skill, and many journals in experimental psychology regularly publish articles about applied statistics.
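The misunderstanding mentioned here is worth making concrete. A p-value is the probability of observing data at least this extreme *if* the null hypothesis is true, not the probability that the null hypothesis is true. A minimal simulation (my illustration, not from the interview; the sample sizes are arbitrary) shows the defining property: when the null is true and the test is used correctly, p-values fall below .05 about 5% of the time.

```python
# Illustration: under a true null hypothesis, a correct test produces
# p < alpha in roughly alpha of all experiments -- that is the only
# guarantee a p-value gives. All numbers here are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(n_sims=2000, n_per_group=30, alpha=0.05):
    """Run many two-group experiments where the null is TRUE (both
    groups drawn from the same distribution) and count how often
    p < alpha. For a well-behaved test this is close to alpha."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(0, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_sims

rate = false_positive_rate()
print(f"False positive rate under the null: {rate:.3f}")
```

The simulated rate hovers around the nominal 0.05, which is all a p-value promises; it says nothing about how probable the hypothesis itself is.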

I never found statistics interesting as a student, but in recent years, I have gained a new appreciation for gaining a better understanding of statistics. It’s a fun challenge, if you take the time for it, and it really helps you to become a better researcher. I think a lot of psychologists feel similar about this, so there is a nice group of psychologists writing articles and blogs about statistics.

And then the replication crisis happened. What was your role in it?

My role was twofold.

First, like many other experimental psychologists, I contributed to it by publishing research that, looking back, was not as robust as I had thought. For example, my first ever publication (Jostmann, Lakens, & Schubert, 2009) was a set of studies in a high-impact journal that, I now realize, suffered from flexibility in the data analysis and inflated alpha levels. We recently looked back at these studies, and we are no longer convinced there is sufficient evidence for the idea (Jostmann, Lakens, & Schubert, 2016).
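"Flexibility in the data analysis" inflating alpha levels can be demonstrated with a short simulation. One common form of such flexibility, sketched below as my own hedged illustration (not a reconstruction of the studies discussed), is optional stopping: peeking at the data after every batch of participants and stopping as soon as p < .05.

```python
# Illustration of alpha inflation through optional stopping: the null
# is true, but repeatedly testing the growing sample and keeping the
# first "significant" result raises the false positive rate well above
# the nominal 5%. Batch sizes and look counts are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def inflated_alpha(n_sims=2000, batch=10, max_batches=5, alpha=0.05):
    """Both groups come from the same distribution (null is true),
    but we test after every batch and stop at the first p < alpha."""
    false_positives = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, batch * max_batches)
        b = rng.normal(0, 1, batch * max_batches)
        for k in range(1, max_batches + 1):
            _, p = stats.ttest_ind(a[: k * batch], b[: k * batch])
            if p < alpha:
                false_positives += 1
                break
    return false_positives / n_sims

rate = inflated_alpha()
print(f"Effective alpha with five interim looks: {rate:.3f}")
```

With five interim looks the effective alpha lands well above the nominal 0.05, which is why undisclosed analytic flexibility makes published "significant" findings less robust than they appear.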

At the same time, I have been interested in ways to improve the quality of psychological research since about 2010, when I received my PhD. Early on, I was especially interested in convincing people to do more replication studies, which were very difficult to publish (for example, Koole & Lakens, 2012).

Around 2012, Brian Nosek invited me to co-edit a special issue filled exclusively with pre-registered replication studies, which came out in 2014 (Nosek & Lakens, 2014). I continue to be amazed that it has taken us so long to start to think about how to do replication research in psychology, given that it is a cornerstone of an empirical science.

We are rapidly learning, but there is still a lot of discussion on how to do this best. I participated in a large-scale replication project in which 100 studies were replicated (Open Science Collaboration, 2015), because I think researchers should actively contribute to improving the robustness of psychological knowledge.

More recently, I have been especially interested in giving researchers practical advice on doing better research (Lakens, 2013, 2014).

Do you like what you have read so far? Get a quarterly update of what I am busy with.

What does it mean for science?

The current developments are hugely important for science in general.

Psychology is spearheading improvements in the way science is done, making it more robust, transparent, and reproducible. We see other fields are paying close attention, and changes that are introduced in psychology are now spreading to other disciplines.

If you look at the history of science, science has been in continuous crisis, so I’m not tempted to become too dramatic about current developments. There is a lot of room for improvement, but in 20 years, young scholars will tell me all the things I have been doing wrong for 20 years. Science progresses, and it would be silly to assume we are currently at the peak of understanding how to do good science.

As long as you try to continuously improve, that’s the best you can do.

Another question on the replication crisis: Why do you think it became so personal?

I am probably the worst person to ask. The idea that science is personal, or that you could tie your ego to the outcome of your work, is something almost incomprehensible to me.

Some people consider Mertonian norms such as disinterestedness and organized skepticism as unrealistic ideals, but I personally find them a good reflection of how I work. All my work is flawed, and most of it will not stand the test of time. If people take the effort to show how I was wrong, I appreciate it.

Sometimes I feel people criticize my ideas unfairly — in such circumstances, I have always been able to talk about it, and the disagreement always boils down to a simple misunderstanding or just a slightly different focus.

But not everyone feels like this, and science is a social endeavor, so we should take these feelings seriously. I think there is a lot of uncertainty, and this drives all sorts of responses. Some people lose their motivation for science. Others feel personally attacked. The only way to get over this is to commit to continuously trying to improve yourself.

A better understanding of statistics helps here as well. If more people realized how difficult it is to gain knowledge, and how variable and uncertain the conclusions of the average scientific publication are, they would see that we should not become too attached to anything we have done.

Following up on things getting personal. Experimenters often have to endure quite a bit of push-back from the organization they work for. How can experimentation and getting it wrong become a positive thing?

Predicting what will lead to the best science is difficult; currently, I don’t think we really know how to create an incentive structure that works optimally. As a consequence, many different people are trying to do science in different ways.

There are some problems in psychology on which there is practically universal agreement, such as the low informational value of many studies, often due to small sample sizes.
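"Low informational value" has a precise statistical face: small samples give low power, so a real effect is detected only a fraction of the time. A hedged sketch (my numbers, not from the interview; a medium effect of d = 0.5 and the two sample sizes are assumed for illustration):

```python
# Illustration of statistical power: with a true effect of d = 0.5,
# a study with 20 participants per group detects it far less often
# than one with 64 per group. Effect size and ns are assumed values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(n_per_group, d=0.5, n_sims=2000, alpha=0.05):
    """Simulated power: fraction of experiments in which a true
    effect of standardized size d yields p < alpha."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(d, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_sims

power_small = power(20)   # small study: misses the effect most of the time
power_large = power(64)   # near the conventional 80% power target
print(f"Power with n=20: {power_small:.2f}, with n=64: {power_large:.2f}")
```

The small study finds the true effect only about a third of the time, so a single such study tells us very little either way; this is the sense in which underpowered studies have low informational value.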

If you try to improve the way you work, managers need to understand this will come at a cost. Being a highly productive crappy scientist is much easier than being a highly productive good scientist. If managers don’t understand the difference between crappy science and good science, that’s a big problem.

I don’t think prestige and quality are strongly related in science at the moment. If your manager mistakes prestige for quality, you will have a hard time (see the coda in Lakens & Evers, 2014). But in the next few years, a realignment process will take place. We will slowly get more realistic ideas about what good science is, and how much work it is, in psychology.

Another thing I see you involved in is open science. Why is this important?

As scientists paid by taxpayers, we have a moral obligation to spend their money as well as possible and give them as much knowledge as possible. Open Science is an easy way to achieve this. Sharing all data, materials, and publications openly is much more efficient than keeping these things to yourself. To me, this is really a no-brainer. I find it very difficult to justify closed science.

And how do you think incentive structures could help science move forward? How are you going to test that?

It’s important to realize that we are all part of the incentive structure. There is a lot we can do, now, to help move science forward, without waiting for anything.

For example, citations are a very valuable commodity in science, and you are pretty much free in what you cite. Be a good reviewer, with realistic expectations of what real data looks like. If you are in a position where you hire people, hire good people, and if you manage people, clearly communicate your viewpoint on what you think will move science forward.

So far, one way I have been part of changing the incentive structure is by convincing the Dutch science funder NWO to fund replication research; they will hopefully issue a call for proposals in late 2016.

I’d love to spend more time in the future to examine how we can improve science further. Meta-science seems like a very worthwhile research topic to me, and so far, we have surprisingly little empirical data on how to improve science.


Arjan Haring
designing fair markets for our food, health & energy @seldondigital - @jadatascience - @0pointseven