Professor Dennis A.V. Dittrich on the purpose of scientific & business experimentation

Arjan Haring
I love experiments
8 min read · Jun 21, 2016


Dr. Dennis A. V. Dittrich is a full-time faculty member and Professor of Economics at Touro College Berlin.

Professor Dittrich received his graduate degree in Economics from Humboldt-University of Berlin and his doctorate from Friedrich-Schiller-University Jena. His expertise spans experimental economics, behavioral economics and finance, personnel and labor economics, game theory and mechanism design, and economic psychology.

In his research he pays special attention to the heterogeneity of economic agents and demographic changes in society. His research interests include the heterogeneity and stability of preferences, in particular social preferences. He investigates the interaction of these heterogeneous preferences with economic and social institutions and their effect on strategic interaction and individual decision making under ambiguity, uncertainty and risk, and their application to intra- and inter-firm relations, the design of economic institutions and social policy.

I also recommend following Professor Dittrich on Twitter, where he shares a wealth of information. Let’s start the interview!

I think it was Al Roth who spoke about economic experiments as part of three big conversations: “speaking to theorists,” “searching for facts,” and “whispering in the ears of princes” (Roth, 1995a). What kind of purposes do you see for experiments?

By quoting Al you have already given a very good answer yourself.

When I was studying economics there were three mandatory tracks and two elective tracks that you had to choose for your degree and collect a certain number of credits in. The mandatory tracks were economic theory, economic policy, and public finance. Among the economic theory courses that were offered was one called experimental economics. I took that course.

In that course, you had to run your own experiment and you had to analyze the data that you had generated, so you had to have a research idea, a hypothesis, and you had to test it with real data. At that time, I found it strange, maybe ironic, that one of the most hands-on courses that I took was offered as an economic theory course.

Now I think differently.

It is all part of the scientific method. You make observations that lead you to a hypothesis. You then need to confront that hypothesis with the real world, and experiments are one way to do this.

Then you either modify your hypothesis and repeat, or, if your tests consistently confirm the hypothesis, you can call it a theory. Unless you are working in pure mathematics and can prove your hypothesis, a theory needs to withstand empirical testing.

Economic experiments contribute to this process of scientific knowledge generation at two different stages. They can stand at the beginning: they can lead us to invent new hypotheses when we observe something new or unexpected. That may happen in an exploratory experiment that we run because existing theory does not give us enough guidance to make a testable prediction for a new context, or through an unexpected observation in an experiment that was designed to test something different.

And then, of course, experiments also contribute to the knowledge-creating process when we explicitly test a hypothesis. Hence, experimentalists are theorists.

Finally, and this addresses the “whispering in the ears of princes,” experiments also inform policy makers. A policy maker can fine-tune a specific policy or mechanism design, as was done, for example, with the spectrum auctions to increase government revenues. They can also assess the efficacy of a specific policy program, something we now see more often in development economics, as evidenced by the many randomized controlled trials.

Experiments, whether in the laboratory or in the field, have become an essential tool in business, economics, the social sciences more generally, and policy making.

Do you like what you have read so far? Get a quarterly update of what I am busy with.

Connecting experiments to the larger idea of doing research and searching for facts: how do the different kinds of experiments fit in the research toolbox, and how do they compare to other research methods such as surveys?

Surveys and experiments are each just one tool in the social scientist’s toolbox. For a very long time, many economists opposed the idea that experiments are possible in economics and that they can contribute to economic research, even though experiments were already contributing successfully.

Now we have laboratory experiments, field experiments, and lab-in-the-field experiments, where the actual target group of a policy change takes part in a laboratory experiment.

Similarly, economists may be skeptical about surveys. There is social desirability bias: people answer what they believe they should answer instead of what they truly think. There is an intention-action gap in self-reported measures. Respondents may not even understand the questions. Why would anyone answer honestly? Still, research using surveys is an essential part of, for example, sociology and psychology, fields that also employ experiments.

Researchers in these fields have learned how to deal with this kind of data. Survey data may be rather noisy, but survey studies are much cheaper than fMRI studies, in which we would see the honest brain response, and they are also cheaper and easier to implement than similarly large-scale experiments that would let us observe behavior in a controlled environment.

All these data-gathering research methods are complements; each is particularly suited to a specific type of assessment. Combined, they make a very powerful toolbox.

You can, for example, add survey measures to a lab experiment. If behavior in the experiment correlates with these survey measures, a large-scale survey may then allow you to make better predictions about the same behavior in the larger population.

Here I think it’s also very interesting to understand the difference between academic and business experiments. Although the lines are getting blurrier every day, there is still an obvious distinction in sampling; take student samples versus real target groups, for example. What are your ideas on the validity of the different types of experiments?

The different experiments serve different purposes: laboratory experiments with students, field experiments with the real target group, and artefactual field experiments, which are simply lab experiments with non-standard participants, that is, the real target group rather than students.

The lab experiment exists to test a hypothesis. A theory that is supposed to be generally true must also be true in the very specific environment of a laboratory full of students. In this sense, a well-designed and professionally run lab experiment is perfectly valid.

We have less control over the world outside the lab. Context matters. Idiosyncratic preferences may matter. Prior experiences may matter. Group norms may matter. All the factors that we try to eliminate in the lab to identify the causal effect of just one variable on the outcome may matter outside the lab.

To assess the strength of a causal relationship that we have, in principle, established in the laboratory, or to fine-tune a newly designed mechanism or policy, it may be necessary to experiment with the real target group, and perhaps also in their natural environment.

As you might know, my last startup owed its existence to heterogeneity in the effects of persuasion. How do you see mean effects of treatments versus heterogeneity in effects?

That is a fascinating part of research in the social sciences: we are not perfectly deterministic in our behavior. In contrast to an atom, we have free will, and this makes individual human behavior much noisier than the behavior of inanimate matter in a physics experiment.

Even though we may have identified a causal link between a variable and our behavior, and therefore an outcome, the strength of that relationship may differ between individuals, between us today and us at another time, and between contexts. Other factors may influence the strength of that causal link.

I believe heterogeneity in treatment effects is vastly under-researched. Concentrating only on mean effects is a remnant of the concept of a representative agent, an idea that dramatically simplifies many models but may also dramatically mislead.

As a simple example, consider the existence of just two types. Given the same information, say average energy consumption in the neighborhood, one type may decrease and the other may increase their energy consumption.

Your policy goal is to decrease energy consumption. Depending on the relative shares of the two types in the population, the mean effect can be negative, zero, or positive, and on that basis you would recommend either giving or not giving the information on average energy consumption that causes the change.

If, however, you identify the heterogeneity you may be able to implement a more targeted policy and decrease energy consumption by a much larger amount.
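A toy calculation, with made-up effect sizes and population shares, makes this sign flip concrete: the same two opposite responses produce a positive, zero, or negative mean effect depending only on the mix of types.

```python
# Hypothetical per-type treatment effects of showing neighborhood-average
# energy consumption (kWh change per household; numbers are illustrative).
effect_type_a = -2.0   # type A reduces consumption
effect_type_b = +1.5   # type B increases consumption

def mean_effect(share_a):
    """Population-average treatment effect for a given share of type A."""
    return share_a * effect_type_a + (1 - share_a) * effect_type_b

for share_a in (0.2, 3 / 7, 0.8):
    print(f"share of type A = {share_a:.0%}: mean effect = {mean_effect(share_a):+.2f} kWh")
```

With 20% type A the mean effect is positive (the information backfires on average), at a share of 3/7 it is exactly zero, and at 80% it is negative, even though the individual responses never change. A targeted policy that shows the information only to type A avoids the backfiring entirely.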

And to finish, I would like to see where this field is headed. You are also big on methodology. Can you give an update on recent developments, and what do you think are relevant changes that will also affect business experimentation?

Experimentation in business has driven the field historically; just think of Gosset and the t-test for small, cost-effective samples. Optimal experimental design and effective experimentation also seem to be driven by the quest for better products, including drugs and other medical treatments, and their cheaper production.
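The small-sample setting Gosset faced can be sketched in a few lines. This is a minimal one-sample t-statistic on hypothetical measurements, using only the standard library; real analyses would use a statistics package to get the p-value as well.

```python
import statistics

# Hypothetical small sample of quality measurements (n = 6).
sample = [5.1, 4.8, 5.4, 5.0, 4.7, 5.3]
mu0 = 4.5  # hypothesized population mean under the null

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
t_stat = (mean - mu0) / se                # Student's t with n-1 df

print(f"t = {t_stat:.2f} with {n - 1} degrees of freedom")
```

The point of Student's distribution is precisely that with n this small, the normal approximation for the sample mean is unreliable, so the t-statistic is compared against heavier-tailed critical values.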

In academia I now observe an increase in the number of meta-analyses: the reanalysis of a larger number of smaller experiments on the same topic, to obtain a more precise and perhaps more trustworthy estimate of effects, if any effects remain, and to identify influencing factors that may have been ignored so far because their impact was too small in any individual study.
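The core of such an aggregation can be sketched with a fixed-effect, inverse-variance pooling of hypothetical study results: more precise studies get more weight, and the pooled estimate is more precise than any single study.

```python
import math

# Hypothetical studies: (effect estimate, standard error) each.
studies = [
    (0.30, 0.15),
    (0.10, 0.20),
    (0.25, 0.10),
]

# Inverse-variance weights: precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```

Note the pooled standard error is smaller than that of the best individual study. A full meta-analysis would also test for heterogeneity across studies before trusting a single pooled number, which connects back to the earlier point about heterogeneous effects.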

Big data, large happenstance data sets, may not be as helpful as some hope. It may be too hard to tease out robust causal relationships from such data. Therefore, I would guess that businesses will keep running experiments and will also aggregate their smaller studies more often. This is simply a matter of cost-effectiveness.

In this context, I can also imagine an increase in the popularity of Bayesian methods. They seem the more natural approach for including prior knowledge in the analysis of new data, and for identifying and quantifying heterogeneity.
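A minimal sketch of what "including prior knowledge" means, using a conjugate Beta-Binomial model on made-up conversion data: the prior counts and the new data simply add up to give the posterior.

```python
# Prior belief about a conversion rate, encoded as Beta(3, 7):
# roughly "we think the rate is around 30%, based on past studies".
prior_a, prior_b = 3, 7

# New experiment (hypothetical): 18 conversions out of 60 visitors.
successes, failures = 18, 42

# Beta-Binomial conjugacy: the posterior is Beta(prior + data counts).
post_a = prior_a + successes
post_b = prior_b + failures

posterior_mean = post_a / (post_a + post_b)
print(f"posterior mean conversion rate = {posterior_mean:.3f}")
```

The same machinery extends to heterogeneity: hierarchical Bayesian models estimate a distribution of effects across segments rather than a single mean effect.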

Computing power is now abundantly available, so there is no reason to stick to frequentist methods alone for the analysis of data. This implies, of course, that statistical methods, and people trained in them, will not just remain important for business; their importance will increase.



designing fair markets for our food, health & energy @seldondigital - @jadatascience - @0pointseven