I am back from the centennial meeting of the Ecological Society of America. I met a lot of great people, saw a lot of great talks, and had lovely discussions. One thing that has been in the back of my mind for a while though, is the question of how much methodology should go into an oral presentation?
Methods are important — over the last two years I have found that this is the part of papers I criticize the most during peer review. Any result is only as robust as its least robust element, and in ecology there are enough sources of variability that we do not want the methods to add any more. As a consequence, appreciating a result and its robustness requires that we be able to understand and evaluate the methods by which this result has been obtained.
There are a few elements to evaluating a method. Does it rely on a sound and tested theory? Is it properly applied? Is the method correctly implemented? All of these questions (and some more) should be asked — and answered in the affirmative — before we decide to accept a result. If not, we are putting ourselves in the position of blindly accepting what we are being told.
Let me illustrate. Imagine that I say, or write, “I modelled the growth of this population using a Leslie matrix to estimate the age distribution”. What I really mean is “I have a population of interest. I have correctly determined biologically meaningful age classes. I have a correct estimate of the number of individuals in each class. I know, or can estimate, the rate of transition between these classes, and their fecundity. I assume that these rates do not depend on other parameters. I assume that linear algebra, which I used to estimate the distribution of ages, is correct.” Each one of these clauses has to be evaluated in the same way, because the correct result (the distribution of ages in my population) can only be reached if all of these clauses are correct too.
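To make the example concrete, here is a minimal sketch of what that one sentence hides. All the numbers below (three age classes, the fecundities, the survival rates, the initial counts) are made-up illustrations, not real data; the point is that each entry of the matrix is one of the clauses above.

```python
# A hypothetical Leslie matrix for three age classes.
# Row 0 holds per-class fecundities; the sub-diagonal holds survival
# (transition) rates between consecutive classes. All values invented.
leslie = [
    [0.0, 1.5, 1.0],  # offspring produced per individual in each class
    [0.6, 0.0, 0.0],  # survival from class 1 to class 2
    [0.0, 0.4, 0.0],  # survival from class 2 to class 3
]

# Assumed (invented) initial counts per age class.
population = [100.0, 50.0, 20.0]

def project(matrix, vector):
    """One time step: multiply the Leslie matrix by the count vector."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Iterate the projection; the *proportions* converge to the stable age
# distribution, i.e. the dominant eigenvector of the Leslie matrix.
for _ in range(200):
    population = project(leslie, population)

total = sum(population)
stable = [n / total for n in population]
print([round(p, 3) for p in stable])
```

Every assumption in the paragraph above corresponds to a line here: get an age class, a rate, or an initial count wrong and the “result” printed at the end changes, even though the linear algebra itself is beyond reproach.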
Of course, some of the underpinnings of the methods we use are widely accepted. There is no need to re-explain linear algebra, calculus, or basic probability. Nonetheless, this leaves open the questions of “Is the method appropriate?” (which gives assurance that the appropriate thing has been done), “Has the method been applied correctly?” (which gives assurance that the appropriate thing has been done well), and “How does it work?” (which gives an indication of the possible biases).
For an example of why this actually matters, look no further than Christie Aschwanden’s entry in FiveThirtyEight: given the same data and the same question, it is not at all uncommon for different researchers to use different methods, and therefore reach different conclusions. Any evaluation of the quality of a result requires an evaluation of the quality of the method.
This brings me back to the question of how much time we should spend describing methods during an oral presentation. The average presentation, leaving time for discussion and for people to change rooms, is probably barely above 15 minutes. This is not much time to introduce the general context, build a narrative, and walk people through the implications of the results, let alone go into all of the details of the methods that have been used.
In fact, my own presentation uses a method that I present in two slides. The original presentation of this method (in PDF) took 42 slides and one paper with copious appendices! Had I wanted to give a full explanation of how it works, I would have run out of time, and used up the time slots of the two speakers who came after me.
Spending more time on the method can be boring for the audience (and there might be equations). Not spending any time at all (I’ve seen this) is effectively hand-waving. Spending a moderate amount of time (I’ve done this) is difficult. Saying which method was used is rarely very informative. It is better than nothing, probably, but it is not great.
As an audience member, this places me in a very uncomfortable position. On one hand, I really want to trust all of this new, intriguing, exciting science. On the other hand, I am not prepared to suspend my disbelief long enough to forget that a result is meaningless unless framed within the context of the methods used to reach it. And a conference talk is a very poor way to do the latter — even posters are better at it!
So what should we do? Many fields publish papers as part of the conference proceedings; this is one possible way. Unfortunately the tradition in life sciences is that conference proceedings have very little value.
Using something to supplement the talk itself would be good, though. Do we have something like this? Yes! We have preprints: preliminary versions of a manuscript that are nearly good enough to be submitted (or are, in fact, already under review), and that anyone can read.
So here is my proposal: give a big boost to talks that are accompanied by a preprint (or even a published paper). Highlight them in the program in some way, or require them for organized sessions. If we spend so much money on attending conferences, it is because we consider them an important part of scientific activity. It would therefore make sense that their content be held to the same rigorous standards of quality and scrutiny as the rest of our scientific production.
Talking is well and good, but show us some text.
I thank Chris Grieves from Methods in Ecology & Evolution for copy-editing.