The three most powerful words in science

What keeps us honest?


Blimey, you’re thinking (if you’re British), bit bold isn’t it? Go on then, what are the three most powerful words in science?

Honestly, I don’t know.

WHAT!?! You come in here, pretending to know the deep secrets of science and you don’t know? Why, if I catch you, I’ll stick a…

Woah there! Sorry, let me punctuate that properly:
Honestly, the three most powerful words in science are: “I don’t know”.

This simple, seemingly benign phrase is at the root of science. It shows us the edge of knowledge; it shows us how good our evidence is; and it keeps science honest. Applied throughout science, “I don’t know” is a powerful weapon. Let me show you.

The edge of knowledge is found whenever the answer to a scientific question is “I don’t know”.

These questions don’t have to be complicated. Some of our best insights have come from the simplest questions. A scientist may ask themselves: What happens if…? Or: What does that do? Or: Why did that happen? Some classics of this genre include:
Why did that apple fall down, not up? 
Why did those finches have different beaks?
What happens if I try and catch up to a beam of light?
And the classic scientist question: why can’t I find two identical socks?

“I don’t know” also pushes the boundaries of what science is capable of. It makes us build new tools to do science. We may ask “is it possible to measure neuron activity using only a video camera?” (This is a thing we wanted to do so that we could look at lots of neurons at the same time, and know exactly where they are.) The answer was: I don’t know. So a group of neuroscientists set about showing it was possible. And now we can routinely record the activity of hundreds of neurons using a video camera. (Zoomed in a lot.) Then, with new tools, we can ask new questions to which the answer is “I don’t know”.

Most importantly, “I don’t know” gives us new ways of thinking. We may always ask: is this the only way of looking at these facts? The answer “I don’t know” can lead us to revolutionary new ways of thinking about the world. Take the idea that the Earth is a single, unfathomably complex system of interlocking interactions between atmosphere and sea, rocks and plants, animals and insects, and everything in between. This idea was born from Alexander von Humboldt wondering whether there was a different way of looking at the separate facts he had collected about plants and moss from across different parts of the world. He realised his facts spoke of a single continuum of plant life that reached across the world in one, interlinked system. He called it “Naturgemälde”. Such ideas reached their zenith in Lovelock’s Gaia hypothesis. Thanks to the all-pervasive influence of humans on the Earth and on its climate, we now take the Earth-as-a-system idea as our starting point for any discussion, without thinking about it. Yet before Humboldt this idea didn’t even exist. His “I don’t know” forever changed our way of thinking about the Earth.

There are few certain things in science. Those few certain things have come about because huge numbers of scientists have answered “I don’t know” to the question: is the evidence good enough for this idea? They went out and found more evidence, and different kinds of evidence, and mathematical theories to stitch that evidence together. Of those few certain things, let’s name three: the theory of evolution by natural selection; the theory of relativity; and man-made climate change.

For everything else, the question “is the evidence good enough?” has to be answered with “I don’t know”. This is a reminder that all scientific results have to be treated with a healthy scepticism. If scientists forget that scepticism, and instead treat each paper’s results as sacrosanct, then we end up with a “reproducibility crisis”: we are on a crash-course for disappointment.

To check if the evidence is good enough, we have two options. We can check existing results more thoroughly, by using better experimental methods, or larger samples, or more accurate analyses — or all three at once.
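As a toy illustration of why larger samples make for better evidence (my own sketch, not drawn from any particular study), here is the same true effect estimated with small and large samples. The small-sample estimates swing wildly; the large-sample estimates agree with each other.

```python
import random
import statistics

random.seed(42)

def measure_effect(n):
    """Estimate a true effect of 0.5 from n noisy measurements."""
    samples = [random.gauss(0.5, 1.0) for _ in range(n)]
    return statistics.mean(samples)

# Ten replications of the same experiment at each sample size
small = [measure_effect(10) for _ in range(10)]
large = [measure_effect(1000) for _ in range(10)]

print("n=10 estimates:  ", [round(x, 2) for x in small])
print("n=1000 estimates:", [round(x, 2) for x in large])
# The n=10 estimates scatter widely around the true value of 0.5;
# the n=1000 estimates cluster tightly around it.
```

Same underlying truth, very different reliability: only the sample size changed.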

Or we can go and find different evidence, independent evidence, and check if it comes to the same conclusion. We can test for man-made climate change by measuring atmospheric CO2 or atmospheric methane or ocean acidity or global temperatures or glacier retreat or Arctic ice loss or sea levels or the frequency of storms or the intensity of rainfall or wind patterns or the start of spring or migration patterns of birds or coral bleaching or… Thousands of scientists, in wildly different fields of research, have checked if these have come to the same conclusion. And they have: “the thing I measure has changed, and changed fast. Too fast.”
All this driven not by ideology, but by answering “is the evidence good enough?” with “I don’t know”.

The final power of “I don’t know” is in keeping science honest. When we read a technical paper we often ask: how did they do that? Or why did they do that? If the answer is “I don’t know”, then that immediately raises a little red flag. Just a little one, mind you — we could always have missed something, or just not be equipped with the intellectual tools to fully grasp the hows and the whys.

(Not that we’re dumb; we’re just out of our comfort zone. For example, when I read an astrophysics paper in Nature, there are many questions about it for which my answer is “I don’t know”. But that’s because I’m not an astrophysicist. So it’s about the same as my 3-year-old reading a Heston Blumenthal recipe — “daddy, what’s a sous vide?”; “I’ll tell you when you’re older. Great French pronunciation by the way”.)

But when a whole bunch of those “I don’t know” little red flags go up, then something is amiss. Key questions here include:
Why did they choose that statistical test?
How did they do that statistical test? 
How did they choose their data?
How did they do that control?
Er, is that Western Blot the same as that one? (I mean, PubPeer does have things on it other than suspiciously duplicated lanes, right?)

Recently, the work of Cornell’s Food and Brand lab has been torn to ribbons in precisely this way. The lab’s leader wrote a blog post that raised a little red flag about the way his lab analysed their data. It all seemed suspiciously like a bunch of “Just So” stories: dig around for a difference between groups of diners at an all-you-can-eat pizza buffet that was “significant”, then work out a story of what could have made that difference. So a group of scientists read some of his lab’s papers, and asked the questions above (not the Western Blot one, obviously: when psychologists start doing their own Western Blots, science really will be neck-deep in problems). And those “I don’t know” red flags went up so fast that it was lucky none of them lost an eye. Here they list 150 problems in just 4 papers. Then they found some more in a different set of papers. And some more. And now there’s a whole diagram of the problems. All because someone asked “have these data been analysed correctly?” and they answered “I don’t know”.
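Why is digging around for a “significant” difference so dangerous? Because test enough comparisons and some will come out “significant” by chance alone. Here is a hypothetical sketch (pure simulated noise, no real data, and a crude two-standard-errors rule standing in for p < 0.05): slice up diners who are, in truth, identical, in 200 different ways, and count the false alarms.

```python
import random
import statistics

random.seed(0)

def looks_significant(n=30):
    """Compare two groups drawn from the SAME distribution.

    Returns True when the difference in group means exceeds roughly
    two standard errors -- a crude stand-in for "p < 0.05".
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) > 1.96 * se

# 200 different ways of slicing up diners who are, in truth, identical
hits = sum(looks_significant() for _ in range(200))
print(f"{hits} of 200 comparisons look 'significant' by chance alone")
```

Roughly 5% of comparisons will cross the threshold despite there being nothing to find. Dig around enough, and there is always a “Just So” story waiting.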

Which brings us neatly to narcissism. It would help a few scientists if they said “I don’t know” more often. Those who assume they are right, and are never unsure. Those who refuse to acknowledge the gaps in their knowledge or skills. They are the most concerning. Assuming they are right, they write eviscerating reviews of papers and funding applications. Assuming they know everything, when they do think “I don’t know”, they assume it is the fault of the speakers or authors — not them. Assuming they have no gaps in their skills, they blunder in their analyses.

Such self-confident researchers can be intimidating. They can make other researchers question themselves, wondering why they are unsure about their research, their evidence, their analyses and experiments. But to me, being unsure is the engine of the scientific enterprise. Saying “I don’t know”: it’s not a weakness, it’s science’s universal acid.

Want more? Follow us at The Spike

Twitter: @markdhumphries