Why Scientists Feel Dumb
Or: the crushing inevitability of imposter syndrome
A few weeks ago I was reading up on a niche, esoteric topic, a way of analysing large data-sets so achingly obscure that a reader of The Spike reasonably claimed it didn’t exist. Pausing briefly for thought, that familiar feeling crept in: I don’t understand this — how dumb am I?
Then it struck me: Scientists are doomed to feel dumb. A scientist’s job is research. Research is, by definition, learning. It is finding out what you don’t know. And doing that for a job means being reminded every day, in every way, that there are things you don’t know, don’t understand, or both.
And there are so many ways.
Experiments are an exercise in confidence leeching. Stuff will go wrong, you will screw up, you will break stuff. The most advanced thing I’ve ever done with my own two hands is attempt to train a rat in a box to press a lever. It didn’t. I failed even at that basic task. The thought of trying to lower a piece of glass roughly the thickness of a human hair through the brain until it touches the surface of a single neuron’s body, without breaking the glass, the neuron, or your own mind in the process doesn’t compute. Getting good data can be a process of confidence-sapping, head-banging floundering.
Analysing the data is even worse. Statistics is an exercise in keyboard-snapping frustration, compounded by most scientists’ truly terrible training in statistics, which is no fault of their own. Which were you trained in: the cookbook; the t-test/ANOVA solves all; or absolutely bugger all? Few people deeply understand the principles behind the statistical analyses they do. Even fewer use them correctly; judging by the torrents of anger statisticians aim at each other, that includes most of them too. The already low confidence of most working scientists faced with doing statistics is now compounded by the many high-profile pronouncements that we’ve all been doing it wrong for decades. But with no clear guidance on what to do instead: p-values but no significance level; p-values but more stringent significance levels; no p-values but confidence intervals; no p-values but effect sizes; none of that Fisher or Neyman-Pearson rubbish, use Bayes (factors) — for a different arbitrary number scale to interpret instead. Oh for the heady days of Rutherford’s dictum “if you need statistics, you’ve done the wrong experiment”.
Reading scientific papers is worse. To know what is known, we have to read the literature. Every new research paper we read reveals to us something we didn’t know before. Broaching a new research topic — say the sub-unit composition of the GABAb receptor, the response of dopamine neurons to reward or lack thereof, the algorithms of hierarchical clustering — is like swallowing a firehose of our own ignorance.
The mere existence of the literature is worse. Science is a crushing flood of papers. More papers are published in your own research field than you can ever read. And more are published every year, every month, every day. You can never catch up. The scale of your ignorance writ large by your PubMed and Google search results.
And finally, there are your peers. Scientists spend much, perhaps all, of their time with other scientists. This is not healthy. Other scientists are smart. They know things you don’t know; can do things you can’t do; can understand things you can’t understand. All of them. Thousands of them. Go to the annual Society for Neuroscience meeting and stand in the half-mile long poster hall, and there are about 10,000 scientists in that single room who know things you don’t.
Put all these together and, whether you’re explicitly aware of them or not, they are calculated to make you feel dumb. Indeed, written down, this list reads like a well-honed recipe for how to maximise imposter syndrome, the nagging doubt that you are not qualified, not deserving to be where you are. Perhaps not surprising then that imposter syndrome is considered rife in academia.
(And compounded by the Dunning-Kruger effect, the evidence that in many cultures smart people systematically underestimate their abilities. In the UK no doubt this is partly through politeness.)
Fellow dummies, what’s the solution? One springs to mind: you are “the other scientist”. The one who knows things others do not; the one who can do things others cannot; the one who understands things others do not.
Perhaps you know how to properly use multiple comparisons corrections. Properly. Not just slapping an archaic, pre-hand-cranked-calculator, Bonferroni correction on everything.
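For readers who want to see the difference rather than take it on faith, here is a minimal sketch contrasting the blunt Bonferroni adjustment with the Holm step-down procedure, which controls the same family-wise error rate but is uniformly more powerful. The p-values are invented purely for illustration; in practice you would reach for a statistics library rather than hand-rolling this.

```python
def bonferroni(pvals):
    """Classic Bonferroni: multiply each p-value by the number of
    tests, capping the result at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down: sort the p-values, scale the k-th smallest
    (0-indexed) by (m - k), and enforce monotonicity so adjusted
    values never decrease as the raw p-values grow."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# Four hypothetical tests from one experiment:
pvals = [0.01, 0.04, 0.03, 0.005]
print(bonferroni(pvals))  # [0.04, 0.16, 0.12, 0.02]
print(holm(pvals))        # every value <= its Bonferroni counterpart
```

Holm rejects everything Bonferroni rejects and sometimes more, at no extra cost, which is one reason statisticians wince at reflexive Bonferroni corrections.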
You know how to get a stable recording of a microscopic single neuron in the tiny brain of a titchy rodent using a piece of glass so small ants would use it as a light fitting.
You can create a beautiful sentence. Construct a compellingly argued review. Write good like what them book people do.
You understand how the energy drain of using neurons places powerful limits on what brains can do, or how they operate. Or the perplexing chain of molecular events that lead from a dopamine molecule locking into its receptor to the appearance of AMPA receptors somewhere else on the neuron. Or the cascade of events that leads from two colliding neutron stars to the quantum-level flicker of a four kilometre long laser beam.
You understand how to use the University’s computing cluster. Properly, without running a script directly on the head node and blocking all other users (yes, we’re looking at you, bioinformatics).
You know, understand, and can do stuff unique to you.
And another answer: acceptance. Feeling dumb as a scientist is inevitable. But it is in trying to conquer that feeling that we do our best work, and have our best insights. For some of the things you don’t know, no one knows. So in finding them out, you push science forward. After all, the most powerful words in science are: I don’t know.