Illustration: Created by Davide Bonazzi for Science magazine, “Research on research” | Disclaimer: All illustrations are included as an appreciation of the talent of science illustrators with reference to their name, the original source that commissioned the artwork (wherever available) or the portfolio of the artist

Public Mistrust of Science: Challenges & Opportunities

Konstantina Slaveykova
DataDotScience
13 min read · Sep 16, 2020


“Science is a way of thinking much more than it is a body of knowledge.” — Carl Sagan

We live in a strange time.

In less than a year NASA will launch its 2020 Mars rover mission, probing the possibility of habitable conditions on the Red Planet. Meanwhile, YouTuber Logan Paul is sending people to Antarctica on a “visionary expedition” which intends to “reach the end of the world and prove the Earth is flat”. Advances in science and medicine have helped us eradicate smallpox and rinderpest, and work is underway to eradicate polio, yaws, rubella and various parasitic worm diseases. Yet the World Health Organisation (WHO) reported 365,000 cases of measles in the first six months of 2019: an outbreak worse than in any year since 2006.

How can we be simultaneously making giant leaps forward and taking such counterintuitive steps back? What are the biggest challenges for public understanding of scientific findings, and how can we address them to improve public trust? A recent talk by Prof. Richard Arnold at Victoria University of Wellington left me pondering the topic.

Trust with the occasional spell of doubt

Despite a notable public divide on policy-related issues like vaccines, climate change and food science, surveys by the Pew Research Center and Gallup show that 72% of global respondents express trust in science. Even with recent controversies, long-term trust in the scientific community has remained stable for decades in the US.

But what about the rest of the population? What are the primary sources of doubt in scientific findings?

Infographic by John Manoogian III and Buster Benson

Misleading intuitions & the trap of cognitive bias

People often conflate their intuitions about the surrounding world with a benchmark for truth. The groundbreaking work of Amos Tversky and Daniel Kahneman in the 70s and 80s demonstrated that despite our assumptions of rationality, human decision-making is prone to a variety of thinking errors.

Contemplating every single action and decision we make is unsustainable, so we automate most of what we do through easy-to-navigate mental shortcuts (heuristics), or rules of thumb. They are useful when we want to conserve mental energy, but they can also mislead us and harden into systematic errors (cognitive biases).

Imagine you have recently seen many news reports on snowstorms and unusually cold weather fronts. This information will be more readily available in your memory (availability heuristic), so you may be sceptical of reports on the rise in average global temperatures. Doubt will be even stronger if you already believe, based on what you have read or discussed with others, that there is no anthropogenic influence on climate (confirmation bias). The more strongly you identify with this belief (especially if you tie it to your moral and political intuitions), the more likely you are to strengthen it even when you encounter multiple disconfirming pieces of evidence (backfire effect). If social belonging is essential to you and your tribe of choice has a definite position on the topic, you are even more likely to forgo specific evidence and support the view of the group (groupthink, bandwagon effect).

Cognitive biases are hard to override because they are automatic and intuitive (Kahneman calls this System 1 thinking in Thinking, Fast and Slow). Well-educated and numerically literate people are not immune to bias, which is part of why biased reasoning is so hard to admit to. Switching to deliberate logical thinking (System 2 in Kahneman’s terminology) is a cognitively demanding process: it is slower because it requires focus and control to inhibit intuitive responses and engage in deliberate, in-depth information processing.

Illustration: James Iles for An Adventure in Statistics

“Damned lies and statistics.”

Biases are the first line of defence against information we find counterintuitive. The second line is even harder to crack because it involves a specific set of hard skills: understanding statistics and numerical reasoning.

Public surveys show generally high trust in official statistics and their importance (78% of the public in the UK, 74% in New Zealand). But trust in official government-released data is not the same as trust in, or interest in, how those data were collected and analysed. Jim Grange at Keele University (UK) collected responses from 104 students, asking them to describe their thoughts about statistics. The sample was small and non-representative (only Year 2 psychology students), but the results capture a good general picture of popular attitudes: statistics are more often than not seen as confusing, complicated and difficult. And above all: boring.

This attitude often acts as a barrier to understanding and critically evaluating the reliability of scientific findings.

What do you think about statistics? Survey results (n = 104) Source: Research Utopia

Sports statistics are one of the few areas in which the general public regularly engages with descriptive statistics (quantitative summaries of performance) and sometimes with inferential statistics (sports bets involve probability inferences about future performance from an existing track record). Outside of economists and the occasional sports fanatic, people rarely have to make inferences about numerical data in everyday life, so they struggle to understand even some of its basic properties.

Take a hypothetical statement: “Boys on average scored higher on math problems in this test than girls”. The general public typically interprets this as “boys are better at math than girls” or counters with anecdotes: “two of my female cousins are at the top of their class in calculus and help out male students, so this is nonsense”. Average differences, however, say nothing about individual performance. There are both girls and boys at both ends of the distribution curve (performing well below and well above average), and there are massive individual differences moderated by factors other than gender. Examining data at this level of nuance is not an intuitive process even for descriptive statistics, and it is harder still for inferential statistics. As a result, public debates often end with people misinterpreting and misrepresenting the data, or furiously debating a fallacious, oversimplified version of what the research actually claims.
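To make this concrete, here is a minimal simulation sketch in Python (all numbers invented for illustration): two groups whose averages differ by two points, yet whose distributions overlap so heavily that the “lower-scoring” group wins nearly half of all random head-to-head comparisons.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scores: the group means differ by 2 points, but individual
# variation (standard deviation = 10) dwarfs the average gap.
boys = rng.normal(loc=52, scale=10, size=100_000)
girls = rng.normal(loc=50, scale=10, size=100_000)

print(f"average gap: {boys.mean() - girls.mean():.2f} points")

# Pair individuals at random: how often does the girl outscore the boy?
print(f"girl scores higher in {(girls > boys).mean():.0%} of random pairings")
```

With these made-up numbers the answer comes out around 44%: knowing someone’s group tells you almost nothing about how they compare to any particular individual from the other group.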

“If science was that reliable, why does it keep changing?”

Newly discovered information, improved research paradigms and technology often lead to changing, updating or completely replacing old models and theories. Many people misinterpret this as evidence that science is untrustworthy in the long run, but this misses the real point: science is not the content (a specific claim); it is the systematic testing of hypotheses, which leads to reliably reproducible knowledge.

Scientists grapple with really complex topics. You cannot understand a complex issue without trial and error. The goal of science is not to “get everything right” from the get-go. It is to establish a reliable process which incrementally self-corrects and improves our knowledge.

Total certainty is an illusion (ask Nassim Taleb if you are up to the challenge). Good science navigates uncertainty through a granular and painstaking process of empirically testing hypotheses. The expansion of knowledge is incremental and iterative: it includes making mistakes and learning from them. The value of science lies precisely in its being systematic and self-correcting.

Illustration: From the portfolio of illustrator Señor Salme

Changing conclusions are not a flaw: the scientific process is by definition designed to self-correct and deal with uncertainty incrementally. In other words, if you want to pursue the truth rather than go along with existing dogmas, you must update your knowledge in light of incoming evidence. Philip Tetlock and Dan Gardner call this process belief-updating. In Superforecasting, their brilliant take on numerical reasoning and the science of prediction, they point out that the core trait of good scientists, systematic thinkers and forecasters is the willingness to modify prior beliefs in response to reliable and consistent new evidence.
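Belief-updating has a textbook formalisation in Bayes’ rule. As a toy sketch (all numbers invented): suppose you hold a belief with 30% confidence, and new evidence arrives that is four times more likely if the belief is true than if it is false.

```python
# Toy belief-updating via Bayes' rule (all numbers invented).
prior = 0.30                 # confidence in hypothesis H before the evidence

p_evidence_if_true = 0.80    # how likely the evidence E is if H is true
p_evidence_if_false = 0.20   # how likely E is if H is false

# P(H | E) = P(E | H) * P(H) / P(E)
p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
posterior = prior * p_evidence_if_true / p_evidence

print(f"belief before the evidence: {prior:.0%}")      # 30%
print(f"belief after the evidence:  {posterior:.0%}")  # ~63%
```

A forecaster who updates this way neither clings to the prior nor lurches to certainty: the belief moves exactly as far as the strength of the evidence warrants.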

“Beliefs are hypotheses to be tested, not treasures to be guarded” — Philip Tetlock

Dealing with uncertainty

Illustration: Davide Bonazzi for eLife

“That’s just a theory, isn’t it?”

A common misconception is that the everyday meaning of a word is identical to its meaning as specialised terminology. The word schizophrenic is commonly used to describe a split personality (in fact a different condition with different associated symptoms). People use evolution as an umbrella term for deliberate improvement over time, yet evolution is random and not goal-directed. And constructs in post-modernist and humanities writing are not the same as constructs in statistics and science.

The double meaning of theory is especially prone to such errors, and it creates a lot of public confusion over findings considered highly robust within the scientific community. In everyday English, a theory is an attempt to explain a phenomenon; the attempt might or might not be accurate: it is, in a sense, an educated guess. In contrast, the scientific term theory means a testable explanation of the natural world based on empirical evidence.

One commonplace criticism of evolutionary science is: “This is just a theory. Gravity is real; that is why we have a law of gravity and just a theory of evolution”. Ironically, in scientific terms laws are simply statements which describe a given phenomenon, while theories go a step further and examine why the phenomenon occurs. Newton had a theory of gravity to conceptualise why the law of universal gravitation held.

Why is this important? Data contain a lot of noise, and you cannot resolve a question just by summarising the data about it. After empirical testing is over, the data have to be analysed and interpreted, and scientific theory is the main framework within which you interpret the results. Using inferential statistics and significance testing is meaningless if you have not formulated a prediction based on a hypothesis derived from existing theory.
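As a minimal sketch of that workflow (group names and scores made up), the directional prediction comes first, and the significance test merely quantifies how surprising the data would be if there were no real difference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Theory-derived prediction, stated BEFORE looking at the data:
# "the treatment group will score higher than the control group".
control = rng.normal(loc=50, scale=10, size=40)    # made-up scores
treatment = rng.normal(loc=55, scale=10, size=40)  # made-up scores

# A one-tailed t-test matching the directional prediction. The p-value
# says how surprising this gap would be under the null hypothesis of no
# difference; what it means still depends on the motivating theory.
t_stat, p_value = stats.ttest_ind(treatment, control, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```

Run the same test without a prior hypothesis and the p-value invites whatever story happens to fit the numbers, which is exactly how noise gets dressed up as discovery.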

The role of (social) media in misrepresenting scientific findings

Illustration: Dusan Petricic for The Scientist

There is also the issue of properly communicating research findings to the general public. Specialised science journals (especially in the social sciences) have a well-known publication bias towards positive findings and “interesting” results; there is little to no interest in publishing negative or null results and replication studies. Traditional media outlets take this a step further, focusing only on unusual, “sexy” findings. Good science is thorough and nuanced and does not fit into a tweet or a head-turning headline.

“If journalism as a whole is bad (and it is), science journalism is even worse. Not only is it susceptible to the same sorts of biases that afflict regular journalism, but it is uniquely vulnerable to outrageous sensationalism.” — The American Council on Science and Health

Unlike peer-reviewed journals, traditional media caters to a much wider audience. Its main goal is not to educate but to inform and entertain, which it usually does via simplified, sensationalist summaries of scientific findings designed to capture the fleeting attention of the public. To add to the problem: despite the rise of data journalism, only big, influential outlets can afford a dedicated science journalist or a staff member with a thorough understanding of how to read and interpret scientific findings.

Illustration: From the portfolio of Davide Bonazzi

Most online and offline outlets cover scientific findings based on the information available in press releases. The problem is that press releases are usually written by a university’s marketing and communications team rather than by the researchers themselves. As a result, research findings are often amped up, embellished or even seriously distorted. A correlational study in 2014 found that when press releases exaggerate findings, the news articles based on them are more likely to contain exaggerations too. Media coverage also often links to other media stories rather than the original study, making it even harder to trace misleading claims.

Social media further adds to the problem. The business model of major social media channels favours sharing within self-curated information bubbles and feeds existing biases (especially confirmation bias and the bandwagon effect). It also encourages the sharing of sensational findings without proper scrutiny: 6 in 10 people are likely to share content without even reading it.

The result is an audience that lives under the illusion of being well informed while skimming through an overabundance of rarely fact-checked information.

Illustration: David Perkins for Nature magazine

Healthy scepticism vs science denialism

To be fair, there are reasons to be sceptical of some scientific findings. There is a long history of bias and conflict of interest in clinical trials and biopharmaceutical research, compellingly exposed in books like Ben Goldacre’s Bad Pharma and Bad Science. There is a raging replication crisis calling into question the reliability and reproducibility of both small and seminal studies. The tricky incentive structure of the current science funding model also fuels questionable research practices (QRPs), and websites like Retraction Watch regularly report on retracted studies and unreliable findings.

There are multiple reasons for unreliable scientific findings. Sometimes a simple coding mistake can ruin an entire study; a good research design and thorough methodology can be undermined by poor statistical analysis. Most failures to reproduce stem from innocuous mistakes made in good faith. Outright fraud and extreme cases of QRPs are not inherent to science: they are entirely antithetical to it and tend to be exposed by members of the scientific community.

The problem is that the general public takes questionable findings as proof that science itself is bad. Ironically, it is precisely good science practices that allow us to identify such issues and expose them as bad science.

The scientific community is already addressing the replication crisis and questionable research practices by advocating for open science and for more transparency and accountability through mechanisms like registered reports. The crisis might even turn out to be a strong impetus for improvement and a stimulus for open research cooperation and public accountability.

It is important to distinguish between being sceptical (the essence of science) and being a denialist. Critiquing methodological and statistical issues in scientific research is a good thing. Dismissing empirically established findings and resorting to “out-of-the-box” denialist thinking is dangerous when it impacts public health decisions and important policy debates.

Illustration: From the portfolio of illustrator Señor Salme

Looking Ahead | Opportunities to connect better with the public

Trust & transparency

People who mistrust science often do so because they perceive it as elitist, opaque and out of reach. Clarity and accessibility are essential for any successful dialogue, so one way to battle such negative stereotypes is to replace insular jargon with a more transparent and engaging presentation of scientific findings.

It is also essential to communicate to the public that examples of bad science are identified because of the work of methodical scientists who follow the principles of the scientific process. As in any field, we should not lose trust in all scientists just because of a few bad apples.

Accessibility

Scientific research is often heavily dependent on public funding, so there is a strong case for making research findings as accessible as possible. Recent years have seen a surge in science outreach, especially among young scientists willing to engage audiences of all ages. There are countless local and international science festivals, as well as global initiatives like The Story Collider and Skeptics in the Pub, which go beyond the conventional TED Talk format and encourage personal storytelling by scientists across all fields and career stages. However, most of these events attract people who are already passionate, or at least curious, about science.

Andy Field’s graphic novel about metal music and statistical adventures!

A toolbox, not a repository for facts and formulas

Another useful approach is to encourage school education programs to reframe science lessons as workshops which give kids a toolkit of transferable skills.

It is hard to sell the importance of a physics formula to adolescent kids (or even their parents). But it is easy to connect with scientific concepts when they help you solve a puzzle or explore an exciting problem in everyday life. Andy Field uses edgy graphic novels to make statistics engaging, and my favourite app of the year, Brilliant, explains complex topics with bite-sized puzzles and brain teasers.

Growth Mindset

Psychologist Carol Dweck has hypothesised that the concept of innate talent is linked to having a fixed mindset, e.g. “Scientists are born geniuses. There is no point in trying to understand science if I do not feel a natural inclination for it”. A better strategy is to reframe skills and knowledge as things you can develop and improve over time (a growth mindset), e.g. “I struggle with understanding this concept, but I can ask somebody for help and work at it until I get it”.

The evidence on whether a growth mindset actually improves school performance is still mixed, but there is a good case for using it to reframe attitudes toward taking science classes and to battle stereotypes about who is best suited to excel in them.

Recognizing Bias

Cognitive biases are not a conscious thought process: we often do not know they are influencing our thoughts or behaviour. However, learning more about them can help us recognize and challenge them. Unconscious bias training is becoming a buzzword in corporate environments, but it might be worth designing a more accessible version of such training for early education.

Better Media Practices

Last but not least, the way scientific findings are communicated would benefit significantly if media outlets established standards for fact-checking and reporting scientific results. PR and communications teams at universities and research institutions could also work more closely with research authors when writing press releases, to make sure findings are accurately presented.

Above all, we must emphasise the importance of slow but methodical progress towards uncovering more about the world that surrounds us.
