On truth in science

Paul Sztajer
6 min read · May 24, 2016


I’ve been thinking a lot about truth in science, or more specifically, about the fuzziness of scientific truth.

At its core, science isn’t really concerned with exact truths. It cares about truth and accuracy, but the modern conception of science recognises a couple of key facts:

  • ‘Absolute truth’ is never provable, and likely out of our reach
  • The ability of our brains to properly process and understand the core ideas of a lot of science is a key limiting factor

The first of these is equal parts depressing and exciting: sure, it means we’re all going to die without true understanding, but just think of all the amazing things we’ve been able to invent and develop from new scientific discoveries. If we reached an ‘endpoint’ of science, that would stop, and what would we do then?

The second is, in my view, much more interesting. Our brains are wired to understand certain ideas and concepts very well, and are fundamentally bad at others. We are, as a species, chronically bad at statistics, and anyone who says that they can truly visualise 4-dimensional space is either a massive outlier or a liar. And let’s not start on the quantum world, where we mostly have to give up on trying to understand what’s going on and just use the equations that have been giving us the right answers so far.

‘So far’ being the operative words. Even in our most exact models, the equations are just metaphors for the truth; we expect them to be proven imperfect at some point in the future. Science, then, is not truth, but a series of increasingly accurate metaphors.

None of this is particularly revelatory, especially to those who practise science, but it’s both important and subtle, and it’s one of the major issues plaguing the communication of science in the world at large.

For much of the population, science has always been presented as hard, exact truth. In school, we’re told what things are: the atom has electrons orbiting it like planets; there are three states of matter; and Brontosaurus is a dinosaur. We’re told that science is the acquisition of facts, and once we know something is true, it’s true, right?

But then, if you’re left with this impression of science, you very quickly get confused. Someone tells you that electrons don’t really orbit the nucleus, or you hear about the dozen other states of matter, or you read an article about how Brontosaurus was never really a valid dinosaur in the first place. And then you’re hit with some god-awful science reporting (which was coincidentally covered recently on Last Week Tonight), and suddenly, given a choice between absolute bullshit and absolute truth, science looks like absolute bullshit.

And then the rest of us wonder why people can agree with climate change deniers or anti-vaxxers.

The current state of science reporting is a massive part of the problem, and while there are huge improvements to be found there, I don’t see that problem ever going away. The fact is, even if you remove all the hyperbole and bias, science is complicated. Every time you want to explain something to a layperson, you have to sacrifice some element of truth or accuracy somewhere else. We simply can’t expect that everyone’s going to be kick-ass at picking the perfect metaphors, or that everyone will interpret them in ways that don’t cause problems down the line.

Plus, we’re Homo “Hyperbole and Bias” Sapiens. That stuff’s not going away either.

Which leaves us with education: helping people understand just how scientific truth works. That’s not a small task; it doesn’t just involve saying “science isn’t absolute truth”. To actually understand how to deal with science in the world around us, you need an intuitive understanding of levels of confidence. Without understanding just how incredibly accurate science can be (and also how inaccurate it can be), “science isn’t absolute truth” just becomes “science is confusing and arbitrarily true”.

In the phrase “science is a series of increasingly accurate metaphors”, the winner of Most Downplayed Word is “accurate”. The sheer accuracy of a large segment of scientific knowledge is astounding, and it’s where we get tripped up. Newton’s law of gravity is accurate enough to be useful and valid for pretty much everything we interact with on earth, but it’s not absolute truth. Einstein’s relativity is a whole order of magnitude or three more accurate than that, to the point that we’ve never found it to be inaccurate, but we already know that it’s likely wrong because it doesn’t play nicely with quantum physics (which has also never been found to be inaccurate).
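
To put a rough number on that gap (a back-of-the-envelope sketch of my own, not anything from the post itself): the leading general-relativistic correction to Newtonian gravity near a mass M at distance r scales as GM/(rc²), and at the Earth’s surface that factor is vanishingly small.

```python
# Rough size of the general-relativistic correction to Newtonian gravity
# at the Earth's surface: the dimensionless factor GM / (r * c^2).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m
c = 2.998e8          # speed of light, m/s

correction = G * M_earth / (R_earth * c**2)
print(f"GR correction at Earth's surface: ~{correction:.1e}")
# prints ~7.0e-10: Newton is off by less than a part in a billion here
```

That’s the sense in which Newton is “wrong” but still perfectly good for bridges and ballistics.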

When someone doesn’t understand the principles behind scientific accuracy (and, let’s be honest, even sometimes when they do), they’ll usually lump a finding in with other things whose accuracy they understand better. Climate, as an example, gets lumped in with weather, and weather is famously and fundamentally imprecise. Which means, at the end of the day, that a reasonably educated person who doesn’t understand confidence levels and isn’t predisposed to care about the environment probably took a while to get on board with climate change.

I’d argue that it’s not even necessary for people to really understand the differences between, say, six sigma and seven sigma confidence. Most of the problems we see come from the difference between things that are ‘on a human scale of confidence’ (let’s say ‘up to 99%’) and things that are ‘on a scientific scale of confidence’ (which ranges from 99% to 99.letsjustsaytheresalotof9s%).
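
For concreteness, here’s a quick sketch of my own (using SciPy, and assuming a normal distribution counted two-sided) of how those sigma levels translate into that pile of 9s:

```python
# Translate "sigma" confidence levels into percentages, assuming a
# normal distribution and counting the probability within ±sigma.
from scipy.stats import norm

for sigma in (1, 2, 3, 5, 6, 7):
    confidence = norm.cdf(sigma) - norm.cdf(-sigma)
    print(f"{sigma} sigma ≈ {confidence * 100:.10f}% confidence")

# 2 sigma is roughly the "human scale" ~95%;
# 5 sigma (the particle-physics discovery threshold) is already ~99.99994%.
```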

Unfortunately, ‘confidence’ looks a lot like ‘p-value’, which looks a lot like ‘statistically significant’, which is quite often used in the name of crap science. This brings us into the whole rabbit-hole of understanding and evaluating biases and statistics, both of which humans are demonstrably bad at. Those skills take years to build (and are arguably impossible to fully internalise). In our education system, as I experienced it, they’re generally built at the university level, if at all.
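
To see why a bare ‘statistically significant’ doesn’t mean much on its own, here’s a small simulation of my own (an illustration using SciPy’s t-test, not something from the post): run enough experiments where there is genuinely no effect, and about 5% of them will still come back ‘significant’ at p < 0.05.

```python
# How "statistically significant" results appear out of pure noise:
# compare two groups drawn from the SAME distribution, many times over.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 1000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(size=30)   # group A: no real effect exists
    b = rng.normal(size=30)   # group B: drawn from the same distribution
    _, p = ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives}/{n_experiments} no-effect experiments were 'significant' at p < 0.05")
# Expect roughly 50, i.e. about 5% -- exactly the false-positive rate the threshold allows.
```

Stack publication bias and flexible analysis choices on top of that base rate and you get a lot of the crap science above.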

All of which means that education of this nature is hard, and finding ways to fit the subtleties involved into childhood education, across the vast variety of students out there, looks like one of those hard problems that’s probably going to be solved incrementally over a generation or two.

We also desperately need to teach these concepts to those who’ve already passed through the education system, because we’re seeing an alarming number of social and political issues that hinge on the public being able to separate good scientific evidence from bad, and we simply can’t wait for the education system to improve and two more generations to grow up.

I don’t write this post because I have any real solutions — it’s taken me about two weeks just to work out how to pose the problem (and I’m not completely satisfied that I’ve done it all that well). I’m writing it, first and foremost, to check my thinking (so if this is totally wrong, please tell me), but also to ask how others are approaching this problem, whether through education, communication or something else I’ve not thought of.

Random Notes:

  • I’m painfully aware that I’ve done a horrible job of referencing this post that’s about accuracy in communication. I also really need to move on from this post and work on other things.
  • A couple of issues strike me about the adult education/communication approach. First, we often try to teach adults in terms of issues they’re emotionally charged about, which means they might not be very receptive. Second, a lot of attempts to do this fall prey to the echo-chamber effect — basically educating the people who already agree with your point of view (this blog post is a wonderful example of that).
  • There’s a 3rd issue that can lead to a mistrust in science, which is that the science itself can be wrong. This is a pretty great exploration of simple statistical errors (and their possible causes) in the field of psychology.
