Can science be objective?
Spoiler: No, it can’t.
In our essay on “Science and Method”, we abandoned the possibility of demonstrating with certainty either that the core tenets of a theory are correct or that they are incorrect. With this, the possibility of objectively deciding which theories we should keep, and which we should abandon, also began to slip away from us. Still, chin up. We might hope that this is not so critical: even if we can’t be absolutely sure, we can still be pretty sure. After all, most scientists seem to agree on most things. Is that not good enough?
The vision of objective science we have previously introduced, however, is so tightly interconnected that we cannot reject one aspect without having to then re-evaluate every other aspect as well. Philosopher of science Imre Lakatos starkly drew the consequences of losing certainty in science: “Few philosophers or scientists still think that scientific knowledge is, or can be, proven knowledge. But few realize that, with this, the whole classical structure of intellectual values falls in ruins and has to be replaced.”
Let that last sentence sink in for a moment.
Naively, one would like to adopt a theory because it is right, or reject it because it is wrong. Popper showed us that we cannot prove that a theory is right. Quine showed us that we cannot prove that a theory is wrong. We must therefore look for other reasons to adopt or reject theories. There are many candidates for what qualities a good theory should have: it should be explanatory, or predictive, or unifying, or elegant, or useful. We might hope that if a theory has these qualities, in ever increasing measure, it would indicate that we are at least heading in the right direction. Moreover, any development of a theory which abandons them would surely be a step in the wrong direction. Let us consider this.
Must good science be explanatory?
At the start of the 20th century, there were some tough questions vexing physicists. They knew that matter was made of atoms, consisting of a positively charged nucleus orbited by negatively charged electrons. The known laws of physics stated that an orbiting charged particle should radiate energy. This led to a prediction of a physically observable effect: as the electron radiated energy it would spiral into the nucleus. They were able to experimentally test this prediction — that any and all atoms are unstable and will collapse in a burst of energy in a fraction of a second — and the results did not accord with theory.
You can do the experiment yourself: see if you can read to the end of this sentence without all of the atoms in the universe — including the ones in you — collapsing.
Did you make it?
Right. Something is clearly wrong with the theory.
People therefore looked for a theory that could explain why electrons don’t fall into the nucleus. They also hoped to answer some other questions along the way.
When you shine different colours of light through a gas, the gas absorbs some colours, while letting most colours through. When you run a current through a gas, the gas emits light of exactly the colours that it would have absorbed when you shone light through it. These colours are characteristic for each element: sodium is yellow, neon is red, argon is blue. No one knew why.
The colours emitted are not only characteristic of each element, but very specific. A sodium lamp emits light at wavelengths of 588.9950 nm and 589.5924 nm. Two lines, exactly 0.5974 nm apart, and nothing in between. No one knew why.
Classical physics could not account for these observations. In 1913, Niels Bohr developed an atomic model which set the stage for what would later become a fully-fledged quantum theory. Quantum theory is now widely regarded as one of the best, and best-established, scientific theories ever. Of the modern technologies on which we rely, there are few which were not facilitated — either directly or indirectly — by insights from quantum mechanics. Knowing how successful the science has ultimately been, let us consider the explanatory power of Bohr’s atomic model. His model consisted of three postulates which between them responded to the three phenomena we want to account for.
Classical question: Why don’t electrons fall into the nucleus?
Quantum answer: Electrons orbit the nucleus without falling in.
Classical question: Why do atoms exhibit narrow emission / absorption lines?
Quantum answer: Electrons move between stable states by emitting or absorbing narrow line-width radiation.
Classical question: Why are the emission / absorption lines discrete?
Quantum answer: The energy levels of the electrons are discrete.
The first answer amounts to little more than a declaration of “Because!” From a grammatical point of view, it can just about be considered an answer; but from a semantic point of view, it surely cannot be considered an explanation. As for the remaining two answers, while Bohr’s postulates moved the discussion, it is not so clear that they necessarily moved the discussion forward. The emission lines are discrete, we are told, because the electrons’ energy levels are discrete. But why are the electrons’ energy levels discrete?
Some people will insist that quantum mechanics (in its more fully-fledged form) is explanatory, and will point to subsequent developments in quantum mechanics which moved the explanation further. We can now ‘explain’ the electrons’ discrete energy levels as a consequence of the periodic boundary conditions on the electrons’ wave-function. This appeal to the later success of the theory brings its own problems, though. Without knowledge of the future, should we pursue a theory because, although it is demonstrably obscurantist right now, future iterations may give clarity, provided we pour in enough research effort, time, and money? If we do follow such a route, how long should we continue before we abandon hope? Do we pursue an apparently senseless theory for a year? A decade? A century? Is there an objective way to answer this question, on which all reasonable people can agree?
Even with the wave-function introduced, we are not free of the ‘explanation problem,’ because the wave-function brings new explanation killers of its own:
Q: Why do I measure the electron to be here half the time and there the other half of the time?
A: Because that is how the wave-function is distributed.
Q: Yes, but why? What happened? What made the electron suddenly turn up over there?
A: The wave-function.
Q: Yes, but… Oh forget it.
It is only for a very particular notion of ‘explanation’ that quantum mechanics — at any stage of its development — manages to ‘explain’ anything. And in the fledgling state of Bohr’s postulates, its explanatory power was very thin indeed. People were therefore faced with a choice: either quantum mechanics is not good science, or good science does not need explanatory power. Quantum mechanics had a good many detractors, not least among its own architects. But, in the end, it was accepted as science. And the requirement of explanatory power went the same way as Popper’s scientific method. Good science does not need explanatory power.
Must good science be predictive?
The electron’s states in Bohr’s model could be numbered (by an index, n=1, 2, 3…). Bohr’s model said that hydrogen should have a series of emission lines in the visible spectrum, corresponding to an electron falling into the excited state, n=2, from higher energy levels. It also said that there should be a series of emission lines in the infra-red portion of the spectrum, corresponding to electrons falling into the state n=3. Bohr published his paper in 1913, and the visible and infra-red series had been (respectively) measured by Johann Balmer in 1885 and Friedrich Paschen in 1908. Obviously, accounting for established experimental observations cannot be considered a prediction. It may or may not count as an explanation.
Beyond this, however, Bohr’s model also predicted that there would be an absorption series in the ultraviolet range. This is now known as the Lyman series, preliminary results of which were published by Theodore Lyman in 1906: seven years before Bohr published his ‘prediction.’ This sheds interesting light on the significance of predictions. If scientists were purely rational and un-swayed by others’ opinions of claims, it would make no difference if a theoretical claim was made before or after the experimental measurement was made. Predictions are only considered better than post-dictions because we recognise the possibility of scientists being tempted (consciously or unconsciously) to fudge results to align with what went before. This, however, means that something can count as a prediction even if it was made after the event, provided the person making the prediction doesn’t know about the result. This, in turn, means that work by a scientist who is unaware of the research going on around them could be considered better science than exactly the same result obtained by a scientist who kept up to date with what was happening in the field. This counter-intuitive result shows that the idea of a ‘prediction’ is not an absolute notion, but varies depending on the knowledge of the scientist. Ignorant scientists can predict things that knowledgeable scientists cannot.
Such quirks aside, Bohr also predicted series that would later be known as the Brackett series (discovered in 1922), the Pfund series (discovered in 1924), and beyond. The dates show these to be bona fide predictions. Unfortunately for what would otherwise be a cut-and-dried case in support of Bohr, all of these series in hydrogen have something in common. Lyman, Balmer, Paschen, Brackett, Pfund: they are all almost exactly where Bohr said they should be. Very close, but not quite right.
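All of these series follow from a single formula, with each series labelled by the level the electron falls into. As a rough illustration (not anything Bohr himself wrote — the sketch below uses the modern value of the Rydberg constant), the wavelengths can be computed in a few lines of Python:

```python
# Wavelengths of the hydrogen series, as given by the Rydberg formula
# that Bohr's model reproduces: 1/lambda = R * (1/n_f^2 - 1/n_i^2).
# R here is the modern Rydberg constant for an infinitely heavy
# nucleus; Bohr's own value differed slightly, which is part of why
# the measured lines sit "very close, but not quite" where he said.

R = 1.0973731568e7  # Rydberg constant, in 1/m

def bohr_wavelength_nm(n_final, n_initial):
    """Wavelength (nm) of the photon emitted when an electron
    drops from level n_initial down to level n_final."""
    inv_wavelength = R * (1.0 / n_final**2 - 1.0 / n_initial**2)
    return 1e9 / inv_wavelength

series = {1: "Lyman", 2: "Balmer", 3: "Paschen", 4: "Brackett", 5: "Pfund"}
for n_f, name in series.items():
    # The first (longest-wavelength) line of each series comes from
    # the electron dropping just one level.
    print(f"{name} series, first line: {bohr_wavelength_nm(n_f, n_f + 1):.1f} nm")
```

Compared against measurement, these numbers land within a fraction of a percent of the observed lines — close enough to look far better than a fluke, yet not exact, since (among other things) the formula above ignores the finite mass of the nucleus.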
If a theory makes a prediction and it is right, this is good. If it makes a prediction and it is wrong, this is bad. But what if the theory is almost right? What if it predicts a series of emission lines, and they turn up almost exactly where the theory said they would be? Does that stand in support of the theory, or against it? We intuitively feel that it must say something in favour of the theory, at least if it looks better than a fluke. But it also tells us that there is something wrong. How far, we wonder — and in what ways — would an experiment need to diverge from a prediction to roll over from “basically supporting the theory” to “basically refuting the theory”? And what should we do if not all people agree on the answer to that question?
Moving beyond hydrogen (which has one electron), when Bohr’s model was applied to helium (which has two electrons) it was way off. For atoms with more electrons than helium, it was a non-starter. Put simply, his theory did not match the data. This was freely admitted by Bohr. Even in the paper where he proposed his model, he knew it didn’t line up with experiments.
Maybe the theory was basically right, but needed some additional effect to be included. Maybe the experimental data was basically right, but subject to some minor systematic error (or, as Bohr suggested to account for one discrepancy, maybe the experimenters had accidentally measured the wrong gas; one feels he may have been clutching at straws). In any event, the failure of the theory to match experiment was not seen as a major problem. Things seemed to be on the right track, and the discrepancies suggested avenues for future research. Given a bit more research effort, and time, and funding, the problem might well be fixable. Karl Popper, of course, would say that if a theory does not match the data, the theory has been falsified. The only difference between a theory “being falsified” and a theory “having avenues for future research” is that the former means you should stop working on the theory, and the latter means you should carry on.
Pre-dictive? Post-dictive? Wrong? Take your pick. In any event, a patchy record on predictions did not stop the development of quantum mechanics into a fully-fledged science. Good science can be predictive, but it doesn’t have to be.
Must good science be unifying?
Newton’s laws of motion were unifying: they showed that the laws that govern the movement of heavenly bodies are the same as the laws that govern the movement of bodies on earth. Maxwell’s equations for electromagnetism were unifying: they showed that electricity and magnetism were intimately related, and they showed that ultra-violet and infra-red radiation were describable in the same way as visible light. Science abounds with examples of theories that unified otherwise apparently disparate ideas. But do all good theories have to be unifying?
Again, Bohr’s model is instructive. Classical mechanics could describe the fall of an apple, the trajectory of a cannonball, and the movement of planets. Classical electrodynamics could describe the transmission of light, radio waves, microwaves, and X-rays. At the end of the 19th century, science could explain almost every observed phenomenon except for atomic spectra and light-bulbs. In 1900, Max Planck solved the light-bulb problem. Inspired by Planck, Bohr set out to account for atomic spectra.
Bohr’s theory — described above — worked, more or less, for the spectrum of atomic hydrogen. It also worked (a little bit less well) for other atoms with one electron, like singly ionised helium and doubly ionised lithium. It did not work for neutral helium; indeed, it didn’t work for any atoms with more than one electron. It didn’t work for molecules; not even for molecules like singly ionised molecular hydrogen, which only has one electron. It had nothing to say about the fall of an apple or the movement of planets. It could explain nothing that had already been explained. It just described hydrogen. Imperfectly. If Bohr was doing science (and many people, particularly with the benefit of hindsight, argue that he was) then science, apparently, does not need to be unifying.
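The pattern of successes and failures is visible in the model’s one formula for energy levels, which depends only on the nuclear charge Z and the level number n — and contains no term at all for electron–electron repulsion. A minimal sketch (with modern constants; ionisation energies are quoted approximately):

```python
# Bohr's energy levels generalise to any one-electron ion by scaling
# with the square of the nuclear charge Z: E_n = -13.6 eV * Z^2 / n^2.
# This covers H (Z=1), He+ (Z=2), and Li2+ (Z=3) — but the formula
# has no term for electron-electron repulsion, so it simply has
# nothing to say about any system with two or more electrons.

RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, in eV

def bohr_energy_ev(Z, n):
    """Energy of level n in a one-electron ion of nuclear charge Z."""
    return -RYDBERG_EV * Z**2 / n**2

# Ionisation energy = energy needed to lift the n=1 electron to E=0.
for Z, ion in [(1, "H"), (2, "He+"), (3, "Li2+")]:
    print(f"{ion}: ionisation energy {-bohr_energy_ev(Z, 1):.1f} eV")
```

The computed values (about 13.6 eV, 54.4 eV, and 122.5 eV) sit close to the measured ones — which is exactly the model’s scope: one electron, one nucleus, and nothing else.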
What is unification worth?
The deficiencies of quantum mechanics were there for all to see. History shows that many scientists — including many of the architects of quantum mechanics — claimed that quantum mechanics could not, or should not, be accepted as serious science. History also shows that many scientists embraced it anyway, regardless of its many obvious flaws. This raises two questions:
- Why did some scientists embrace such a problematic theory?
- Given that some scientists embraced it, why did some scientists not?
Classical mechanics had problems. Not many, but a few. Problems like how to explain atomic spectra. A lot of very smart people had spent a lot of time trying to solve those problems. All of the most promising routes to solving the problems had been shown to fail, and people were running out of ideas for what to try next.
Quantum mechanics was in exactly the opposite situation. It had a lot of problems. And they were big problems, like the fact that the basic principles seemed to lead to deeply unphysical consequences. But no one had spent much time trying to fix the problems. None of the routes to solving the problems had been shown to fail (because none of them had been tried). So, given their almost total ignorance of the situation, people had lots of ideas for things to try next.
A choice had to be made: should scientists pursue a (classical) theory that was explanatory, predictive, and unifying, but which was known to be fruitless regarding the questions at hand; or should they pursue a (quantum) theory that was confusing, piecemeal, and often wrong, but which opened up the hope of bearing fruit, if only we could somehow fix the bugs? This choice involves making a trade-off between explanatory power and new possibilities. In making this trade-off, we face the question: How many new possibilities do you have to gain to outweigh what you lose in explanatory power? Before attempting to answer this question, we shall take a brief excursus to explain why the question itself is problematic.
Consider the following questions:
- Would you rather be given a $20 note or a $10 note?
- Would you rather be given a $20 note or three $10 notes?
- How many $20 notes would I need to offer you before you opted for them instead of three $10 notes?
These are all reasonable questions. If you asked a hundred people, they would most likely all give you the same answers. Anyone who said they would rather be given a $10 note than a $20 note is probably a little crazy.
Now consider the following questions:
- Would you rather have a bottle of water or a bottle of wine?
You can get water for free out of a tap, but you have to pay for wine. So the wine is worth more money. But if you don’t have a corkscrew to hand, you can’t get into the wine, so it is of no immediate use to you. But maybe you are going to a friend’s for dinner later, and don’t want to show up with a bottle of water. Or maybe you’re feeling dehydrated and the wine just won’t help. Or maybe, for health or religious reasons, you never drink alcohol.
Depending on the circumstances, two different people might give different answers to this question. This does not mean that one of them is crazy. Depending on the context, either answer can be reasonable.
- Would you rather be given a bottle of water or three bottles of wine?
If you want a party, or you want to sell what you get to make money, you would rather three bottles of wine. If you are hungover, or you are just about to walk into an exam, or you are a teetotaller, you would rather a bottle of water. If you want to rinse ketchup off a white shirt, water is the only way to go. Different people, in different contexts, wanting to achieve different things, can very reasonably give different answers.
- How many bottles of water would I need to offer you before you opted for water instead of three bottles of wine?
If you have opted for the water in the last question, the answer is clearly ‘one’. If you are badly hungover, the answer might be zero: you would rather have nothing at all than have three bottles of wine. If you are off to a friend’s house, however, no increase in the number of bottles of water offered will change your mind: turning up to dinner with five hundred bottles of water but no wine is not socially acceptable.
Evidently, water and wine raise issues that never occur with money. One $20 note is worth two $10 notes. This is so reliably true that you can set maths exams with questions about money. It doesn’t matter if the notes are old, or crumpled, or if they were once held by the Emperor of Japan. It makes no difference if you are tired, or hungry, or on your way to a party. What you want to use the money for is irrelevant, and how much money you have already is irrelevant: one $20 note is worth two $10 notes. For all people, at all times, in all places. That is why money works. We get so used to this that we sometimes forget that many things do not work like that.
The value, usefulness, or goodness of a bottle of water is not simply different in magnitude to that of a bottle of wine, but different in kind. As such, water and wine cannot be compared in any general sense: it is not meaningful to ask how many bottles of water a bottle of wine is worth. This is known as incommensurability. There is no universal, totalising, objective sense in which water is better or worse than wine. It all depends on what you want it for, what you have already, and what you want to do next.
Incommensurability of scientific theories
Now let us return to our earlier question: “How many new possibilities do you have to gain to outweigh what you lose in explanatory power?” If new possibilities are much like money — if they can be torn from their context and objectively traded — then this question might be answerable. If new possibilities are more like wine, then it may not be. Certainly, we should not expect that all reasonable people will give the same answer.
If you have spent your life getting to know a well-established and fruitful theory, and you want to use it to fill in a few gaps in our knowledge, then new possibilities are unattractive and probably unnecessary. If you are a young researcher and want to make your mark on a field, new possibilities are very attractive. If you already have a theory with open questions everywhere, why would you care to add more new possibilities? If you enjoy the adventure of being different, maybe you do care to add more new possibilities.
Faced with any given theory, it is possible that some scientists might opt for trying to fill in a few gaps with the tried and tested methods. Other scientists, faced with exactly the same theory, and in possession of exactly the same information, might choose to try something different, for the sheer adventure of it. Some departments or grant agencies might be rather conservative and prefer to support research on well-established theories and methods, while other departments or funding agencies might fancy themselves to be at the cutting-edge and prefer to support research on new, more risky theories and approaches. No amount of novelty will dissuade one scientist from their fruitful plodding, while no amount of lost explanatory power will dissuade another scientist from their adventure. Neither scientist is necessarily crazy. Neither scientist is necessarily being un-scientific, much less anti-scientific.
The hope of new possibilities is clearly like wine, not money. Good science can be explanatory, it can be predictive, and it can be unifying. It can be all of these things, or some of them, or none of them. And there is no objective standard to definitively say that in any given context unification is more important than prediction, or less important than hope.
What just happened?
We started this essay facing the fact that the scientific method could not definitively tell us whether the key ideas within any given theory were true or false. If they were true, it was hard to see how science made them demonstrably true, or that we could know their truth with certainty. But maybe, we thought, just maybe the problems wouldn’t spread. Maybe we could stop the rot. Maybe, even if we were not certain of the right path, there was at least some objective way of choosing the best way forward.
In considering various heuristics for signposting when we might be on the right track — whether the theory is explanatory, predictive, or unifying — we found that none of these is essential. More significantly, we found that, if given the choice between a theory which has explanatory power and a theory which is unifying, two different scientists might not make the same choice. This is not because one or other of them is ignorant, or unreasonable, or a bad scientist. They simply have different opinions regarding which scientific theory is worth pursuing. We are now a long way from the Enlightenment hope that subjective opinions could be eradicated from science altogether. We now find that subjectivity is part of science: moreover, it is a necessary part of science, and it reaches to the very core of the entire scientific endeavour.
Can we stop the rot here, though? No.
At the start of this essay, we noted Lakatos’ insistence that “the whole classical structure of intellectual values falls in ruins and has to be replaced.” So far, we have only undermined the scientific method (in the last essay) and objectivity (in this one). Objectivity is intimately connected to the network of other aspects of science. If different scientists can come to different conclusions about what research is worth pursuing, then what is considered to be ‘science’ need not be the same for all people, in all places, at all times. If objectivity falls, universality falls. We will discuss this in a future essay.
But wait! There is more!
We started this essay recognising that science cannot tell us truth with certainty. We end this essay recognising that science does not have an objective or universal notion of what ideas are worth pursuing. There is a major shift away from objectivity and universality. But, more than that, we have stopped even asking about truth.
A scientist may work on Classical Mechanics because it is fruitful, or work on Quantum Mechanics because it is exciting, or work on String Theory because it is unifying. But, in this discussion, none of them adopt their theory of choice because it is true. It is one thing to disagree about what is true. But we have now reached a point where, in a very real sense, science does not even care about what is true. We will discuss this in another future essay.
This essay and the Re-Assembling Reality Medium series are brought to you by the University of Hong Kong’s Common Core Curriculum Course CCHU9061 Science and Religion: Questioning Truth, Knowledge and Life, with the support of the Faith and Science Collaborative Research Forum and the Asian Religious Connections research cluster of the Hong Kong Institute for the Humanities and Social Sciences.
Imre Lakatos (1980). The Methodology of Scientific Research Programmes: Philosophical Papers Volume 1. Cambridge: Cambridge University Press. p. 8.
Niels Bohr (1913). “On the Constitution of Atoms and Molecules.” Philosophical Magazine 26, 1–25.
If this doesn’t seem like much of an ‘explanation’ to you, it means you are not a quantum physicist. There is no shame in that. Interestingly, the fact that a given sentence can prompt one person to say “Oh, that explains it!” and prompt another to say “Eh?” suggests that a thing is only ‘explanatory’ relative to the person trying to understand the explanation. If ‘explanatory power’ is not an objective property of the universe, but depends — to some extent — on the individual trying to understand the explanation, then explanatory power is not universal or objective. We shall not pull any more on this thread here, because universality and objectivity in science are unravelling fast enough anyway.
Max Planck (1900). “On the Theory of the Energy Distribution Law of the Normal Spectrum.” Verhandlungen der Deutschen Physikalischen Gesellschaft 2, 237.