Fake News and the Manifest Truth Delusion: Part 3 of 3
A Ministry of Truth is Impossible
There are insuperable logical barriers to systematically stamping out fake news and, therefore, to any project of large scale “mind control,” the kind envisaged in Orwell’s book Nineteen Eighty-Four.
Orwell’s book is fiction, of course. However, many, such as Joe Rogan, suppose that such a ministry is at least possible. They would accept that such a ministry presupposes that falsehood could be determined on key issues. The key question, then, is: could there be a ministry of fake news?
When people complain about the “pandemic of fake news”, they often assume vaguely that such a ministry is feasible but reject the idea because it would violate civil liberties of free speech and privacy. They are right to fear the loss of liberties, but wrong to assume that such an enterprise is a workable idea.
Imagine that you have been charged with the task of setting up such a department. It will be a bureaucratic computing machine whose input is the purported news that day from all sources around the world. By “systematic” I mean you’ll have to find or create an algorithm for separating fake from genuine news. An algorithm is a well-defined rule of procedure with finite feasible steps and an unambiguous conclusion. It would have two possible outputs: true news and fake news.
We have already seen how the manifest truth delusion doesn’t help. Truth is as often opaque as falsehood. And as Popper correctly argued from Tarski’s work, there is no general criterion of truth either. So let’s make it easier for you. The thought experiment will make clear how delicate and ultimately impossible it is to use any general criterion of truth and falsity.
Luckily for you, your paymasters are enthusiastic proponents of the coherence theory of truth. You will only be required to fashion an algorithm that always detects when a news report or piece of commentary is inconsistent with the body of doctrine your paymasters wish to safeguard from criticism and reproduce from mind to mind through a population. This would be a coherence theory of fake news. And since you are already on board with your paymasters’ ideological commitments, it looks like an easy job: you only have to notice whether you agree with a news item or not. If you agree, it’s true news; if you disagree, it’s fake news.
To make matters even easier for yourself, since your paymasters love the BBC and CNN, you might imagine that for the algorithm you could substitute the proxy rule “scrutinise the candidate news item and consult the BBC or CNN and check whether it is consistent with their reports”.
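The proxy rule can be sketched as a toy program. Everything in it (the trusted-report list, the crude negation-matching test) is my own illustrative assumption, meant only to make the imagined algorithm’s two-output structure concrete:

```python
# Toy sketch of the coherence-theory "ministry" classifier described above.
# The trusted reports and the contradiction test are illustrative assumptions;
# no real system could check consistency this way, since the logical content
# of a statement is infinite.

TRUSTED_REPORTS = {
    "vaccine reduces severe illness",
    "storm made landfall on tuesday",
}

def contradicts(claim: str, report: str) -> bool:
    # Crude test: treat "not <report>" as the only recognisable contradiction.
    return claim == "not " + report or report == "not " + claim

def classify(item: str) -> str:
    # The algorithm's two possible outputs: "true news" or "fake news".
    for report in TRUSTED_REPORTS:
        if contradicts(item, report):
            return "fake news"
    return "true news"  # anything the canon is silent on passes as true
```

Notice that the sketch already exhibits the trouble: any item the canon is silent about sails through as “true news”, and any genuine correction of a trusted report is branded fake.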
The Unfathomed Meaning of What We Say — And of What the Censor Doesn’t Want Said
What I’ve done here is to steel-man the idea that a ministry of fake news is possible. Interestingly, it boils down logically to a problem of propaganda: the faithful reproduction of a message. This is one more place where logic, often claimed to be irrelevant to propaganda by commentators such as Scott Adams, rises to undermine the fantasy. In its fundamental case, the goal of propaganda is for a message — a doctrine, ideology, or other assertion — released and believed today, to match the same message adopted sometime in the future. The one must logically entail the other. At the very least, the later instance of the message must entail the earlier original message, since ancillary beliefs typically accrete to the main message over time, in its journey from one mind to another.
However, as with all live ideologies with any plausibility, commentary and interpretation of events in the light of the ideology must proceed in tandem and in line with the transmission of the core message or canonical doctrine. The provision of live running commentary (news) is the motivation behind your identification of fake news.
The problem here is that we never know in advance all of what we say. Logic shows us that the ramifications of what we say are literally infinite. This follows from the fact that the meaning of a statement consists, at least in part, of what it prohibits. “John is wearing a red hat” prohibits the possibility that he’s wearing a green hat, and so on. The range of prohibited possibilities can be described as the class of statements that are logically incompatible with it. If I say that the golf meeting is every Thursday, you know that you’ll miss it if you turn up on any other day of the week. In addition, the golf club house doesn’t open on bank holidays. If you know your bank holidays, you’ll also know that the meeting was called off twice since 2015. (Since 2015, there have been two Thursday bank holidays in the UK.) But you might be surprised by any number of other implications that could unfurl from my announcement if combined with other things you either already know or will only know in the future. In everyday life, we rarely operate with just one message or assumption, but with a continually changing set of assumptions. Your regularity at the golf club is a trivial example, and we’ve all been surprised by the repercussions of an insurance policy or a Google user-agreement. It’s not just the fine print: it’s the unseen logical repercussions of the fine print. The intractability of this problem — of surveying the whole meaning of what you say — becomes even clearer in the case of complex international trade agreements and ideological systems.
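The golf-club example can be made concrete with a toy forward-chaining sketch. The facts and rules below are my own illustrative assumptions; the point is that consequences emerge from combining one announcement with background assumptions, consequences the announcer never surveyed:

```python
# Toy forward-chaining sketch: one announcement, combined with background
# assumptions, yields conclusions the speaker never stated or surveyed.
# Facts and rules are illustrative assumptions, not a real inference engine.

def close_under(rules, facts):
    """Repeatedly apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

RULES = [
    (frozenset({"meeting every thursday", "club shut on bank holidays"}),
     "no meeting on thursday bank holidays"),
    (frozenset({"no meeting on thursday bank holidays",
                "some bank holidays fall on thursdays"}),
     "some meetings are cancelled"),
]

announcement = {"meeting every thursday"}
background = {"club shut on bank holidays", "some bank holidays fall on thursdays"}

derived = close_under(RULES, announcement | background)
# "some meetings are cancelled" follows, though the announcer never said it
```

In a real ideology the rule set is open-ended and the background assumptions keep changing, which is why the closure can never be surveyed in advance.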
The Ruthless Logic of Ideological Fidelity
In the history of ideas, logic matters. Let’s have a look at an example from history. The story of papal infallibility highlights the role of logic. On the face of it, conferring infallibility on the Pope looks like an “impregnable rampart” for the defence of the faithful doctrine and the authority of the Church. Any criticism can be dismissed as unfounded and any heresy can be clearly and definitively demarcated and crushed. But papal infallibility is not an impregnable rampart; it is an inescapable straitjacket. For if one pope used it to define an article of faith or morals, he and later popes would be constrained by its implications. And if a later pope inadvertently contradicted this definition, at least one of them would be incorrect, and so they could not both be infallible. The Church would then face a dilemma: either later popes would be constrained by earlier definitions or infallibility would have to be repudiated. And, as we have just seen, the scope of what we say can surprise us.
The idea of papal infallibility was, contrary to the popular conception, a late development, emerging explicitly in the 13th century from, to use Tom O’Loughlin’s phrase, the “creeping infallibility” that started in the 8th and 9th centuries. The early bishops of Rome were regarded as first among equals rather than as persons of supreme authority over the whole Church. However, they became increasingly accustomed to both temporal and spiritual authority. They wanted to make pronouncements on faith and on the moral behaviour of royalty. In a world of changing balances of authority and power, they didn’t wish to tie their hands, but would rather be free to adapt. Hence, although the Franciscans of the 13th century toyed with the idea of infallibility to prevent any future “degenerate” pope gaining authority and overturning their edicts, they quickly realised that, paradoxically, they and the Church would have greater authority without it. It received a boost in reaction to Luther, but has rarely been invoked, and when it has, its application has been strictly circumscribed and used only to endorse a few matters of faith already widely accepted in the Catholic Church for hundreds of years. Any ideology wishing to preserve its message intact beyond myopic election cycles could learn at least this lesson in humility from the Church. (I show in The Myth of the Closed Mind that even such humility is not enough to satisfy the ruthless demands of logical fidelity.)
The point is this: if you take a candidate news item, find that you disagree with it, and therefore consign it to the fake-news bin, how can you be certain that by classifying it as false you don’t commit yourself and the canonical doctrine to rejecting something whose unforeseen implications actually follow from the original message or doctrine? Similarly for news that you agree is genuine. You may unwittingly adopt as genuine a news commentary whose unforeseen logical ramifications contradict your worldview.
The upshot of my argument, elaborated in my book The Myth of the Closed Mind, is that no one and no algorithm can survey all the potential ramifications of the ongoing classification of fake news. Of course, a government can make some headway in suppressing opposition posed by fake news, but its attempt to do so in the long run will be plagued by ham-fisted blunders in which it unintentionally violates its own ideological program. The non-governmental organisation Black Lives Matter is a smaller case study of how a supposedly manifest ideological target for desecration — statues celebrating slavery — degenerates into an amorphous and shifting target, meaning simply anything associated with slavery. But since everything reminds us of the past, including the good, the bad and the ugly, the target eventually loses its meaning and comes to involve the denigration of all cultural heritage, including the values of equality presumed to be central to BLM. I suspect BLM is not even aware of its own precariousness in the stream of ideological history, and that it is a transitory phenomenon tied to political events. BLM does not show the meticulous concern over the long-term fidelity of its own propaganda evinced by the Catholic Church. As with the history of the Church, such projects typically degenerate into random shaming, or the burning of conspicuous heretics.
A ministry of fake news as a systematic enterprise is a fantasy. The idea of such a ministry carries with it a world of such outrageous metaphysical and epistemological presuppositions that even the gods themselves, were they not omniscient, might shirk the task.
It’s a task greater than all of science, not least because science is a big part of the news: COVID, wildfires, climate change, economics, and so on. Adjudicating on these issues requires settling them, and they’re not settled.
Even if we grant the fantasy that one can establish in most cases the intention to lie or exaggerate, which are inherently subjective states, there is no fake news algorithm that is free of potentially self-destructive flaws.
Fake News Driven by Irrational Biases
Many might object that we can’t depend on the gullible to change their minds appropriately. Indeed, they are closed-minded and ruled by bias. Fake news is therefore driven and exacerbated by incorrigible biases and prejudices: confirmation bias, anchor bias, etc.
My view (also elaborated in my book) is that biases are in many cases heuristics — rules of thumb that are shortcuts to problem solving. Even when we can’t readily explain them as heuristics, they ought to be treated like all other conjectures. From an evolutionary perspective, I explain in my book why we should expect biases in many instances to be adaptive propensities: corrigible and modifiable by critical argument and experience.
Bias researcher Gerd Gigerenzer, Director of the Center for Adaptive Behaviour and Cognition at the Max Planck Institute for Human Development, also takes a refreshingly different view of biases, arguing not only that they are rule-of-thumb shortcuts, but that they can be even more accurate than procedures that take more information into account. This is the exact opposite of the traditional justificationist assumption. Sometimes less is more. For example, when German and American students were asked which is the larger city, Detroit or Milwaukee, the Germans all got the right answer, Detroit, but only 60% of the American students got it right. That’s a puzzle from the point of view that says more information is always better when making decisions. The answer, Gigerenzer says, is that the German students, undistracted by detailed knowledge of the two cities, used what is called the recognition heuristic: if you recognise the name of one city but not that of the other, infer that the recognised city has the larger population.
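A minimal sketch of the recognition heuristic, assuming roughly 2020-census populations and hypothetical recognition sets:

```python
# Sketch of Gigerenzer's recognition heuristic for the city-size question.
# Populations are approximate 2020 US census figures; the recognition sets
# are hypothetical illustrations of a German and an American student.

POPULATION = {"Detroit": 639_000, "Milwaukee": 577_000}

def recognition_heuristic(a, b, recognised):
    """If exactly one of the two cities is recognised, infer it is larger;
    otherwise the heuristic is silent and we must decide some other way."""
    if a in recognised and b not in recognised:
        return a
    if b in recognised and a not in recognised:
        return b
    return None  # both or neither recognised: heuristic cannot discriminate

# A German student who has only heard of Detroit:
german_answer = recognition_heuristic("Detroit", "Milwaukee", {"Detroit"})

# An American student recognises both cities, so the heuristic cannot help:
american_answer = recognition_heuristic("Detroit", "Milwaukee",
                                        {"Detroit", "Milwaukee"})

# The heuristic happens to get it right here, since Detroit is in fact larger:
heuristic_correct = POPULATION[german_answer] == max(POPULATION.values())
```

The extra knowledge the American students had is exactly what disables the heuristic for them, which is the “less is more” point.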
But suppose they are errors. If our problem is to minimise error, we can define “bias” as an inclination to make a particular type of error. This is how the term was used in astronomical observations. My view is that, given we are fallible and always prone to error, these biases are conjectures — risky trials — that the organism has produced in response to a problem-situation. We must also be aware that our very attempts to identify a bias are themselves conjectural and may be overturned once we look more closely at the behavioural inclination at issue. In other words, we are making conjectures about conjectures. The recognition heuristic, for example, is explained on the critical rationalist view as a conjecture (that name recognition and city size are strongly associated), and it works because the correlation holds.
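The statistician’s sense of bias, a systematic inclination toward one type of error, can be illustrated with the textbook case of the naive variance estimator, which systematically undershoots the true variance. The sketch below is my own illustration, not the author’s:

```python
# "Bias" as a systematic inclination toward one type of error: the naive
# variance estimator (dividing by n) is biased low; Bessel's correction
# (dividing by n - 1) removes the bias. Standard-library-only illustration.
import random

random.seed(0)
TRUE_VAR = 4.0  # variance of the population sampled below (std dev = 2)

def naive_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)        # biased low

def corrected_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # unbiased

# Average each estimator over many small samples of size 5:
trials = [[random.gauss(0, 2) for _ in range(5)] for _ in range(20000)]
avg_naive = sum(naive_var(t) for t in trials) / len(trials)
avg_corrected = sum(corrected_var(t) for t in trials) / len(trials)
# avg_naive systematically undershoots TRUE_VAR (around 4/5 of it);
# avg_corrected comes out close to TRUE_VAR
```

The naive estimator is not randomly wrong; it errs in one predictable direction, which is exactly what makes it a bias rather than mere noise.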
Bacon’s idols and prejudices, some explicit but many only tacit or unconscious, are likewise conjectures and can be held open to critical scrutiny.
I think this view of mine allows us to take a better bird’s-eye view of the problem-situation. Despite his marvellous body of work on many biases, Gigerenzer is prevented from getting to this panoramic philosophical viewpoint because he is handicapped by his commitment to justificationism, the defunct relic of the old Aristotelian manifest theory of truth. Nevertheless, Gigerenzer is a beacon of light in the dark world of the irrationalist thesis. (See Gut Feelings: Short Cuts to Better Decision Making. G. Gigerenzer. 2007.)
What better way is there to make our prejudices and biases vulnerable to correction than to pit them against one another by emulating science, the method of conjecture and refutation?
Fracas of the Fakeries
Science operates most fruitfully in sprouting new knowledge and eliminating error in a social environment that promotes a plurality of views, competing in a ruthless and never-ending battle. Would-be monopolists of information-control who chant the praises of an alleged “consensus of scientists” as a warrant for the suppression of free speech are oblivious to the reality that consensus is the death of science. Science only grows through disagreement. When all scientists are at a loss for any new controversial ideas, or keep them to themselves for fear of losing their jobs, that’s when science is dormant. And if science languishes too long, it will perish.
One qualification here is that scientists generally need some consensus over the acceptance of observational test statements before deducing a refutation of a general theory, but such statements are tentative and can be challenged and refuted. Consensus is not required for general theories as such; rather, the exact opposite. (See section 29 of The Logic of Scientific Discovery, K. Popper.) So there is an asymmetry in the need for consensus between basic test statements and universal statements. Some transitory consensus is needed for basic statements; feverish disagreement is required for theories. A balancing act is going on, for we need new competing theories to spur refutations in crucial experiments, which in turn require at least short-term consensus. This element of judgement in Popper’s account is often missed by superficial commentaries.
Learning From this Ethos and Methodology of Science and Applying it to the Control of Fake News
How can we minimise fake news while respecting free speech and liberty? I propose a battle of the biases, a fracas of the fakeries, that parallels the competition of ideas within science. This would be, I surmise, the least bad solution. We need a pluralistic society, with free speech as its noblest feature. Yes, we need “integrity, unity, and cohesiveness” and honesty in our support for plurality, but not in the sense of a single, exclusive “gold-standard” judge and source of news. This demands a separation of politics and science.
Let’s return to the misleading post I started with. A mendacious conspiracy theorist posts a staged interview with a bogus researcher on YouTube claiming COVID-19 was intentionally released to sell vaccines. Some people believe this immediately and post to others, who believe the story immediately and pass it on. It goes viral. Is this gullibility necessarily stupid and irrational?
Those imbued with the justificationist stance in life will of course retort that they ought first to suspend belief, then collect all (or at least the optimal amount of) relevant information and confirm or refute the reports they encounter — such searches terminating, if necessary, in manifestly true assumptions or sources. But as we have seen, science — our best institution for checking theories — starts with conjectures and never terminates, but continues indefinitely with conjectures, guesses. In science, that’s a matter of putting conjectures down in statements, crystallising them in language outside our heads and psyches, so that they may be tested in a public arena. That is the logic of the methodology, which can be studied independently of our psychology. But I think we can read something back from the insights of this methodology to our psychology, about how to view and manage our beliefs — even our most gullible beliefs.
Two Cheers for Gullibility
Let’s shine a kinder light on gullibility. Having a propensity to believe the first thing that other people say without first checking it is thought to be irrational. However, perhaps this propensity is not only a good heuristic and economic adaptation to everyday life, but also a necessity in eliminating error in the psychological realm. As I elaborate in my book, belief (a subjective state of thinking something true) is involuntary. You don’t decide what you believe, you discover it. Most of us, most of the time, have to adapt our beliefs moment-by-moment to the changing immediate circumstances that confront us. There is little or no time to reflect, and there’s never any time for suspension of belief since it’s not under immediate voluntary control. This involuntariness pertains also to beliefs more remote from sensory input. Raise your left hand for 3 minutes and then put it down, then raise your right hand for 4 minutes and 10 seconds and then put it down. You can do that because it’s voluntary. Now, try genuinely believing that Paris is the capital of France for 3 minutes, then switch to believing that Paris is the capital of Russia for 4 minutes, 10 seconds. Let me know when you’ve achieved this.
What if we have to believe something before we can reject it? The sophisticated might be inclined rather to entertain the report as a possibility (whether one believes it or not) and then test it. Imagine for the sake of argument that some people function more at the level of unreflective belief and aren’t familiar with the philosophical art of hypothetical thinking. Granted, one can understand a statement without believing it, but believing it may serve as a crude understanding and also a substitute for entertaining it as a conjecture. In both kinds of cases, belief and entertained conjecture, one first has to accept or hold in some important sense the position before one can test and reject it. Conjecture precedes refutation. So what’s wrong with doing that immediately, and then catching errors as soon as possible? And, provided these gullible “immediate believers” are nevertheless somewhat self-critical or at least bombarded by contrary and alternative views — comments, tweets, FB posts, even other fake news — as opposed to socially isolated as in a cult or subject to just one source of “gold standard” judgement or ministry of fake news, that belief may be toppled and replaced by a better one. Gullibility is an adaptive heuristic. Imperfect, of course, but nevertheless a workable kluge for the conjectural part of “conjecture and refutation” in the psychological sphere.
Fake news entered the world with the emergence of descriptive language, perhaps hundreds of thousands of years ago. We lack a fool-proof algorithm for disposing of fake news systematically without the danger of inadvertently violating any cherished system of thought used as a “truth-reference.” This holds even for truth-references that seem manifestly true, despite the fact that truth is never manifest. Therefore, fake news will continue to irritate our descendants. But this should not demoralise us. It is no different from the situation of science. Removing error has to be a piecemeal, tentative enterprise. Science progresses by feeding off error in an endless series of deliberately engineered trials and errors in a non-political context of competing theories, free speech and accurate reporting. A ministry of fake news is a fantasy, a tool of oppression, suppression and stagnation, and would unavoidably impair our best means of error-correction.
The wider admiration and emulation of science is a real possibility. That means the encouragement of a more self-critical and slower-to-retweet attitude on substantial issues. People can learn to transform some of their immediate beliefs into entertained conjectures for ruthless scrutiny.
The new media of podcasts and the like, despite the repressive arrogance of Google, Facebook and others in their bumbling de-platforming of some voices, offer some hope of a more self-reflective public. Less censorious platforms are already growing in popularity. There is more engagement with long, critical, open conversations, for example the surprisingly popular three-hour sessions conducted by Joe Rogan and Lex Fridman.
The biggest gain in the control of error would come from separating science from the corrupting influences of politics (e.g. state funding, licensing, etc.) and from the chilling effect of political correctness on open discussion. Let’s keep the Enlightenment alive and kicking. From an evolutionary perspective, it has only just got started. What wonders can we produce in a hundred years, a thousand, or a million?