Post-truth and the reification of intersubjective realities

Oxford Dictionaries have made ‘post-truth’ their international word of the year 2016, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” The word and its usage over the course of the reality-testing events of this year suggest that ‘circumstances’ in the definition refers to the time in human history in which we now find ourselves. That is, the world wasn’t quite post-truth until relatively recently, but now it most certainly is. And while you might reject this suggestion, indulge me a little while I try to weave a couple of thoughts together.

As defined, ‘post-truth’ suggests a rearranging of priorities in the public and political spheres — in the past, objective facts trumped emotions and personal beliefs, but now the latter can reign supreme. Or, put another way, objective facts used to be indisputable; if they could be shown to be false then they were not, in fact, objective facts. But this is tautological, and I don’t want to jump down the rabbit hole of objectivity just yet, so suffice it to say that ‘objective’ in ‘objective fact’ doesn’t narrow down which facts count; rather, it emphasizes the external reality of said ‘fact’, independent of whoever may be perceiving it, or whether anyone perceives it at all.

Objective facts

Humanity has amassed literally uncountable such objective facts, or facts, as I’ll call them, and it is constantly creating new ones. I’m wearing jeans: Fact. I’m sitting on a chair: Fact. I’m typing on a computer: Fact. These are facts that probably no one else on the planet knows to be facts right now, but they are facts by virtue of the fact that were anyone to walk into my office right now, independent of who they are, they would be unable to dispute any of them convincingly. But that doesn’t seem like a very good measure of truth, and it could be that I’m actually wearing underwear in bed, typing on a smartphone, as far as you’re concerned. The truth is that these facts I’ve selected are inconsequential, so let’s move on to something of actual consequence for more people than myself and my employers.

The Earth revolves around the Sun. Here is the ultimate go-to fact when you want a fact that everyone can agree with. Everyone has a stake in the Sun. Everyone has seen it come up and go down with some regularity. And most modern education systems cover this stuff quite early — and that’s important, because it isn’t something intuitive that we would all independently deduce by observing the world. Left to our own naked eyes, seeing it rise and fall, it would be much more intuitive for us to assume the Sun revolves around the Earth. But we all know that that isn’t the case; people used to think that, but Copernicus came along and straightened us all out. So if anyone should get the urge to jump on Twitter and proclaim that the Sun revolves around the Earth, we’ll all know that person’s an idiot because of course the Earth revolves around the Sun. Everyone knows that.

The question now is how many people can explain why one story is a fact, and the other is an ‘objective falsehood’. If you haven’t kept high school physics fresh in your mind, you may be grasping at straws somewhat — you don’t remember the whole lesson, you may have some images in your mind, maybe of an orrery, you know that gravity has something to do with it; but mostly you probably remember being relatively convinced that the Earth revolves around the Sun and not vice versa.

Let’s get back to that idea that a fact is objective insofar as it remains true regardless of who perceives it or whether it is perceived. This is a crucial point in defining objective truth because, as we all know from our naïve perspective on Earth, watching the Sun come up and go down with predictable regularity, it really does appear as though the Sun revolves around the Earth. And the reason it appears that way is that, from our perspective, it does. Copernicus didn’t discover a new truth that invalidated the old — he presented a new model that was simpler and more useful than the older one for explaining the motion of the celestial bodies. The old view was geocentrism, which put the Earth (and, implicitly, man) at the center of the universe, and measured the motion of other celestial bodies as they moved around it. Compare Copernicus’ heliocentric orbital model with the older geocentric one:

The difference is a matter of perspective. That shift of perspective was crucial in fostering the scientific revolution and in challenging the position of importance we had assumed for mankind at the center of the universe.
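To make ‘a matter of perspective’ a little more concrete, here is a minimal sketch (a toy circular orbit in made-up units, nothing like a real ephemeris) of the same motion described from two different origins: in the heliocentric frame the Earth circles the Sun; subtract the Earth’s position from everything and, in the resulting geocentric frame, the Sun appears to circle the Earth.

    import math

    # Toy sketch: the same motion described from two different origins.
    # Made-up units and a perfectly circular orbit, not real astronomical data.
    for month in range(12):
        angle = 2 * math.pi * month / 12
        # Heliocentric frame: the Sun sits at the origin, the Earth moves around it.
        sun = (0.0, 0.0)
        earth = (math.cos(angle), math.sin(angle))
        # Geocentric frame: subtract the Earth's position from every body,
        # so the Earth becomes the origin and the Sun appears to move around it.
        sun_from_earth = (sun[0] - earth[0], sun[1] - earth[1])
        print(month, sun_from_earth)  # traces a circle of radius 1 around the Earth

Neither description changes the distances between the bodies; what changes is which point we choose to hold still, and that choice is exactly the ‘perspective’ in question.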

In any case, the aim of this little discussion was to emphasize three points: 
 1) The importance of ‘perspective’ in a definition of truth.
 2) That knowing something to be true doesn’t require an understanding of why it is true, or the ability to explain, verify, or justify said truth.
 3) That hidden in that innocuous little definition at the start of this and so many other news articles on the international word of the year 2016 are philosophical depths that are somewhat charted but perhaps poorly lit.

Epistemology

We are very much in the domain of philosophy here in our dialogue about truth and objectivity and facts, so let me invite someone else to describe the lay of the land:

Philosophers have recognized and separated two sorts of problem. There are first the problems of how things are, what is a person, and what sort of a world this is. These are the problems of ontology. Second, there are the problems of how we know anything, or more specifically, how we know what sort of a world it is and what sort of creatures we are that can know something (or perhaps nothing) of this matter. These are the problems of epistemology. To these questions, both ontological and epistemological, philosophers try to find true answers.

(Bateson, 1972: 313)

I don’t want to get into the hornets’ nest of whether we actually exist, so let’s leave ontology alone for now and agree that the physical world exists, independently of each of us, and that we each exist as human individuals: Cool?

We’re interested in epistemology: What we can know, or more specifically, the nature of the things that we know to be true. Hopefully, the short detour into Copernican heliocentrism convinced you that we receive and accept knowledge based on the authority from which we receive it. ‘Authority’ in this case can mean both those with power over us and those with power over an intellectual domain — i.e., those who know the most about a certain something. We trust physicists to know about physics, chemists to know about chemistry, and biologists to know about biology. We’ll trust a doctor’s medical advice over a lawyer’s. We’ll call a plumber to fix a blocked drain rather than an electrician. This is a sensible arrangement for the most part, as civilization wouldn’t advance very far if everyone had to prove and understand every little thing for themselves before accepting it. We even have institutions and underpaid specialists whose entire purpose is the conveying of humanity’s amassed knowledge to little humans in training. By the time these little humans have graduated into full-fledged selfhood, they should have soaked up a succinct and relatively general collection of facts and an understanding of some important ideas they’ll need to function productively in our communal world.

Revision

Part of that training includes swift glosses over the things that have happened in the past, including the ideas that people used to hold and live by, but that we have collectively moved on from since. I could pick lots of examples here, as history is practically nothing but a record of facts being revised, but I’ll pick one that is proximate in time if not space: Pluto. Anyone who was in training to be human in the 1990s would have learned that Pluto was the ninth and final planet of our solar system. This was a fact that kindergarteners could tell you before they could multiply, and it had as much practical validity and ‘objective truth’ then as does the statement ‘Mars is the fourth planet from the Sun’ today. In the context of the 1990s, from the perspective of someone living at that time (who wasn’t engaged in the study of planets), ‘Pluto is the ninth planet in our solar system’ was a fact. How and why that changed is an interesting story that can be read elsewhere.

So: Facts can change; truth can change. And so it should — if you thought A but then got better information that discredits A in favor of B, then revise, and don’t hesitate. Does that mean you were wrong to think A was true before? Well, yes and no, and it doesn’t really matter.

Yes: Of course you were wrong; you just had no way of knowing it. Now you do know, so you also know you were wrong, and that’s the end of that. Does it mean you should be less sure of the things you now know to be true? Maybe; not really… Figuring out a good burden of proof for truth is really tricky; that’s why philosophers are still working on the problem, and also why the Wikipedia page for ‘truth’ is so long. Healthy skepticism is usually a good position to take, but that’s just introducing two more words to define and get hung up on.

No: You weren’t wrong, because if you had tried to claim B but had no way of showing or proving it, the rest of the world would have dismissed you as wrong. They might have to apologize to you when evidence emerges that points to B, but until that evidence is there, as long as everyone else is convinced about A, you would be wrong to assert B, at least in a practical sense.

But the real answer is that it doesn’t really matter whether you were ‘objectively’ right or wrong to think A was true back then. Because everyone else thought A was true, and unless you’re ready to change everyone’s mind (by producing said evidence for B), the world of A is where you are destined or doomed to live and conduct yourself. You might as well live as if A were true.

Let me borrow the central metaphor from Andre Gide’s novel The Counterfeiters: Consider two coins, one real and one counterfeit, where the counterfeit is identical to the real coin, to the point that no one can tell them apart. Some person holding them trips and falls, and no one sees where they go. Days later someone finds one of them and pockets it. Hours later someone else finds the other. Now both are in circulation, one still real, and one still counterfeit. But if someone can’t tell that the fake is a fake, they will use it as if it were real, and for all intents and purposes, it is real.

Symbols

Which brings us to an important separation, implicit in the international word of the year 2016, between the ‘objective world’ and the reality we inhabit. The objective world (which, you’ll recall, we agreed exists independently of each of us) is one thing — true, indifferent, and empirical. Reality, on the other hand, is the intangible web of ideas that we cohabit within the physical space of said objective world. If we equate reality with the empirical world, then we have to accept that we live in a reality augmented by humans and their ideas. In that case, augmented reality isn’t some new technological wonder that became mainstream this year — it has been around for longer than recorded history.

Let me invite another underrated systems theorist into the fold:

What is unique in human behavior? The answer is unequivocal. The monopoly which man holds, which profoundly distinguishes him from other beings, is his ability to create a universe of symbols in thought and language. Except in the immediate satisfaction of biological needs, man lives in a world not of things but of symbols. A coin is a symbol for a certain amount of work done, or for the availability of a certain amount of food or other commodities; a document is a symbol of res gestae (things done); a book is a fantastic pile of accumulated symbols; and so forth ad infinitum.

(Bertalanffy, 1981: 1)

Symbols maketh man. I mentioned ‘recorded history’ just now — how else could we record human history but in symbols? Though Bertalanffy goes on to mention some symbols at greater degrees of abstraction, like currency, our primordial symbol system is language. And it is primordial in a biblical sense, in that language comes before anything else of a non-biological nature. The Bible starts with the words ‘In the beginning…’ — words come before God. Without language, things just are.

Language

So it shouldn’t be surprising at this juncture that we ask ‘what is language?’, and ‘what is its role in this present distraction?’ Language is a system of symbols used for communication. There’s lots to be said and learned here, and many rabbit holes appear when we start prodding the ground, so let’s return to that time of our lives when we were in training to become humans. Everyone, when they are born, is thrust into the world and existence and condemned to try to make sense of both. Everything a baby does is part of its learning and sense-making. When it picks something up, throws it, or puts it in its mouth, these are all micro trial-and-error lessons that the brain accumulates and processes. Among the first things we learn is (a certain amount of) control over our own bodies — i.e., basic sensori-motor skills. An early milestone that parents often record is their child’s first steps. And one thing we probably learn before our first steps is our name, or rather the word that those other things always say when they seem to be referring to…‘me’?

Whether we are born with an innate sense of selfhood or learn one through socialization is a philosophical and perhaps metaphysical question. It doesn’t make sense to get into that debate here, but I’m on the side of socialization, so let me just point to some of the evidence that Nick Crossley (1996) cites in defending Merleau-Ponty’s account of the emergence of self:

Whilst there is evidence to suggest that newly born infants enjoy some sense of their self (Coyle 1987), most psychologists see this as a lived or tacit ‘I’ which will only later be joined by the reflective self-consciousness of a ‘me’ (Case 1991; Fein 1991).

(Crossley, 1996: 58)

Contemporary studies indicate that this recognition [of an objective sense of self] is achieved by about 65 per cent of children in their twentieth to twenty-fourth months (Amsterdam 1971).

(Crossley, 1996: 60)

The child’s lack of a sense of differentiation, Dillon maintains, is broken down by a ‘significant other’ who ‘forces the infant to recognize an alien perspective’ (1978: 91). The examples which Dillon provides of how this is done are punishment and withdrawal of approval. […] For the infant to learn to recognize ‘other conscious beings’ or other perspectives and, thereby, recognize the particularity and peculiarity of its own (i.e. to recognise its self as self), Dillon is suggesting, it is necessary for those perspectives to impose themselves upon it.

(Crossley, 1996: 62–63)

Well, this is all rather out of date as far as ‘contemporary’ and ‘recent’ evidence goes, and I haven’t read any of the original sources, nor will I find the time to. I’m just using whatever is at hand right now. In any case, socialization certainly plays some role in the constitution of selfhood, and the acquisition of a language is a crucial part of that. Further, the specific linguistic details of our language may shape how we view ourselves in relation to the world and other people. The Sapir-Whorf hypothesis, often oversimplified to ‘language determines worldview’, is controversial but does enjoy at least some support. I’ll defer to Paul Ehrlich (2000: 146–149):

It seems to me that enough thought involves language that the different surface structures of languages cannot help but affect the way people view the world, just as experience and environment can alter visual perceptions. This weak version of the hypothesis recently received support from the results of a study of color perception among members of the Berinmo Tribe of Papua New Guinea. People in this tribe categorize colors differently from the way Westerners do, and they see them differently. To that extent, at least, language seems to shape worldview.

Anyway, we learn our first word, we gain an early sense of identification with our body, and we develop understandings about other people as equivalent entities. These are filtered through the primary communication system that we learn, our native tongue, and this potentially flavors how we conceive of the world around us. Although we have institutions dedicated to training us to be humans, we typically only start attending these once we already have that first communication system, albeit in an early, unsophisticated form. So although language classes in school are all about rules, these rules really come after the fact. That is, language evolves and develops, and rules are later created to ‘ease’ the acquisition process.

Intersubjectivity

Being that I started this essay with Oxford Dictionaries’ international word of the year 2016, it shouldn’t be controversial to suggest that language evolves and develops through use, and that it exists independently of any of its individual users. Language is, by its very nature, intersubjective. And that is the word beating at the heart of this missive: Intersubjectivity. It sits at an unspecific but critical point somewhere between the objectivity of the physical real world (that we agreed exists) and the subjectivity of each individual in their own head. These ideas are typically posited as two perspectives one can take or assume, with each having dominion over certain areas of our individual and collective lives. But intersubjectivity (which frustratingly isn’t recognized by Microsoft Word and is thus constantly underlined by an aggressive jagged red line before me) offers an important, and I’d argue absolutely essential, middle ground.

To make this point, let’s turn to the age-old question of whether beauty is inherently subjective or objective. Is it in the eye of the beholder, or are certain things objectively more or less beautiful? On the one hand, there is no a priori reason why anyone shouldn’t be right in finding some thing to be beautiful, even if everyone else on the planet feels the opposite. The heart wants what it wants; de gustibus non est disputandum. On the other hand, there are things about which many or most people will be in aesthetic agreement: ugly buildings, beautiful people, or cute baby animals. Yes, there can be and usually are pockets of dissenters with differing tastes, but is that reason enough to dismiss mass aesthetic agreement as entirely circumstantial? One could argue that such mass aesthetic agreement is a case of multiple subjectivities converging over time, perhaps due to cultural or economic forces — fair enough. But this position would, in fact, be nothing other than a version of intersubjectivity.

Intersubjectivity pervades every aspect of our lives, for the simple reason that language is intersubjective, and we experience our lives ‘in’ language. Our thoughts manifest in words in our heads, so even when you’re alone in your room dreaming of a bright tomorrow, you’re still engaged in intersubjectivity by the very fact that your sense of self derives from language, and the substance of your daydreams occurs in language.

Now, at this point we have to admit into this dialogue the fact that we (the ‘we’ of humanity) don’t all speak the same language. We have multiple overlapping languages that evolve and split and fracture across space and time. Which implies that there isn’t just one intersubjectivity but necessarily many intersubjectivities. Still, the subjects of our languages necessarily overlap, because we all inhabit the same real physical world and generally encounter many of the same things — which is why we can translate between languages.

But I would go a little further, of course, and conjecture that ‘reality’, whatever we mean when we say that word, is intersubjective. And to get there I’ll suggest that ‘reality’ is ultimately what Bertalanffy called mankind’s “universe of symbols”. Being that each of us knows ourselves and engages with our world through language, and being that we experience the world in the form of language, our ‘reality’ ultimately boils down to products of language: the things we say, think, write, do, hear, see and make; they all take physical form in symbols — be they patterned air vibrations produced by human speech, lines of computer code that let us interact with our machines, letters we write, or ideas we have. Language is a necessary prerequisite for any of our technological achievements, because abstract ideas only gain form when we invent ways of talking about them. Even if you define reality as that stream of conscious experience that you spend your life moving through, you still need language to separate yourself from the rest of existence. To the degree that reality is measurable, it is only measurable in words.

So. Where are we? Reality takes form in language and language is intersubjective, so we are, right now and always since time immemorial, inhabiting uncountable, overlapping and evolving intersubjective realities. And when it seems, in this post-fact year of some lord 2016, like reality is coming apart at the seams, that’s because it kind-of-literally is. Over history, our intersubjective realities have tended to be relatively well layered, segmented and coordinated. Disparate communities of proto-humans and humans invented intersubjective realities that their progeny were born into and took for ‘objective’ reality. Separated by uncrossable distances, these realities posed little threat to one another. Centralized authorities kept everyone in the community looking the same way, and expelled or eradicated those who threatened the dominant narrative. Occasionally those dissidents were successful in reshaping the narrative through the power of their better (or more compelling) ideas, if not their weapons, which are nothing more than ideas in physical form. As human groups met and these realities came up against one another, typically one would win out and subsume the other, again by either brightness or brawn. And thus the course of human history has seemed for the bulk of us to be a series of political and physical confrontations between conflicting ideas that led us right up to the 20th century. We’ve managed to get this far with an intact sense of there being a single, consistent, ‘objective’ reality.

The Internet

Then along came the internet. While its pioneers dreamed of democratizing the flow of information and unshackling us from media monopolies that pandered to the powers that be, something else happened. Like cows to the slaughter, we’ve followed our desire for pleasure and consumption and convenience right into a trap that basically none of us saw coming. With our belief in a single objective reality firm, we dove into the infinite sea of information, and expected things to be the same when we came up for air. But there is no single objective reality and there never was. There is one single physical real world, and we do all still inhabit it, but that is distinct, as I hope to have argued, from the reality some of us are struggling with or laughing at right now. When we jumped into that sea, we were offered a free choice of intersubjective reality, and everyone chose one, swam into it, and convinced themselves that things were getting better. And the algorithms we keep hearing about, they came next — giving us more of what we want, they sucked us further and further into our own little worlds. Of course there are still dominant narratives, as ever. And business in the real world goes on at unprecedented speed. Those dreams of a democratized media died; old and new power structures emerged to shape the new virtual landscape. But we’re all a little bit more removed from it now, with less patience for one another, and eager to find those pieces of information, those articles, opinions, and explanations — those stories — that validate our personal intersubjective realities.
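As an aside, the narrowing those algorithms produce doesn’t require any grand design. Here is a minimal sketch of a ‘give them more of what they clicked’ loop, with a hypothetical one-dimensional ‘viewpoint’ axis and an invented scoring rule (not any real platform’s ranking code, just an illustration of the feedback):

    import random

    # Toy feedback loop, not any real platform's ranking code.
    # Items sit on a made-up 0..1 'viewpoint' axis; the feed always shows the
    # five items closest to the running average of what the user has clicked.
    random.seed(0)
    items = [i / 100 for i in range(101)]      # 101 viewpoints from 0.0 to 1.0
    clicked = [random.random()]                # one initial click, anywhere at all

    for _ in range(50):
        taste = sum(clicked) / len(clicked)    # the inferred preference so far
        feed = sorted(items, key=lambda x: abs(x - taste))[:5]
        clicked.append(random.choice(feed))    # the user clicks within the feed

    spread = max(clicked[-10:]) - min(clicked[-10:])
    print(f"viewpoint spread of the last 10 clicks: {spread:.3f}")

After a few dozen rounds the spread of recent clicks collapses to a few hundredths of the axis: given nothing but ‘more of the same’, the loop manufactures a little world of its own.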

That’s why, waking up on the 24th of June and learning that Brexit was on, at least half of the world was surprised. And that’s why those same people woke up surprised on the 9th of November. Because the dominant narrative, the loudest and most pervasive of all the intersubjective realities that confront us each day, relied on out-of-date polling methods, and we, used to it as we are, took it at its word; we took the words on the screen for the writing on the wall. We were so used to having conversations with people who thought like us that we didn’t realize just how many people out there are living in a very different reality from ours, a reality in which Brexit and Trump make sense. We may know that their reality is built on lies and objective falsehoods, but it’s as good as real to them.

The world is different now, and it seems difficult to imagine how we could turn this around. But if this year has taught us anything, it should be that anything can happen. That we’re all writing this story that we’re living in, and it’s not enough to get our individual stories right. The most precious thing we have is this planet, and humanity isn’t worth a thing if we fuck that up. The intersubjective narratives of history have won out by either the pen or the sword, and there are way too many people now to try to win this fight by blood. So we’ll need a new meta-narrative that we can all work within, something we can all get behind; maybe something like The Overview Effect.

References

Bateson, Gregory (1972) Steps to an Ecology of Mind, University of Chicago Press.

Bertalanffy, Ludwig von (1981) A Systems View of Man.

Crossley, Nick (1996) Intersubjectivity: The Fabric of Social Becoming, Vol.4, Sage.

Ehrlich, Paul (2000) Human Natures: Genes, Cultures, and the Human Prospect, Island Press.

Gide, Andre (1927) The Counterfeiters, D Bussy (translator), Knopf.

Lakoff, George (1987) Women, Fire, and Dangerous Things, University of Chicago Press.

IMAGE SOURCE: http://i.imgur.com/tBCC7P4.gif