The Truth of It — Part II: How can we recognize truth?

Keith Sonnanburg
11 min read · Mar 28, 2022


Like all of us, I continually decide what makes sense to me and what doesn’t; that understanding informs everything I do. My ways of discerning what’s true seem so straightforward to me that I’ve hardly questioned them. I thought describing my methods would persuade others to accept them as valid. But, the more I’ve tried to nail those methods down, the more I’ve realized how hard it is to identify truths with confidence.

In Part I of this series I explored the nature of “truth,” focusing on empirical and experimental truths since they’re useful for dealing with the material world. Ideally, such truths should correspond to reality, be consistent with other truths, be distinguished from opinions, be viscerally satisfying, be subject to verification, build confidence in what’s known, survive scrutiny by others, and garner support from independent observers. Now, in Part II, I’m asking: how can we recognize truth when we bump into it?

Nobody likes to be duped. We want to feel confident about what we believe. But, since none of us is omniscient or omnipresent, we depend on others to expand our knowledge. I generally trust my direct experiences (though I know even then I can be mistaken). But, I’m wary of cognitive errors and mindful of how my emotions tug at me. Given such limitations, it’s wise to consult others before deciding what seems true. But then, we must ask: whom can we trust?

Knowledge, news, rumors, lies, and gossip have one thing in common: they’re all ideas that people share. But which tidbits are reliable? Many sources vie for our attention: print media, radio, television, film, visual arts, music, theater, internet memes, video game narratives, fashion, chats, and even virtual reality experiences. Spoofs, hoaxes, distortions, exaggerations, omissions, spins, and outright deceptions all come at us, and the messages we receive influence our attention, our appraisals, and our choices.

Decoding Media

I’m most confident when I can test claims, but even experiments can be flawed, and I’d rarely run my own even when feasible. To validate knowledge we need a way to analyze messages and judge their effects on us. At first, the challenge of identifying facts amid the torrents of misinformation bombarding us bewildered me. There are so many variants, caveats, and exceptions to truth claims, befitting their specific contexts. But, when I focused on six key elements, the problem got much simpler. Those are:

1. Messages

2. Creators/producers

3. Motives prompting communications

4. Targeted audiences

5. Techniques of presentation and quality of executions

6. Facts used for assessing messages

In decoding media, it’s crucial to recognize the central role of creators and producers. It’s they who craft messages to reach particular goals. They aim at selected audiences, choose the techniques used to present information, and include or omit facts in support of their interests. Divining their purposes and perspectives helps us choose our responses.

Understanding why and how media content is shaped depends on a skillset known as “media literacy.” That’s mastered by applying “critical thinking.” When John Dewey introduced critical thinking, he meant an “active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends.” (Dewey 1910: 6; 1933: 9) That entails using logic and sound reasoning to evaluate beliefs and purported knowledge by examining evidence. The meanings of symbols and the relationships among a message’s elements are examined in the context of what we know. Critical thinking takes many forms across scientific, mathematical, anthropological, economic, moral, historical, and philosophical inquiry. The desired end is an enhanced ability to solve problems.

Those who explain the basis for their beliefs are more persuasive than those who don’t. They invite us to judge whether their conclusions are justified. But, even those doing their best to show how they know things might take things out of context or inadvertently spread misinformation. Even worse, some intentionally misrepresent reality. So, while I sometimes rely on others’ reports, the uncertainties introduced concern me.

Focusing on creators and producers to analyze what’s true is informed by “theory of mind.” That’s our ability to infer mental states in ourselves and in others. It appears in children as young as 15 months. It forms a foundation for social interactions. Although we can’t read minds, we can guess which private processes may shape others’ behaviors, including how they communicate. Messages contain several features: language, imagery, music, sounds, movements, etc. Any of those can be manipulated (e.g., by using lighting, timing, filters, colors, font choices, framing, cropping, voices, music, editing, enhancements, etc.). Theory of mind evolves by gathering clues, improving our chances of matching communications with their original intentions. By selecting whom or what is presented, sources reveal implicit values and biases. Carefully decoding messages makes it easier to determine what’s true.

Identifying intended audiences provides a context for interpreting messages. For example, a Chinese news agency ran an English article with a Moscow dateline and a headline broadcasting the dangers of substandard equipment found in US nuclear reactors. It emphasized security risks posed by inadequate components and criticized the failures of US oversight. Ironically, the original source for that report was the US Nuclear Regulatory Commission. Although such concerns are serious, the dangers were detected by appropriate authorities. The intended effect seemed to be inciting fear among US readers, perhaps especially among those who distrust their own government. The fact that US agencies uncovered the deficiencies is reassuring. Similar tactics were used by the Russians during the 2020 presidential election.

Deconstructing how media capture our attention is also revealing. Is fanfare used to stir emotions or is the tone restrained? Is there informative content or merely claims and critiques? Are admired role models enticing you to follow them? What content is highlighted, and who or what isn’t mentioned at all? Is accuracy sacrificed to push a biased message? Each choice made by creators/producers reflects their intentions.

Once media strike a nerve, their effects get amplified by online algorithms and social networks. Filter bubbles form as recommendation algorithms match browsing habits, interests, and opinions with messages aimed at specific users. So, what you find in your news feed is different from what others find in theirs. Filter bubbles and echo chambers (networks of people who rely on similar sources) reinforce our tendency toward confirmation bias by directing our attention to items expressing opinions we already embrace. Since they insulate us from perspectives we don’t usually seek, we miss important data. Such technologies and phenomena often go undetected. They don’t wait for permission and they don’t explain what they suppress.

Understanding the purpose of media helps construct theory of mind. Is a production made to entertain, to persuade, to educate, to connect, to alert us about new ideas and events, or just to express an opinion? Whatever the purpose, certain assumptions and beliefs play a role in building the message. Even objectivity-minded news organizations prioritize which stories to cover, decide how they’re presented, and choose what to include or leave out. Whether such choices are intentional or result from unconscious assumptions, the final products affect us.

Most media come from businesses seeking profits. Networks of corporate interests distort and hide truths for their own advantage. Their leaders control the contents and distribution of most of the media that we encounter. They generally aspire to maintain the status quo and amass wealth. Even postings from autonomous individuals follow established rules for distributing content. Investigating the incentives guiding creators improves our guesses about a source’s hopes for influencing us. Consider the decades-long campaign by the tobacco industry against associating its products with health risks, and the fossil fuel industry’s longstanding denial of its known contributions to global warming. Such sources circulate pseudoscience, disingenuously contradicting credible experts.

Other deceptive strategies include using clickbait, sponsored content, native advertising (commercials that imitate legitimate stories), false connections, false attributions, or false context. Those typically display eye-catching, entertaining, sensational, or outrageous headlines, images, and captions to attract viewers. They promise to reveal useful secrets. They present advertising disguised as editorial content. False connections use headlines, visuals, or captions unrelated to the content. False contexts pair genuine content with deceptive information, like an incorrect date or misattributed quote. False attributions use authentic images, video, or quotes attributed to the wrong events or individuals. Such misinformation can fool us, and once fooled, we might direct others to more fake content by clicking on supporting links.

Misinformation can stem from honest mistakes or poor journalism. Creators don’t always know when they’re incorrect. At times content gets published without time to check all the facts. Reputable news sources accept accountability for their stories. Since errors tarnish reputations and can result in litigation, they apologize for inaccuracies and retract mistakes.

Satire and parody are relatively benign forms of misinformation, produced mainly for entertainment with humor, irony, or exaggeration, focusing on current events or celebrities and sometimes offering social commentary. But, opinions or jokes can be misunderstood, get transformed over time, and finally become misinformation passed on from one source to another. Truth gets confused with fiction in the name of fun.

Passions beyond commercial interests and humor move creators to spread messages (e.g., political or religious ideologies, self-aggrandizement, and tribal bonding). Information is manipulated or fabricated to create confusion and gain influence. Emotionally charged language aims to cultivate power. Partisan news, propaganda, and astroturfing spin or alter facts and cherry-pick information to support chosen narratives. Astroturfing is presenting messages that pretend to come from grassroots movements in targeted communities. For example, producers may run ads announcing which causes they support to signal that they share psychographics with those audiences.

Any form of media that presents false material as credible information is fake news. It usually appears on platforms specializing in bogus or sensationalized stories with provocative headlines and generally seeks to outrage and shock (inspiring sharing). The content spreads quickly since many believe and share it without taking time to check facts. More harmful forms of disinformation include fabrications and conspiracy theories. Conspiracy theories appeal because they seem to explain complex realities with simple assertions, letting believers cope with uncertainty and anxiety. Such theories reject experts and their authority, and their claims are unassailable since evidence refuting them is taken as proof of their truth.

Evaluating Media

Before deciding what’s true, I look for outlets that seem reliable to sample alternative perspectives. I take cues from formatting (e.g., whether a book resembles a novel or a work of non-fiction). Some media declare their genre (e.g., a film may label itself as a documentary, historical fiction, or a work based on a true story). But, such self-designations can’t be automatically trusted.

Legitimate sources can be imitated so convincingly that the ruse goes unnoticed. But, often fake news can be detected by judging the piece’s production quality. Fakes may look amateurish, have lots of annoying ads and sponsors, use altered or stolen images, or solicit payments for “exclusive” services. Be skeptical if there are many misspelled words, excessive use of CAPS, dramatic punctuation, lots of padding (“filler”), or poor grammar. Most reputable sources have high proofreading standards. Examine the URLs (links) when directed to other websites. I’ve received spam emails with links like these:

Costco-3XfPgnUSIR@CostcoShopperFeedback3YfPgnUSIR.com and

“Compliance Department US” at dev@inmowa-jp.com.

Notice that the domain names don’t match the organizations they claim to represent. Fake sites also use unconventional domain extensions like “.infonet” or “.offer”.
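
To make the habit concrete, here is a minimal Python sketch of that domain check. It’s my own illustration, not a real spam filter: the brand-to-domain mapping and the “legitimate” address are assumptions made for the example.

```python
# A rough heuristic, not a real spam filter: flag senders whose domain doesn't
# match the brand they claim to represent, or that use an odd extension.
EXPECTED_DOMAINS = {"costco": "costco.com"}   # assumed legitimate domain for the brand
SUSPICIOUS_TLDS = (".infonet", ".offer")      # unconventional extensions noted above

def looks_suspicious(sender: str, claimed_brand: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    expected = EXPECTED_DOMAINS.get(claimed_brand.lower())
    if expected and domain != expected and not domain.endswith("." + expected):
        return True                           # domain doesn't belong to the brand
    return domain.endswith(SUSPICIOUS_TLDS)   # or uses a suspicious extension

print(looks_suspicious("Costco-3XfPgnUSIR@CostcoShopperFeedback3YfPgnUSIR.com", "Costco"))  # True
print(looks_suspicious("memberservices@costco.com", "Costco"))  # False (hypothetical legitimate address)
```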

Today’s sophisticated technology further muddles the truth. For example, “deepfakes” use computer-generated imagery to simulate real people and events in convincing fictional videos. Statistics, graphs, and photos can be edited. Images can be manipulated and quotes manufactured. Possible signs of fake content include warping (straight lines in the background appear wavy), strange shadows, jagged edges, or perfect skin tones. Content may be removed or added to enhance desirable features or exaggerate repugnant ones. Even unadulterated images can be placed in misleading contexts. Big news attracts fakers and Photoshoppers, so it’s prudent to perform reverse searches on images (checking their origins). Google’s reverse image search or tools like TinEye can help identify places or people and detect misleading news.
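
One small, concrete “check the origins” step is to read an image’s embedded metadata. Here is a minimal sketch using the Pillow library; the file name is hypothetical, and since EXIF data can itself be edited or stripped, treat missing or implausible dates and software tags as just one signal among several.

```python
# A minimal sketch: dump an image's EXIF metadata with Pillow (pip install Pillow).
from PIL import Image, ExifTags

def exif_summary(path: str) -> dict:
    """Return the image's EXIF tags keyed by human-readable names."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

info = exif_summary("suspect_photo.jpg")           # hypothetical file
print(info.get("DateTime"), info.get("Software"))  # when, and with what, it was last saved
```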

While assessing others’ messages, it’s also important to monitor our own biases. Be cautious about jumping to conclusions and about accepting those of others. In light of opinionated journalism, polarizing social media, and websites profiting from provocation, consider why particular messages target you. The algorithms that make recommendations by predicting viewing preferences will likely inflate our prejudices.

Orson Welles’ infamous 1938 Halloween broadcast, “The War of the Worlds,” purposely blurred the line between news and fiction. Just as people then could have searched the skies outside their homes for invading aliens, or tuned their radios to competing news services, we can compare seeming fakes with other sources. While none is foolproof, several together build confidence. If content is hard to believe, you probably shouldn’t believe it. Extraordinary claims demand extraordinary evidence.

Comparing stories from distinct sources helps us uncover the truth. If no reputable ones corroborate a story, it’s probably fake. Professional news agencies have editorial guidelines and resources for fact-checking. Consulting a range of sources and perspectives puts us on safer ground when drawing conclusions. It’s important to seek new sources and interact with people holding different views, and to discuss ideas with facts, patience, and respect. Just because you want something to be true doesn’t make it so.

Credible news includes plenty of facts — data, statistics, quotes from experts, and so on. If there are few or none of those, be wary. Follow links to original articles and named sources to check the credibility of cited sources. Seek out reporters close to an incident. To validate what’s reported, consult experts, librarians, and fact-checkers like BBC’s Reality Check, AP’s Fact Check, FactCheck.org, Hoaxy, Media Bias/Fact Check, PolitiFact, Snopes, and the Washington Post’s Fact Checker. Also, pay attention to the language used. Phrases like “we are getting reports,” “we are seeking confirmation,” and “we have learned” suggest vague, possibly unreliable sources.
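
As a rough illustration of that last point, here is a short Python sketch that scans a story for the vague-attribution phrases mentioned above. The phrase list is just a starting point of my own, not an established tool.

```python
# A simple heuristic flagger for weakly sourced language, not a fact-checker.
WEAK_ATTRIBUTIONS = [
    "we are getting reports",
    "we are seeking confirmation",
    "we have learned",
    "sources say",   # an assumed addition in the same spirit
]

def flag_weak_sourcing(text: str) -> list:
    """Return the weak-attribution phrases that appear in the text."""
    lowered = text.lower()
    return [phrase for phrase in WEAK_ATTRIBUTIONS if phrase in lowered]

story = "We are getting reports of an outage, and we have learned that crews are responding."
print(flag_weak_sourcing(story))  # ['we are getting reports', 'we have learned']
```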

Look beyond a story to investigate the source’s mission and contact information. A website’s “About Us” section is revealing (trustworthy sources list background information, policy statements, and email contacts matching their domain, instead of using a Yahoo or Gmail address). Fakes often cite anonymous or unreliable sources, or no sources. Survey the headlines of other stories on the same platform; are they unbelievable, shocking, or inflammatory? Often links or comments posted in response to content are auto-generated by bots or trolls hired to post misleading information. Biased or fake news outlets will even give fake contact information.

Check the author’s bio (fake news often doesn’t list authors) and search their background and reputation, verifying they’re real. What and where have they reported before? Does their expertise grant them credibility? What seems to be their agenda? Do they give the other side of the story?

Most media productions display some kind of branding since producers like to encourage repeat customers. If an outlet presents scant clues to its origins, you may learn more by noticing its associations with others. Investigate links, credits, sponsors, or legal disclaimers and observe how they’re placed. Platforms can act like bona fide outlets by faking credentials. Determine whether referenced sources actually support the stories citing them.

Living with Media

“You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.” (commonly attributed to Abraham Lincoln)

False information can spread quickly, especially if we don’t take the time to investigate. It’s up to each of us to avoid exacerbating the problem. Some readers may already be aware of what’s written here, but how many of us consistently employ these safeguards? It’s sobering to realize that if we can’t testify from direct experience, the best we can do is to be conscientious when deciding what’s likely to be true, and then remain open to further evidence. In this series’ next installment (“The Truth of It — Part III”), I’ll explore the importance of trusting science to establish facts. I believe that understanding science can help us agree on methods for grasping the truth of it.


Keith Sonnanburg

Curiosity drives me. Writing to discover what I think. Lived on both coasts and somewhere in between. Studied psychology, philosophy, lit, and computers.