Nobody Really Cares About Fake News
Lessons I learnt about fake news and design while working on an academic UX project
--
Alright, to begin I want to apologize for the misleading and “click-baity” title. There are plenty of people who care about fake news, including me. Still, in the spirit of the “alternative facts” movement, it felt like an appropriate misnomer.
But more importantly, the exaggerated title helps reinforce one of the most important lessons I have learnt about fake news, which is this: our preconceived notions about an article are firmly established by the time we have read its title.
A recent study conducted by computer scientists at Columbia University concluded that 59% of all links shared on social media are never actually clicked. When I first read this statistic, I naively reassured myself that I represented the other 41%. But after a brief moment of unwanted self-reflection, I realized I was guilty of indulging in this disturbing trend as well. I could recount numerous times when I had “liked” a news article on Facebook without actually digging deeper into its substance. I did this simply because the title reaffirmed my existing beliefs and the article had been published by a source that I deemed credible.
Then I had a second realization that was more daunting.
The rationale I had just expressed was the exact same rationale used by the individuals who are generally accused of spreading fake news. Even though I was not directly contributing to the spread of fake news, I was exhibiting the same motivations and behaviours that have allowed the phenomenon to spread.
This is one of the numerous insights I gained while working on a five-week academic UX project about mitigating fake news. Recently, I have been reflecting on the lessons from this experience for my portfolio, and I soon realized that it was going to be difficult to condense all these insights into a single paragraph. I practically had enough content to write a Medium post.
So that is exactly what I am going to do: write a Medium post!
This post will mark the end of my process documentation for this project, and it will serve as a long-form reflection for my portfolio on both the personal and design-related lessons that I have learnt from this experience.
Defining the Undefined
Independently, the words “fake” and “news” have straightforward meanings. But combine them, and they create a far more nuanced and complicated concept. Unfortunately, the complicated nature of fake news is at times overlooked and oversimplified, predominantly because the term currently lacks a cohesive definition.
The term “fake news” is thrown around so carelessly that it has begun to lose any valuable meaning. It has even reached a point where the vagueness of the term has been exploited by President Trump to, ironically, discredit some of the most revered and credible journalistic institutions.
Originally, in the context of social media, the term “fake news” was used to refer to the most egregious articles that were intentionally misleading and carried sensationalized titles. But because a cohesive definition was never officially established, Donald Trump was able to begin labelling any news source critical of his administration as fake news. This even led to the unprecedented move of temporarily barring The New York Times and CNN, among other news outlets, from attending an unofficial White House briefing.
This real-world example highlights the dangers of ambiguity when defining problems. When it comes to framing a problem, I have learnt that it is important to provide an explicitly clear definition of what the problem is inherently about. There can be no tolerance for vagueness. Doing so not only helps narrow down potential solutions during the discovery phase of the design process, it also helps establish specific definitions for problems that would otherwise remain undefined.
Recognizing the Risks
Fake news is a complicated subject, but the domain it resides in is even more complex. In a Design Sprint, one of the first steps in tackling a problem is identifying the potential risks that could cause a project to fail. With fake news, this step is especially crucial, mainly because the problem is interconnected with the larger discussion about freedom of speech, which includes topics such as net neutrality, First Amendment rights, and censorship. Without considering all these external factors, the solutions that are proposed will likely be more problematic than beneficial.
This leads to another challenge that has to be recognized when developing solutions for fake news — understanding the complicated relationship between social media companies and the responsibility they bear in mitigating this problem.
This complicated relationship is evident in Mark Zuckerberg’s initial responses to fake news. At first, Zuckerberg was steadfast in his claim that Facebook was just a “tech company, not a media company,” and that the notion that fake news on Facebook had “influenced the election in any way is a pretty crazy idea.” After receiving public backlash over these comments, Zuckerberg clarified his stance, stating that “Facebook is a new kind of platform. It’s not a traditional technology company,” and reassured critics that investing resources into mitigating the spread of fake news was a high priority. Regardless of what motivated this shift in opinion, it is clear that Zuckerberg was initially cautious in how he described Facebook as a company, mainly because he recognized the risk associated with implying that an open social media platform should be an “arbiter of truth”.
The challenge Zuckerberg faced is the crux of the problem that many tech companies are currently struggling with: how does a social media platform incorporate mechanisms to identify what is factually incorrect without infringing on a person’s right to freedom of speech?
Interestingly enough, this is not a new problem. Rather, it is a contemporary version of one that was eloquently described by the classical liberal philosopher John Stuart Mill in his influential book, On Liberty (1859):
“First, if any opinion is compelled to silence, that opinion may, for aught we can certainly know, be true. To deny this is to assume our own infallibility.” — John Stuart Mill (On Liberty)
This notion of assuming our own infallibility is precisely the type of risk that designers have to recognize when combating fake news. Without these considerations, the solutions that we create might not only lack value, they may even have dangerous implications. To avoid this, I learnt that it is essential to identify the risks associated with a problem upfront. Doing so gives me the time to reflect on both the practical and moral implications of the design decisions I propose.
Understanding the Misunderstood
“If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias.” — The New Yorker
If user experience design is truly about having a deep understanding of a person’s context, desires, motivations, and beliefs — one would be hard-pressed to find a more formidable problem for designers to pursue than America’s ideological divide. (Disclaimer: I acknowledge that this is the epitome of a wicked problem that design alone cannot solve… more on this later.)
After a grueling election cycle in 2016, it’s easy to understand why resentment exists between the two largest American political parties. Unfortunately, this resentment has extended beyond the confines of Washington D.C. and has bled into the general public. The vitriol we see publicly and online is a reflection of the deep wounds that have been inflicted by unprecedented political division.
So, what does this have to do with fake news? Well, as mentioned before, if user experience design is inherently about empathy and understanding context, then one of the first steps in attempting to mitigate fake news is to recognize that America’s current divisive political state has set the stage for this problem to flourish.
Intentionally misleading news has always existed, but the current divide within America has made the problem exponentially worse. It has even reached a point where fact-checking alone is not a compelling enough tool to change someone’s mind; people are too divided. Therefore, it is imperative to re-evaluate the process people take in validating new ideas. In order to do this, it is important to actually practice empathy, instead of simply talking about it, by engaging with those who hold opinions opposed to our own. In doing so, we may actually catch a glimpse of the underlying behaviours and motivations that fuel our beliefs as individuals.
It is the mark of an educated mind to be able to entertain a thought without accepting it. — Aristotle (Metaphysics)
The New Yorker article, Why Facts Don’t Change Our Minds, expands on this lingering question: “why do we believe what we believe?” In summary, the article argues that “Opinions about everything — including politics — are not made more meritorious or convincing if they are backed by a steady helping of facts. Most often opinions are created and strengthened by affirmation from other people.” So in a sense, the title of this post does hold a bit of truth. Simply acknowledging that a news article is providing false information is at times not enough to change a person’s mind, because there are other cognitive factors at play that influence whether or not an individual accepts an idea to be true. If a certain sentiment reaffirms our pre-existing beliefs, we perform whatever moral acrobatics are needed to justify it, and we find other people to further confirm those biases. (Momentary tangent: the current discussion around climate change is a very interesting case study for confirmation bias and cognitive dissonance.)
So, if our minds are cognitively hard-wired to avoid uncomfortable truths, does that mean we should give up on promoting objectivity through fact-checking? Absolutely not. It simply means we need a more nuanced conversation about how to approach fact-checking — one that recognizes that relying on the deployment of objective facts alone is unfortunately not enough. We need solutions that go beyond just discussing empathy: solutions that recognize that emotional and cognitive factors play a significant role, alongside facts, in convincing people to embrace the notion of objectivity.
Furthermore, by recognizing that facts alone are unlikely to change people’s minds, we can take a step back and figure out how to cultivate a context where fact-checking can be utilized meaningfully. Accomplishing this will require a holistic approach that goes beyond design. No single touchpoint will be able to solve this problem by itself. It will require something much more meaningful and impactful to change the hearts and minds of individuals.
This is where the application of experience design can play a significant role alongside other fields, such as journalism and psychology, in helping mitigate this problem. By analyzing a network of touchpoints that spans disciplines, we can better understand how those collective experiences deeply influence us as people.
Through this improved understanding of why people believe what they believe, we can begin to design experiences that avoid political elitism and condescending rhetoric, in the hope of cultivating a respectful environment where the exchange of ideas, including facts, is not only possible — but encouraged. I recognize that all this sounds overly optimistic, but I wholeheartedly subscribe to the notion that design is only effective if you trust in its potential to create desired outcomes. And at the end of the day, isn’t that what design is supposed to be about — the opportunity to enact meaningful change?