The motivation for this post is my experience with the newly formed wave of pseudo-specialists echoing their opinions from the four corners of the internet. By simply writing this column, I am joining the chorus, but I will try to stay on a topic that I am quite familiar with: learning.
Learning makes our life better
The reason we learn things is simple: we learn to inform our future decisions. We are taught that fire burns, so we don't stick our hand in the fire. We learn that the sun is a star formed mainly of hydrogen and helium, so we don't spend our time praying to it. We learn how totalitarian regimes, slavery and other shameful events in our history started, so we can prevent them from becoming a reality again. We learn to read so we can catch the right bus, and to do math so we can plan when to leave home: if dinner with friends is at 8 pm and it takes 30 minutes to get there, we know to leave by 7:30. Even simple concepts like that make our life better.
What happens when we learn something that is not true? We fail to show up for our dinner parties on time, we burn our hands, we put our faith in false idols, and we become susceptible to slavery or life under dictatorships.
New ideas must be tested
It is not just for the pride of being right: specialists spend so much time studying a topic precisely so they can inform the public about things that will make the life of the community better. It is their job.
To prevent specialists from informing the public of something wrong despite their best efforts, any new idea goes under the scrutiny of other specialists, who will judge, debate, test and sometimes amend the idea before it reaches the public. In academia, we call this “peer review”. It is not by any means perfect: ego, personal agendas, politics and other mundane, or should I say human, problems might get in the way. However, despite the downsides, peer review is still the best way we have to verify whether what someone is talking about is in fact something useful.
There are models and models
Note that I used the word “useful” and not something stronger like “truth” or “right”. Any new idea is a model of the “laws of reality”, the ultimate truth, and George Box, a famous statistician, explained this elegantly:
“All models are wrong, but some are useful” — George Box
Science, at all levels, works as an approximation of reality, and it is heavily biased toward what we can see, hear, measure or otherwise detect. Theoretical models try to explain the observed data and make predictions that must be tested. If the model predicts and explains the data well, the new model (idea) is passed along to peers for verification of the findings. No theory will ever be completely true, but they do well enough for what they are worth.
For example, classical mechanics in physics is a pretty good model. Coming out of high school, we can predict with pretty good accuracy where an object will land based on its initial velocity. This model is very useful in many contexts, including ballistics: when we throw or kick a football, or when we try to hit a target with a catapult.
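That prediction fits in a few lines of code. The sketch below uses the standard textbook formula R = v² sin(2θ)/g for the range of a projectile launched from flat ground, neglecting air resistance (the function name is my own; the physics is the classical-mechanics idealization):

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Horizontal distance (m) traveled by a projectile launched from
    the ground at speed v0 (m/s) and angle angle_deg (degrees),
    ignoring air resistance -- the classical-mechanics idealization."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A ball kicked at 20 m/s at a 45-degree angle lands about 40.8 m away.
print(round(projectile_range(20, 45), 1))
```

The model is “wrong” (real balls feel air drag), yet it is accurate enough to be useful for everyday predictions, which is exactly Box's point.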
Do the laws of nature obey the models of classical mechanics invented by humans? Probably not: all models are wrong. But classical mechanics is pretty good at what it does: some models are useful. Regardless, classical mechanics has been under the scrutiny of scientists and engineers at least since the 1600s, and any new idea on the topic (an amendment to the model or a completely new model) has to explain not only everything the current model explains, but explain it even better. It would go through “peer review”, and the general public would only hear about it once there was a massive body of evidence favoring the new idea over the old one.
The reality of the accessible information
The reality is that there is no “peer review” on social media. When someone proposes a new idea (model) on the internet, the judgment, debate, rare tests and amendments are done by people who have spent little time working on it: the non-specialists.
Lots of guesses come afloat, and the debate over the idea looks more like an argument about sports teams: not objective, logically flawed, based on beliefs rather than facts (and when based on facts, on a convenient subset rather than all of them), and grounded in personal opinion rather than the consensus of communities dedicated to working on the problem. Thus the idea propagates, and if for some reason it becomes popular, it starts to gain the status of “truth” (of being “useful”), despite the lack of rigorous testing and discussion. This is what I call “misinformation”.
More likely than not, these “new ideas” on the internet won't withstand the first level of scrutiny from people who have been thinking about the same problem for decades. An example is the claim that the Earth is flat.
And what is the problem with all that? The problem is that believing misinformation translates into bad decisions in the future, into failure to recognize threats, and into a limited ability to allocate our time effectively.
What do we do?
To avoid misinformation, I check sources, credentials, and whether the claim in question has been triple-checked by other independent specialists. If I am very interested, I check competing hypotheses, arguments in papers, books on the topic, and so on.
As a rule of thumb, be skeptical. Working our way toward understanding an idea is much better than simply accepting it.
Although with caveats, trusting the consensus of specialists decreases our chances of believing something wrong. If we don't want to trust the consensus, we must investigate the subject ourselves. Ask questions, always, and listen to the arguments of specialists against the new model in question. These specialists might not be right, but their challenges, if sound, must be refuted with equally sound evidence.
It is important to stay open-minded to disruptive new ideas, but we must keep our guard up against misinformation, for the sake of our future and our sanity.