5 Tips to Help You Science Better on the Internet
In 2016, a Ph.D. candidate at the University of Wollongong successfully defended her dissertation. The candidate's central thesis was that the Australian Government's vaccination policy was not based on credible evidence; instead, she asserted that the policy was the result of a conspiracy between the World Health Organization and Big Pharma. The candidate, to the horror of Australia's scientific and medical community, was awarded her doctorate. It later became known that none of the examiners had a scientific background.
In recent years, a significant anti-science movement has emerged in popular culture. This has fueled debate over important issues such as climate change, the efficacy and safety of vaccines, and the safety of genetically modified food. The issue has become so pervasive in our culture that it even made the March 2015 cover of National Geographic.
Those who believe that science is a bunch of evil men in white lab coats are unaware of how science is actually conducted. Science is not scary, it is not hellbent on humanity's destruction, and it is certainly not all funded by evil corporations.
I think that scientific literacy is incredibly lacking in today’s population, and it is important that a greater percentage of the population not only understand how the process of science actually works, but also learn how to sniff out bad science.
Here are 5 tips to help those who lack a formal scientific education better understand some basic principles upon which science is built and how to evaluate scientific claims.
1) Understand the principal steps of the scientific method, specifically replication.
The most common mistake made when discussing scientific findings stems from a lack of awareness of how the scientific method actually works.
When a group of researchers performs an experiment, they publish their results in a scientific journal. Doing so makes their findings available to the public, and one of the main purposes of publishing is to allow other scientists to review, critique, replicate, and potentially improve on or modify the work.
Replication is the step I would like to focus on, because an ignorance of that step has led to some very controversial conclusions, specifically that the MMR vaccine for measles causes autism and that GMOs cause cancerous tumours.
The studies behind both claims have in fact been retracted from their respective scientific journals due to, among other things, erroneously reported data and poorly designed methods. These problems came to light in large part through the process of replication.
Because other scientists were unable to produce results similar to those of Dr. Wakefield or Dr. Seralini, questions were raised about the legitimacy of the findings. As a result, the experiments were investigated and ultimately discredited because of their lack of reproducibility.
Unfortunately, the results of one study that supports a preconceived notion in a debate are often treated as irrefutable proof. Andrew Revkin of the New York Times has termed this "single study syndrome", and it describes the attitude of many individuals who are skeptical or mistrusting of science.
2) The results of one study do not “prove” anything; not all journals are created equal.
In scientific vernacular, the word "prove" is a dangerous term, because there are always sources of error, however minuscule they may be. Any quality research paper will use words like "demonstrate" or "suggest". In recent years, scientists from Oxford and Harvard have had papers accepted for publication in open access online scientific journals, demonstrating the suspect publishing standards of many online journals.
Online scientific journals do not operate under the same constraints as traditional print journals. Consider a traditional print journal such as Nature or Science: the editors get hundreds, if not thousands, of submissions for each issue. In order to maintain an excellent publishing record, only the top submissions are reviewed and selected for publication.
This is what is known as "peer review". A panel of researchers who are highly accomplished in their respective fields is responsible for reviewing submissions to determine whether the methods are sound, the data are accurate, and the conclusions are logical. Peer review provides a system of checks and balances that keeps science as honest and open-minded as possible. As a result of these journals' excellent publishing reputations, studies published in top journals are more highly respected and more likely to be cited in future studies.
Most online open access journals operate under a different business model. Since these journals are open access, anyone can access their content, so they need a different source of income. If a group of authors wishes to publish their results in an online open access journal, they can simply pay the journal a publishing fee, and the journal will publish their work; there is no panel of submission reviewers to weed out the lower quality submissions.
The names of many of these open access journals are deceptively similar to those of reputable journals, so how do you know whether a journal's contents are trustworthy? There are two useful metrics for determining the influence and prestige of a scientific journal: the SJR and the h-index, the latter of which can also help determine the influence and prestige of an individual researcher.
The SJR, or SCImago Journal Rank, measures the average number of weighted citations received by documents published in a journal over a given time period. Essentially, the more citations a journal garners, the more influential its contents: pretty straightforward reasoning.
The h-index is a measure of publishing reputation first described by Jorge E. Hirsch, defined as follows: a scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.
In other words, if I am a researcher who has published 15 papers that have each been cited at least 15 times, I have an h-index of 15; if I have published 20 papers with at least 20 citations each, my h-index is 20, and so on.
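Hirsch's definition can be turned into a short computation. Here is a minimal sketch (the citation counts are hypothetical numbers for illustration): sort a researcher's per-paper citation counts in descending order, then find the largest rank h at which the h-th paper still has at least h citations.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the paper at this rank still "supports" an h of `rank`
            h = rank
        else:
            break
    return h

# Hypothetical researcher with 7 papers cited 25, 18, 15, 15, 10, 4, and 1 times:
# the top 5 papers each have at least 5 citations, but there aren't 6 with 6.
print(h_index([25, 18, 15, 15, 10, 4, 1]))  # prints 5
```

Note that the h-index rewards sustained impact rather than one blockbuster paper: a single article with 1,000 citations still contributes an h-index of only 1.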
These two methods should be used in conjunction when evaluating the quality of a journal, as the SJR of many journals can be inflated with popular review articles or techniques. For example, the paper that first described the technique and application of Polymerase Chain Reaction, a very common technique in molecular biology, will be referenced quite often due to the popularity of the technique in DNA sequencing.
3) Beware of circular referencing and intellectual inbreeding.
A related problem, touched on in the previous example, is intellectual inbreeding, an impediment to producing quality research. A good researcher will examine multiple angles of their experiment to find many different layers of evidence for their argument. Breadth of coverage makes for sound reasoning and a more stable base of support. This doesn't mean that more citations automatically make a paper stronger, but a greater variety of evidence sources usually does.
Consider the case of many pseudoscientific websites: while their articles may include a good number of references, these references often loop back to previous articles written by the same author, or they cite the same study multiple times. Many such websites publish multiple articles reporting the results of the same study years apart. Circular referencing is not an adequate form of evidence; it is a deceptive tactic commonly employed by pseudoscientific websites to dress up their articles under a guise of credibility.
4) Just because an author has letters after their name, it does not mean their research is valid.
This point builds upon the concepts of peer review and replication. A growing number of individuals are taking advantage of the public's tendency to trust single sources of authority; after all, we are raised to trust our doctor, our dentist, our veterinarian, our pharmacist, and, if we attend university, our professor. Unfortunately, as discussed previously, one study, or in this case one person, does not necessarily constitute valid evidence. Trusting it anyway is known as an "appeal to authority", a common logical fallacy.
If your doctor tells you to take a regularly prescribed medication for a thoroughly researched ailment, such as an antibiotic like penicillin for an infected cut, you should have no doubt about trusting them. What you should be wary of are methods touted as "new" or "revolutionary". These words are used to dress up unsupported claims dreamed up by individuals with the proper credentials but lacking the proper evidence.
What many of these unscientific claims take advantage of is the general public’s lack of scientific literacy. By sprinkling some authentic-sounding science terms on a product or service that they are selling, anyone with the letters “Dr.” before their name can make a bogus product or service seem quite legitimate.
What much of the public doesn't know is that even someone as educated as a physician may be just as unqualified as you or I to give advice on particular health topics. This may seem shocking, but medical school (including the residency process) is lengthy for a reason: the body is an incredibly complex system, and physicians specialize in particular areas of it for that same reason.
According to her CV, Dr. Amy Meyers is trained as an emergency physician; she doesn't know more about the gut than you or I would after a few hours researching the matter online or reading a basic physiology textbook. She has also been featured in the Huffington Post and on the Dr. Oz show, both of which are prone to promoting pseudoscience and health myths. Unless such physicians are gastroenterologists, they have no business posing as experts on the health of your gut.
5) Doing your research takes a lot longer than using Google for half an hour.
Public distrust of science has certainly been fed by the widespread availability of information online. This is indicative of a larger cultural shift in which experts are doubted more than ever; Tom Nichols' 2017 book The Death of Expertise reviews this problem in great detail. Debate can happen anywhere, and entry no longer requires a well-thought-out submission letter or an invitation. You just log on to Facebook or Twitter and let fly the words of ignorance.
Scientific professionals undergo years of training just to be granted their license to research or practice, and even then their education is never fully complete. There is a contingent of armchair scientists who seem to think that reading a few scientific papers (let’s be honest, abstracts) or articles on a matter is a sufficient replacement for a decade of post-secondary education.
What much of the public has to remember is that scientific research is still very much an artisanal pursuit, and we as the general public should give the collective of scientific professionals the trust they have earned. We should also understand that scientists, like all experts, do make mistakes from time to time, but that shouldn't undermine their credibility. If anything, it should make them more accessible, reinforcing that experts are people just like us; they just happen to know a lot about one thing, and for that they deserve our respect.