by Samantha Wong
figures by Daniel Utter
“Global warming is based on faulty science and manipulated data which is proven by the emails that were leaked.”
Alarming words from the most powerful man in the USA, don’t you think? Unfortunately, a Pew Research report revealed that the American public has increasingly echoed this sort of scientific mistrust over the last 5 years.
Some part of me balks at the idea of people so vehemently distrusting what I love.
Yet as a scientist-in-training, I will readily admit that the scientific process isn’t perfect. However, there is a difference between a reasonable mistrust of the scientific process and an outright rejection of all scientific findings that clash with personal beliefs. Too often, people spurn scientific conclusions because they contradict personal worldviews, perpetuating an unfounded mistrust in science and the spread of scientific misinformation.
The psychology of disbelief
The rejection of scientific evidence is largely a psychological phenomenon and is often selective rather than universal. Consider these scenarios: vaccination critics quote Andrew Wakefield’s infamous study linking autism to vaccines to support their stand, and climate change skeptics cite a 1990 research paper by climatologists Roy Spencer and John Christy arguing for the lack of evidence for global warming. Paradoxically, quoting scientific journals demonstrates that these “anti-science” folks do actually trust scientific evidence, but probably only if it aligns with a mental model of the world they have built.
As it turns out, modern neuroscience has shed some light on this phenomenon. When people are confronted with an idea that contradicts their beliefs, a brain area responsible for suppressing unwanted representations, the dorsolateral prefrontal cortex, becomes activated. This area is also activated when you take the Stroop test (Figure 1). This suppression of discordant information may explain why people reject ideas (even correct ones!) that contradict entrenched beliefs, because it is easier to accept something that aligns with their own worldview than to expend the (unpleasant) extra mental effort to reconcile the conflicting ideas.
This phenomenon may underlie the mistrust some people feel towards science. However, mistrusting scientific evidence on the basis of belief incongruity doesn’t make the science any less true. The real question we should be asking is: how do we know if the “offending” piece of evidence is actually true?
Mistrusting science: a scientist’s perspective
Scientific evidence presented to the public is often only the end result; the experimentation and publishing process is a mysterious business known only to the esoteric scientific community. Yet, what occurs within this black box is crucial to shaping the results that the public receives.
So what happens in the “black box”?
When scientists have enough experimental data, they will submit a manuscript to an academic journal (Figure 2). This manuscript is then assessed by the editorial board and sent out for review (the “peer-review” process). The authors receive reviewers’ comments, refine their manuscript, and resubmit it for publication.
Where experimental analyses can go wrong
Often, papers that get published are the ones predicted to be game-changers in their field. To determine whether findings are ground-breaking, we commonly use something called the p-value. Simply put, a p-value is a measure of how likely it is that you would see a difference at least as large as the one in your data if the “null hypothesis” were true. The null hypothesis is the premise that there is no true difference between two variables or conditions (i.e. treatment with a new medication vs. no treatment). When the p-value falls below a certain pre-set threshold (commonly 0.05), meaning the difference between two conditions is likely too large to be explained by chance alone, then we can reject the null hypothesis. By rejecting the null hypothesis, we are saying, with a certain level of confidence (as dictated by the p-value), that there is a real difference between the variables in question. Thus we can conclude that the effect we observe with the treatment is real and not due to random chance. We call this “statistical significance”, or positive data.
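To make this concrete, here is a minimal sketch of the logic behind a p-value, using made-up symptom scores (the numbers, group sizes, and the permutation-test approach are all illustrative assumptions, not a real study):

```python
import random
import statistics

random.seed(42)

# Hypothetical symptom scores for a treated vs. untreated group
# (invented numbers for illustration only)
treated = [4.1, 3.8, 5.0, 4.5, 3.9, 4.2, 4.8, 4.0]
untreated = [5.2, 5.8, 4.9, 6.1, 5.5, 5.0, 5.7, 5.3]

observed_diff = statistics.mean(untreated) - statistics.mean(treated)

# Permutation test: if the null hypothesis were true (no real treatment
# effect), the group labels would be interchangeable. Shuffle the labels
# many times and count how often chance alone produces a difference at
# least as large as the one we actually observed.
pooled = treated + untreated
n_trials = 10_000
n_extreme = 0
for _ in range(n_trials):
    random.shuffle(pooled)
    fake_diff = statistics.mean(pooled[8:]) - statistics.mean(pooled[:8])
    if abs(fake_diff) >= abs(observed_diff):
        n_extreme += 1

p_value = n_extreme / n_trials
print(f"observed difference: {observed_diff:.2f}, p-value: {p_value:.4f}")
# A p-value below the usual 0.05 threshold lets us reject the null
# hypothesis and call the difference "statistically significant".
```

The permutation test is only one way to get a p-value (formal studies typically use tests like Student's t-test), but the underlying question is the same: how often would randomness alone produce an effect this big?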
Here’s where it gets fuzzy: statistical significance doesn’t always imply real-world significance. For instance, you may find a gene that is significantly associated with schizophrenia, yet if this gene only contributes 0.2% to the disease, it can hardly be deemed biologically significant. Alternatively, because of the sheer number of data points in large datasets, statistically significant relationships might arise between two completely unrelated factors by chance – like the rate of chocolate consumption and the number of Nobel laureates in a country, as this study found!
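A quick simulation shows how testing many factors at once manufactures “significant” results by chance alone. This is a hypothetical sketch: the “outcome” and the 1,000 “factors” below are pure random noise with no real relationships among them.

```python
import random

random.seed(0)


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


# One made-up "outcome" measured across 20 hypothetical countries, and
# 1,000 completely unrelated random "factors". None of the factors has
# any real connection to the outcome.
outcome = [random.gauss(0, 1) for _ in range(20)]
factors = [[random.gauss(0, 1) for _ in range(20)] for _ in range(1000)]

# With 20 data points, a correlation of |r| > 0.444 corresponds roughly
# to p < 0.05. Under the null hypothesis, about 5% of purely random
# factors should cross that line by luck alone.
spurious = sum(1 for f in factors if abs(pearson_r(outcome, f)) > 0.444)
print(f"{spurious} of 1000 unrelated factors look 'significant'")
```

Roughly 1 in 20 of these meaningless factors will clear the significance bar, which is exactly why a single “significant” correlation in a large dataset should be treated with caution.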
Despite these issues, statistical significance is often the golden ticket into journals. The bias against non-statistically significant (‘negative’) data sometimes incentivizes data manipulation and ‘cherry-picking’, where one selectively presents positive results that support a hypothesis, while hiding those that don’t. These practices tend to make science inefficient, as multiple labs may waste time and resources unknowingly repeating experiments that other labs have already tried.
Thankfully, there is a growing awareness of this problem within the scientific community. The World Health Organization recently called for the publication of all clinical trials – even negative results – in recognition of how failed trials can still inform treatment. Nowadays, there are even some journals that publish only negative data. These new movements are promising in addressing the problems arising from an overemphasis on p-values, but it may take some time to abolish the stigma surrounding negative data.
The publishing process
In science, we often joke about having to “publish or perish”. But in reality, it is an uncomfortable truth. The sheer number of submissions that journals receive allows them to influence the type of science that reaches the public, because they get to choose what to publish. There are many high-quality journals, but the more prestigious ones – like Nature, Cell and Science, for example – tend to reach a wider audience. Sometimes, these journals may publish findings that have not been fully elucidated – or are controversial, even – to spur further research on that topic. In the long run, this approach can produce a more cohesive picture of the topic, but it complicates how you evaluate a finding because now you’ll also have to consider why a study was published.
When it comes to quality control, we have the peer review process – where all manuscripts are evaluated by field experts (Figure 2). Typically, peer reviews are single-blind: the reviewers know who the authors are, but not vice versa. This reduces bias by allowing reviewers to critique works of friends, competitors or powerful figures without fear of personal or professional repercussions. However, this process isn’t foolproof. By design, reviewers are chosen based on their work on a related topic, because this likely means that they know the field well enough to judge the accuracy and impact of the new findings. This can lead to problems, though, if a reviewer’s own ideas cloud their judgment of a manuscript’s data quality. Those in support of the authors’ ideas would likely recommend publication, but the reverse can occur if a reviewer supports a competing idea. For this reason, some journals allow authors to request exclusion of competing reviewers, which removes some unfairness to the author but potentially at some expense of rigor.
In sum, it’s complicated. The scientific process is nuanced and complex, and although it generally includes sufficient fail-safes to ensure quality publication, systemic flaws do occasionally allow less robust findings to slip through the cracks.
So why can we still trust science despite its systemic flaws?
Trusting science: a scientist’s perspective
Newton said “If I have seen further, it is only by standing on the shoulders of giants.” This statement underscores the self-critical and self-correcting nature of science, because further discoveries are dependent on the truth of previous research. Scientists often repeat one another’s experiments; hence it is difficult for an erroneous finding to stand up to this kind of scrutiny for long. Cases of scientific fraud – like data fabrication or cheating on peer reviews – are disgraceful, but the fact that they are discovered by other scientists is reassuring: it means that even if a fraudulent study makes it through peer review, the larger scientific community is rigorous enough to discern fallacious data.
It’s easy to be confused by the constant slew of new findings, some of which may contradict previous studies. It is therefore more important than ever to recognize where one’s mistrust of science is coming from – especially if it stems from a personal cognitive bias or from today’s politically super-charged environment – because this awareness, together with an understanding of weaknesses in the scientific process, will help readers wisely discern when and why they mistrust science.
Sam Wong is a first year PhD student in the Biological and Biomedical Sciences Programme at Harvard Medical School. She studies fat metabolism in cancer, but is also interested in science education and the psychological aspects of social issues in science.