by Lee McIntyre
As we saw in the last chapter, the scientific attitude is sometimes betrayed even by scientists. Yet a proportionally greater threat may arise from those who are outside science: those who either willfully or unwittingly misunderstand the process of science, who are prepared to deny scientific results that do not jibe with their ideological beliefs, who are only pretending to do science in order to further their pet theories, who cynically exploit others’ ignorance while they profit, or who fool themselves by being overly gullible. It is important to realize, however, that these errors can be committed both by those who are lying to others (who are doing false science or are rejecting science falsely) and by those who are being lied to (who have not bothered to educate themselves in the skills necessary to form well-warranted beliefs). Whether conscious of their betrayal of good scientific principles or not, in an age where climate change deniers routinely use Facebook and Twitter to spread their benighted ideology, and intelligent design theorists have a website to share their cherry-picked doubts about evolution, we are all responsible for any distance between our reliance on the fruits of science and our sometimes woefully misinformed conception of how scientific beliefs are formed.
The two most pernicious forms of cheating on scientific principles by those who are outside science are denialism and pseudoscience. Although both will be discussed at greater length later in this chapter, let me now define denialism as the refusal to believe in well-warranted scientific theories even when the evidence is overwhelming.1 The most common reason for this is when a scientific theory conflicts with someone’s ideological beliefs (for instance, that climate change is a hoax cooked up by liberals), so they refuse to look at any distasteful evidence. Pseudoscience, by contrast, is when someone seeks the mantle of science to promote a fringe theory about an empirical matter (such as intelligent design), but refuses to change their beliefs even in the face of refutatory evidence or methodological criticism by those who do not already believe in their theory. As we will see, it is difficult to draw a clear line between these practices because they so often overlap in their tactics, but both are united in their repudiation of the scientific attitude.
Everyone has a stake in the justification of science. If our beliefs are being manipulated by those who are seeking to deceive us—especially given that virtually all of us are prewired with cognitive biases that can lead to a slippery slope of gullibility and self-deception—the consequences for scientific credibility are enormous. We may feel justified in believing what we want to believe about empirical matters (falsely judging that if scientists are still investigating there must be a lack of consensus), but if we do this then who do we have to blame but ourselves if the planet is nearly uninhabitable in fifty years? Of course, this is to oversimplify an enormously complex set of psychological circumstances, for there are many shades of awareness, bias, intentionality, and motivation, all of which bear on the nature of belief. As Robert Trivers masterfully demonstrates in his previously cited book The Folly of Fools, the line between deception and self-deception may be thin. Just as scientific researchers may sometimes engage in pathological science due to their own delusions, those who engage in denialism or pseudoscience may believe that they are actually living up to the highest standards of the scientific attitude.
But they are not.2 And neither are those who uncritically accept these dogmas, never bothering to peek over the wall of willful ignorance at the results of good science. In this chapter, I will explore the mistakes of both the liars and those who are overly credulous. For as I’ve said, in a day and age where scientific results are at our fingertips, we all bear some responsibility for the warrant behind our empirical beliefs. And, for science, it is a problem either way. Whether someone has lit the fire of denialism about climate change or is merely stopping by to warm their hands, it is still repudiation of a core value of science.3
In the last chapter, we explored what happens when a scientific researcher cheats on the scientific attitude. In this chapter, we will consider what happens when those who are not doing science—whatever their motive—peddle their convictions to the larger community and cast doubt on the credibility of well-warranted scientific beliefs.
Ideology and Willful Ignorance
Scientists presumably have a commitment to the scientific attitude, which will influence how they formulate and change their beliefs based on empirical evidence. But what about everyone else? Fortunately, many people have respect for science. Even if they do not do science themselves, it is fair to say that most people have respect for the idea that scientific beliefs are especially credible and useful because of the way they have been vetted.4 For others, their primary allegiance is to some sort of ideology. Their beliefs on empirical subjects seem based on fit not with the evidence but rather with their political, religious, or other ideological convictions. When these conflict—when the conclusions of science tread on some sacred topic on which people think that they already know the answer (e.g., whether prayer speeds up healing, whether ESP is possible)—this can result in rejection of the scientific attitude.
There has always been a human tendency to believe what we want to believe. Superstition and willful ignorance are not new to the human condition. What is new in recent years is the extent to which people can find a ready supply of “evidence” to support their conspiracy-theory-based, pseudoscientific, denialist, or other outright irrational beliefs in a community of like-minded people on the Internet. The effect that group support can have in hardening one’s (false) convictions has been well known to social psychology for over sixty years.5 In these days of 24/7 partisan cable “news” coverage, not to mention Facebook groups, chat rooms, and personal news feeds, it is increasingly possible for those who wish to do so to live in an “information silo,” where they are seldom confronted by inconvenient facts that conflict with their favored beliefs. In this era of “fake news” it is possible for people not only to avoid views that conflict with their own, but almost to live in an alternative reality, where their preferred views are positively reinforced and opposing views are undermined. Thus political and religious ideologies—even where they tread on empirical matters—are increasingly “fact free” and reflect a stubborn desire to shape reality to fit them.
To say that this is a dangerous development for science would be an understatement. In fact, I think it is so dangerous that I wrote an entire book—Respecting Truth: Willful Ignorance in the Internet Age—on the topic of how these forces have conspired to create an increasing lack of respect for the concept of truth in recent years.6 I will not repeat those arguments here, but I would like to trace out their implications for the debate about the distinctiveness of science.
One important topic here is the role of group consensus. We have already seen that in science, consensus is reached only after rigorous vetting and comparison with the evidence. Community scrutiny plays a key role in rooting out individual error. In science, we look to groups not for reinforcement of our preexisting beliefs, but for criticism. With ideological commitments, however, one often finds little appetite for this, and people instead turn to groups for agreement.7 Yet this feeds directly into the problem of “confirmation bias” (which we’ve seen is one of the most virulent forms of cognitive bias, where we seek out evidence that confirms our beliefs rather than contradicts them). If one wants to find support for a falsehood, it is easier than ever to do so. Thus, in contrast to the way that scientists use groups as a check against error, charlatans use them to reinforce prejudice.
Sagan’s Matrix
In his influential book The Demon-Haunted World: Science as a Candle in the Dark,8 Carl Sagan makes the observation that science can be set apart from pseudoscience and other chicanery by two simple principles: openness and skepticism. As Sagan puts it:
The heart of science is an essential balance between two seemingly contradictory attitudes—an openness to new ideas, no matter how bizarre or counter-intuitive, and the most ruthless skeptical scrutiny of all ideas, old and new.9
By “new ideas,” Sagan means that scientists must not be closed to the power of challenges to their old way of thinking. If scientists are required to base their beliefs on evidence, then they must be open to the possibility that new evidence may change their minds. But, as he goes on to observe, one must not be so open to new ideas that there is no filter. Scientists cannot be gullible and must recognize that “the vast majority of ideas are simply wrong.”10 Thus these two principles must be embraced simultaneously even while they are in tension. With “experiment as the arbiter,” a good scientist is both open and skeptical. Through the critical process, we can sort the wheat from the chaff. As Sagan concludes, “some ideas really are better than others.”11
Some may complain that this account is too simple, and doubtless it is, but I think it captures an essential idea behind the success of science. Yet perhaps the best measure of the depth of Sagan’s insight is to examine its implication for those areas of inquiry that are not open or not skeptical. Let us now dig a little deeper into denialism and pseudoscience. Although he does not discuss denialism per se, Sagan discusses pseudoscience at length, allowing for an intriguing comparison. What is the difference between denialism and pseudoscience?
Sagan says that pseudoscientists are gullible, and I think that most scientists would be hard pressed to disagree.12 If one goes in for crystal healing, astrology, levitation, ESP, dowsing, telekinesis, palmistry, faith healing, and the like,13 one will find little support from most scientists. Yet virtually all of these belief systems make some pretense of scientific credibility through seeking evidential support. What is the problem? It is not that they are not “open to new ideas,” but that in some ways they are “too open.”14 One should not believe something without applying a consistent standard of evidence. Cherry picking a few favorable facts and ignoring others is not good scientific practice. Here Sagan cites favorably the work of CSICOP (Committee for the Scientific Investigation of Claims of the Paranormal—now the Committee for Skeptical Inquiry), the professional skeptical society that investigates “extraordinary” beliefs. If science really is open, such claims deserve a hearing.15 But the problem is that in virtually every case in which real skeptics have investigated such extraordinary claims, the evidence has not held up.16 They are revealed as pseudoscientific not because they are new or fantastical, but because they are believed without sufficient evidence.
This way of looking at pseudoscience allows for a fascinating contrast with denialism. Although, as noted, Sagan does not make this comparison, one wonders whether he might approve of the idea that the problem with denialism is not that it is not skeptical enough, but that it is insufficiently open to new ideas.17 When you’re closed to new ideas—most especially to any evidence that might challenge your ideological beliefs—you are not being scientific. As Sagan writes, “If you’re only skeptical, then no new ideas make it through to you. You never learn anything.”18 Although one desires a much more developed sense of scientific skepticism than Sagan offers here (which I will soon provide), his notion does at least suggest what may be wrong with denialism. The scientific attitude demands that evidence counts because it might change our thinking. But for denialists, no evidence ever seems sufficient for them to change their minds. Through embracing the scientific attitude, science has a mechanism for recovering from its mistakes; denialism does not.
So are pseudoscience and denialism more similar or different? I would argue that they have some similarities (and that their demographics surely overlap) but that it is enlightening to pursue the question of their purported differences. Later in this chapter, I will explore these notions in more depth, but for now, as a starting point, let’s offer a 2 × 2 matrix of what might be regarded as a Sagan-esque first cut on the issue.19
              Skeptical      Gullible
  Open        Science        Pseudoscience
  Closed      Denialism      Conspiracy Theories
Notice that in the fourth box I offer the possibility of conspiracy theories. These seem both closed and gullible. How is that possible? Consider the example of someone who argues that NASA faked the Moon landing. Is this a closed theory? It would seem so. No evidence provided by Moon rocks, videotape, or any other source is going to be enough to convince disbelievers. This, it should be noted, is not true skepticism but only a kind of selective willingness to consider evidence that fits with one’s hypothesis. Evidence counts, but only relative to one’s prior beliefs. What about the claim that Moon-landing deniers are gullible? Here the “skeptical” standard does not apply at all. Anyone who thinks that the US government is capable of covering up something as enormous as a faked Moon landing must either be extremely gullible or have a faith in governmental competence that belies all recent experience. Here the problem is that one’s beliefs are not subject to sufficient scrutiny. If an idea fits one’s preconceived notions, it is left unexamined.
With conspiracy theories, we thus find an odd mixture of closure and gullibility: complete acceptance of any idea that is consistent with one’s ideology alongside complete rejection of any idea that is not. Conspiracy theories routinely provide insufficient evidence to survive any good scientist’s criticism, yet no refutatory evidence whatsoever seems enough to convince the conspiracy theorist to give up their pet theory. This is charlatanism of the highest order—in some ways, the very opposite of science.
It is always fun to try to work out such hard and fast distinctions as we see in this matrix. I would argue, however, that there is something wrong—or at least incomplete—about it. In practice, denialists are not quite so skeptical and pseudoscientists are not quite so open. Both seem guided by a type of ideological rigidity that eschews true openness or skepticism, and instead seems to have much more in common with conspiracy theories. Although Sagan’s insight is provocative, and can be used as a stalking horse, the real problem with both denialism and pseudoscience is their lack of the scientific attitude.
Denialists Are Not Really Skeptics
Denialists are perhaps the toughest type of charlatans to deal with because so many of them indulge in the fantasy that they are actually embracing the highest standards of scientific rigor, even while they repudiate scientific standards of evidence. On topics like anthropogenic climate change, whether HIV causes AIDS, or whether vaccines cause autism,20 most denialists really don’t have any other science to offer; they just don’t like the science we’ve got.21 They will believe what they want to believe and wait for the evidence to catch up to them. Like their brethren “Birthers” (who do not accept Barack Obama’s birth certificate) or “Truthers” (who think that George W. Bush was a co-conspirator on 9/11), they will look for any excuse to show that their ill-warranted beliefs actually fit the facts better than the more obvious (and likely) rational consensus. While they may not actually care about empirical evidence in a conventional sense (in that no evidence could convince them to give up their beliefs), they nonetheless seem eager to use any existing evidence—no matter how flimsy—to back up their preferred belief.22 But this is all based on a radical misunderstanding or misuse of the role of warrant in scientific belief. As we know, scientific belief does not require proof or certainty, but it had better be able to survive a challenge from refuting evidence and the critical scrutiny of one’s peers. But that is just the problem. Denialist hypotheses seem based on intuition, not fact. If a belief is not based on empirical evidence, how can we convince someone to modify it based on empirical evidence? It is almost as if denialists are making faith-based assertions.
Unsurprisingly, most denialists do not see themselves as denialists and bristle at the name; they prefer to call themselves “skeptics” and see themselves as upholding the highest standards of science, which they feel have been compromised by those who are ready too soon to reach a scientific conclusion before all of the evidence is in. Climate change is not “settled science,” they will tell you. Liberal climate scientists around the world are hyping the data and refusing to consider alternative hypotheses, because they want to create more work for themselves or get more grant money. Denialists customarily claim that the best available evidence is fraudulent or has been tainted by those who are trying to cover something up. This is what makes it so frustrating to deal with denialists. They do not see themselves as ideologues, but as doubters who will not be bamboozled by the poor scientific reasoning of others, when in fact they are the ones who are succumbing to highly improbable conspiracy theories about why the available evidence is insufficient and their own beliefs are warranted despite lack of empirical support. This is why they feel justified in their adamant refusal to change their beliefs. After all, isn’t that what good skeptics are supposed to do? Actually, no.
Skepticism plays an important role in science. When one hears the word “skepticism” one might immediately think of the philosopher’s claim that one cannot know anything; that knowledge requires certainty and that, where certainty is lacking, all belief should be withheld. Call this philosophical skepticism. When one is concerned with nonempirical beliefs—such as in Descartes’s Meditations, where he is concerned with both sensory and rational belief—we could have a nice discussion over whether fallibilism is an appropriate epistemological response to the wider quest for certainty. But, as far as science is concerned, we need not take it this far, for here we are concerned with the value of doubt in obtaining warrant for empirical beliefs.
Are scientists skeptics? I believe that most are, not in the sense that they believe knowledge to be impossible, but in that they must rely on doubt as a crucible to test their own beliefs before they have even been compared to the data. Call this scientific skepticism.23 The ability to critique one’s own work, so that it can be fixed in advance of showing it to anyone else, is an important tool of science. As we have seen, when a scientist offers a theory to the world one thing is certain: it will not be treated gently. Scientists are not usually out to gather only the data that support their theory, because no one else will do that. As Popper stated, the best way to learn whether a theory is any good is to subject it to as much critical scrutiny as possible to see if it fails.