
The Scientific Attitude


by Lee McIntyre


  12. See figure 1.1. For a thorough discussion of this problem, see chapter 1. See also Tom Nickles, “The Problem of Demarcation: History and Future,” 101–120, and James Ladyman, “Toward a Demarcation of Science from Pseudoscience,” 45–59, both in Philosophy of Pseudoscience, ed. M. Pigliucci and M. Boudry (Chicago: University of Chicago Press, 2013).

  13. Galileo famously said, “the intention of the Holy Ghost is to teach us how one goes to heaven, not how heaven goes.” A Letter to the Grand Duchess Christina of Tuscany (Florence, 1615).

  14. I suppose one might here try to draw a firmer line within the category of nonscience, between what I have called “unscience” and “pseudoscience.” Although this would not amount to a larger criterion of demarcation between science and nonscience (or between science and pseudoscience), perhaps it would be useful to say that within the category of nonscience there are those fields that do not purport to care about empirical evidence (literature, art), and those that do (astrology, creationism) … even if one could then argue that, as a matter of practice, the latter do not actually care. Even this modest step, however, might prove problematic, for it raises the temptation to (1) try to use this as leverage to change the subject in the larger demarcation debate or (2) go down the rabbit hole of looking for all of the necessary and sufficient conditions for some new demarcation debate within the category of nonscience. But if we have already had trouble with how “caring about evidence” might serve as a sufficient condition for demarcating science from nonscience, how difficult might it be to build another account based on purporting to care about evidence? Again, I think that the proper focus for the demarcation debate is between science and nonscience.

  15. And, arguably, in evolutionary biology. See Michael Ruse, “Evolution: From Pseudoscience to Popular Science, from Popular Science to Professional Science,” in Philosophy of Pseudoscience, 225–244.

  16. See the excellent treatment of this subject in Frank Cioffi’s essay “Pseudoscience: The Case of Freud’s Sexual Etiology of the Neuroses,” in Philosophy of Pseudoscience, 321–340.

  17. I am indebted to Rik Peels for this example.

  18. Although a case could be made that my brother has the scientific attitude, most philosophers would still want to say that he was not doing science. Sidney Morgenbesser once addressed this by saying that there may be a difference between something’s being scientific and its being science. Boudry also raises this pragmatic issue in his essay “Plus Ultra: Why Science Does Not Have Limits,” Science Unlimited (Chicago: University of Chicago Press, 2017).

  19. Is plumbing a science? Boudry (in his essay “Plus Ultra”) says it is all right to believe this—that there need be no epistemological limit to science. In his own essay “In Defense of Demarcation Projects,” Pigliucci disagrees, but his reason for this is that scientists have a “fairly clearly defined role.” Here I wholeheartedly agree with Boudry. What seems important is the approach that one takes in seeking “everyday knowledge,” not some sociological fact about plumbers.

  20. The only way for a demarcationist to avoid the “looking for one’s keys” problem is either to refuse to embrace a sufficiency standard in the first place (in which case they might fail to be a demarcationist) or come up with more necessary criteria. But once we start down this road there seems no end to it, which is another reason why I prefer to avoid offering the scientific attitude as a sufficiency standard.

  21. It is of course also missing from those disciplines like math and logic, which do not care about empirical evidence.

  22. Émile Durkheim, The Rules of Sociological Method (Paris, 1895; author’s preface to the second edition).

  23. One of the most intriguing proposals I have read recently is made by Tom Nickles (“The Problem of Demarcation,” in Philosophy of Pseudoscience, 116–117), who critically examines “fertility” as a possible criterion for demarcation. Maybe what is wrong with creationism, for instance, is not that it is false, pretending, or doesn’t have the scientific attitude, but that it is just not very interested in giving us future scientific puzzles to solve.

  24. Perhaps Popper’s original instinct was right and the necessity condition is all one needs for a criterion of demarcation.

  25. Hansson, “Defining Pseudoscience and Science,” in Philosophy of Pseudoscience, 61.

  26. See again my discussion of Laudan’s meta-argument (chapter 1, note 27) in which he claims that in order to solve the problem of demarcation one would have to find the necessary and sufficient conditions for science.

  27. See Pigliucci, “The Demarcation Problem,” 21. But does he do this because he is once again equivocating between whether the target should be pseudoscience or nonscience?

  28. This appears in Science Unlimited, ed. M. Boudry and M. Pigliucci (Chicago: University of Chicago Press, 2017). In this volume, Boudry and Pigliucci say that they are now concerned with a different demarcation dispute, this time more along the lines of what Boudry earlier called the territorial problem. In their earlier book, Pigliucci and Boudry were concerned with keeping pseudoscience from infecting science. Now they seem concerned with the problem of scientism: whether other areas of inquiry need to be protected from science.

  29. Pigliucci, Science Unlimited, 197.

  30. Kitcher says much the same thing about pseudoscience. “Pseudoscience is just what [pseudoscientists] do.” Quoted in Boudry, “Loki’s Wager,” 91.

  31. McIntyre, Respecting Truth: Willful Ignorance in the Internet Age (New York: Routledge, 2015), 107–109.

  32. Pigliucci acknowledges as much in his essay “The Demarcation Problem,” where he seems to agree with Boudry that the appropriate task for demarcation is to limn the difference between science and pseudoscience, not science and nonscience.

  33. See here the discussion in chapter 8 on the question of whether it is the content of a theory, or the behavior of the people who advance it, that makes the difference in science.

  34. As we have seen with the examples of Galileo and Semmelweis, the scientific community is sometimes woefully irrational.

  35. Pigliucci, Science Unlimited, 197.

  36. For more on the interesting question of sorting out the family relations of different areas of inquiry when compared to science, see Tom Nickles, “Problem of Demarcation,” and James Ladyman, “Toward a Demarcation of Science from Pseudoscience,” 45–59, in Philosophy of Pseudoscience.

  5    Practical Ways in Which Scientists Embrace the Scientific Attitude

  For science to work, it has to depend on more than just the honesty of its individual practitioners. Though outright fraud is rare, there are many ways in which scientists can cheat, lie, fudge, make mistakes, or otherwise fall victim to the sorts of cognitive biases we all share that—if left unchallenged—could undermine scientific credibility.

  Fortunately, there are protections against this, for science is not just an individual quest for knowledge but a group activity in which widely accepted community standards are used to evaluate scientific claims. Science is conducted in a public forum, and one of its most distinctive features is that there is an ideal set of rules, which is agreed on in advance, to root out error and bias. Thus the scientific attitude is instantiated not just in the hearts and minds of individual scientists, but in the community of scientists as a whole.

  It is of course a cliché to say that science is different today than it was two hundred years ago—that scientists these days are much more likely to work in teams, to collaborate, and to seek out the opinion of their peers as they are formulating their theories. This in and of itself is taken by some to mark off a crucial difference with pseudoscience.1 There is a world of difference between seeking validation from those who already agree with you and “bouncing an idea” off a professional colleague who is expected to critique it.2 But beyond this, one also expects these days that any theory will receive scrutiny from the scientific community writ large—beyond one’s collaborators or colleagues—who will have a hand in evaluating and critiquing that theory before it can be shared more widely.

  The practices of science—careful statistical method, peer review before publication, and data sharing and replication—are well known and will be discussed later in this chapter. First, however, it is important to identify and explore the types of errors that the scientific attitude is meant to guard against. At the individual level at least, problems can result from several possible sources: intentional errors, lazy or sloppy procedure, or unintentional errors that may result from unconscious cognitive bias.

  Three Sources of Scientific Error

  The first and most egregious type of error that can occur in science is intentional. One thinks here of the rare but troubling instances of scientific fraud. In chapter 7, I will examine a few of the most well-known recent cases of scientific fraud—Marc Hauser’s work on animal cognition and Andrew Wakefield’s work on vaccines—but for now it is important to point out that the term “fraud” is sometimes used to cover a multitude of sins. Presenting results that are unreproducible is not necessarily indicative of fraud, though it may be the first sign that something is amiss. Data fabrication or lying about evidence is more solidly in the wheelhouse of someone who is seeking to mislead. For all the transparency of scientific standards, however, it is sometimes difficult to tell whether any given example should be classified as fraud or merely sloppy procedure. Though there is no bright line between intentional deception and willful ignorance, the good news is that the standards of science are high enough that it is rare for a confabulated theory—whatever the source of error—to make it through to publication.3 The instantiation of the scientific attitude at the group level is a robust (though not perfect) check against intentional deception.

  The second type of error that occurs in science is the result of sloppiness or laziness—though sometimes even this can be motivated by ideological or psychological factors, at either a conscious or an unconscious level. One wants one’s own theory to be true. The rewards of publication are great. Sometimes there is career pressure to favor a particular result or just to find something that is worth reporting. Again, this can cover a number of sins:

  (1)  Cherry picking data (choosing the results that are most likely to make one’s case)

  (2)  Curve fitting (manipulating a set of variables until they fit a desired curve)

  (3)  Keeping an experiment open until the desired result is found

  (4)  Excluding data that don’t fit (the flip side of cherry picking)

  (5)  Using a small data set

  (6)  P-hacking (sifting through a mountain of data until one finds some statistically significant correlation, whether it makes sense or not)4

  Each of these techniques is a recognizable offense against good statistical method, but it would be rash to claim that all of them constitute fraud. This does not mean that criticism of these techniques should not be severe, yet one must always consider the question of what might have motivated a scientist in any particular instance. Willful ignorance seems worse than carelessness, but the difference between an intentional and unintentional error may be slim.
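  To make practices (1) and (6) concrete, here is a minimal sketch—not drawn from the text, and purely illustrative—of how sifting through noise produces publishable-looking correlations. It assumes Python with numpy and scipy available; all variable names are hypothetical.

```python
# A toy illustration (not from the text) of cherry picking / p-hacking:
# one hundred predictors that are pure noise are each tested against a
# noise outcome, and only the "significant" hits are reported. With this
# many tests, several spurious p < .05 correlations appear by chance alone.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_subjects, n_predictors = 50, 100

outcome = rng.normal(size=n_subjects)
for i in range(n_predictors):
    predictor = rng.normal(size=n_subjects)      # unrelated noise
    r, p = pearsonr(predictor, outcome)
    if p < 0.05:                                 # report only what "worked"
        print(f"predictor {i}: r = {r:+.2f}, p = {p:.3f}  (spurious)")
```

  Roughly five of the hundred predictors will clear the .05 bar even though nothing real is being measured, which is exactly why reporting only the hits is an offense against good statistical method.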

  In his work on self-deception, evolutionary biologist Robert Trivers has investigated the sorts of excuses that scientists make—to themselves and others—for doing questionable work.5 In an intriguing follow-up to his book, Trivers dives into the question of why so many psychologists fail to share data, in contravention of the mandate of the APA-sponsored journals in which they publish.6 The mere fact that these scientists refuse to share their data is in and of itself shocking. The research that Trivers cites reports that 67 percent of the psychologists who were asked to share their data failed to do so.7 The intriguing next step came when it was hypothesized that failure to share data might correlate with a higher rate of statistical error in the published papers. When the results were analyzed, it was found not only that there was a higher error rate in those papers where authors had withheld their data sets—when compared to those who had agreed to share them—but that 96 percent of the errors were in the scientists’ favor! (It is important to note here that no allegation of fraud was being made against the authors; indeed, without the original data sets how could one possibly check this? Instead the checks were for statistical errors in the reported studies themselves, where Wicherts et al. merely reran the numbers.)

  Trivers reports that in another study investigators found further problems in psychological research, such as spurious correlations and running an experiment until a significant result was reached.8 Here the researchers’ attention focused on the issue of “degrees of freedom” in collecting and analyzing data, consonant with problems (3), (4), and (5) above. “Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?” As the researchers note, “[it] is rare, and sometimes impractical, for researchers to make all these decisions beforehand.”9 But this can lead to the “self-serving interpretation of ambiguity” in the evidence. Simmons et al. demonstrate just this by intentionally adjusting the “degrees of freedom” in two parallel studies to “prove” an obviously false result (that listening to a particular song can change the listener’s birth date).10
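  A rough sense of how much this particular degree of freedom matters can be had from a small simulation—again not from the text, and assuming Python with numpy and scipy. Two groups are drawn from the same distribution, so any “significant” difference is a false positive; yet peeking at the data after every batch of subjects and stopping at p < .05 pushes the false-positive rate far above the nominal 5 percent.

```python
# A toy simulation (not from the text) of problem (3): keeping an experiment
# open until the desired result is found. Both groups come from the same
# normal distribution, so every "finding" is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def run_study(peek, max_batches=20, batch_size=10, alpha=0.05):
    a, b = [], []
    for _ in range(max_batches):
        a.extend(rng.normal(size=batch_size))
        b.extend(rng.normal(size=batch_size))
        if peek and ttest_ind(a, b).pvalue < alpha:
            return True                    # stop early, declare a "finding"
    return ttest_ind(a, b).pvalue < alpha  # fixed-sample test at the end

trials = 1000
print("optional stopping:", sum(run_study(True) for _ in range(trials)) / trials)
print("fixed sample size:", sum(run_study(False) for _ in range(trials)) / trials)
```

  The fixed-sample condition stays near the advertised 5 percent, while the optional-stopping condition “finds” effects several times as often—without anyone ever fabricating a data point.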

  I suppose the attitude one might take here is that one is shocked—shocked—to find gambling in Casablanca. Others might contend that psychology is not really a science. But, as Trivers points out, such findings at least invite the question of whether such dubious statistical and methodological techniques can be found in other sciences as well. If psychologists—who study human nature for a living—are not immune to the siren song of self-serving data interpretation, why should others be?

  A third type of error in scientific reasoning occurs as a result of the kind of unintentional cognitive biases we all share as human beings, to which scientists are not immune. Preference for one’s own theory is perhaps the easiest to understand. But there are literally hundreds of others—representativeness bias, anchoring bias, heuristic bias, and the list goes on—that have been identified and explained in Daniel Kahneman’s brilliant book Thinking, Fast and Slow, and in the work of other researchers in the rapidly growing field of behavioral economics.11 We will explore the link between such cognitive bias and the paroxysm of science denialism in recent years in chapter 8. That such bias might also have a foothold among those who actually believe in science is, of course, troubling.

  One might expect that these sorts of biases would be washed out by scientists who are professionally trained to spot errors in empirical reasoning. Add to this the fact that no scientist would wish to be publicly embarrassed by making an error that could be caught by others, and one imagines that the incentive would be great to be objective about one’s work and to test one’s theories before offering them for public scrutiny. The truth, however, is that it is sometimes difficult for individuals to recognize these sorts of shortcomings in their own reasoning. The cognitive pathways for bias are wired into all of us, PhD or not.

  Perhaps the most virulent example of the destructive power of cognitive bias is confirmation bias. This occurs when we are biased in favor of finding evidence that confirms what we already believe and discounting evidence that does not. One would think that this kind of mistake would be unlikely for scientists to make, since it goes directly against the scientific attitude, where we are supposed to be searching for evidence that can force us to change our beliefs rather than ratify them. Yet again, no matter how much individual scientists might try to inoculate themselves against this type of error, it is sometimes not found until peer review or even post-publication.12 Thus we see that the scientific attitude is at its most powerful when embraced by the entire scientific community. The scientific attitude is not merely a matter of conscience or reasoning ability among individual scientists, but the cornerstone of those practices that make up science as an institution. As such, it is useful as a guard against all manner of errors, whatever their source.

  Critical Communities and the Wisdom of Crowds

  Ideally, the scientific attitude would be so well ensconced at the individual level that scientists could mitigate all possible sources of error in their own work. Surely some try to do this, and it is a credit to scientific research that it is one of the few areas of human endeavor where the practitioners are motivated to find and correct their own mistakes by comparing them to actual evidence. But it is too much to think that this happens in every instance. For one thing, this would work only if all errors were unintentional and the researcher was motivated (and able) to find them. But in cases of fraud or willful ignorance, we would be foolish to think that scientists can and will correct all of their own mistakes. For these, membership in a larger community is crucial.

  Recent research in behavioral economics has shown that groups are often better than individuals at finding errors in reasoning. Most of these experiments have been done on the subject of unconscious cognitive bias, but it goes without saying that groups also would be better motivated to find errors created by conscious bias than the people who had perpetrated them. Some of the best experimental work on this subject has been done with logic puzzles. One of the most important results is the Wason selection task.13 In this experiment, subjects were shown four cards lying flat on a table with the instruction that—although they could not touch the cards—each had a number on one side and a letter of the alphabet on the other. Suppose the cards read 4, E, 7, and K. Subjects were then given a rule such as “If there is a vowel on one side of the card, then there is an even number on the other.” Their task was to determine which (and only which) cards they would need to turn over in order to test the rule.
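  The logic of the task can be worked out mechanically. The following sketch, not from the text and purely illustrative, enumerates the hidden faces each card could have and flags a card only if some possible hidden face would falsify the rule:

```python
# A minimal sketch (not from the text) of the Wason selection task described
# above. Visible faces: 4, E, 7, K. Rule: "If there is a vowel on one side,
# then there is an even number on the other." A card must be turned over only
# if some hidden face could make it violate the rule (a vowel paired with an
# odd number).
VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_turn(visible, possible_hidden_faces):
    """A card needs checking iff some hidden face would falsify the rule."""
    for hidden in possible_hidden_faces:
        sides = (visible, hidden)
        if any(is_vowel(s) for s in sides) and any(is_odd_number(s) for s in sides):
            return True
    return False

letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
numbers = [str(n) for n in range(10)]

cards = {"4": letters, "E": numbers, "7": letters, "K": numbers}
print([face for face, hidden in cards.items() if must_turn(face, hidden)])
# -> ['E', '7']
```

  Only the E and the 7 can reveal a violation of the rule; the 4 and the K cannot, whatever is on their hidden sides.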

 
