Why Trust Science?


by Naomi Oreskes


  The inclination of some religious believers to distrust scientific findings is neither new nor unstudied. Scholars have amply described and attempted to account for religiously motivated dissent from scientific theories of evolution from Darwin to Dawkins. But rejection of scientific claims is not restricted to matters of theological concern; people reject scientific conclusions for a host of reasons. Clearly, the establishment of scientific claims qua science does not entail the acceptance of those claims by people outside the scientific community. On the contrary, a “post-truth” world is one in which the fundamental assumptions of scientific inquiry—including its capacity to yield objective, trustworthy knowledge—have been called into question.

  Some scholars, most notably Bruno Latour and Sheila Jasanoff, have argued that scientific knowledge is co-produced by scientists and society, in which case truthiness might be viewed as a normal state of affairs.3 A co-produced claim, in their view, is one on which both scientists and society have converged, and it is this convergence—rather than empirical reality or even empirical support—that grants stability to the claim. Until this scientific and social convergence occurs, disputation is inevitable, and not just about values but also about facts. As an empirical matter, this is clearly so. But the concept of co-production begs the question of what it means for a claim to be scientific and whether factual claims should fairly be understood as distinct from other types of claims. It also begs the question of whether we are justified in rejecting (or at least suspending judgement on) a claim that scientists consider settled when other members of society have demurred. The theory of co-production begs the question of whether scientific claims made by scientific experts merit trust.4

  Latour has argued that scientific claims are performances about the natural world, and that scientists have been successful at “performing the world we live in.”5 By this he (presumably) means that scientists have achieved substantial social authority and are broadly accepted as our leading societal experts on “matters of fact.”6 (They perform and we applaud.) He also suggests (presumably ruefully) that natural scientists are “better equipped at performing the world we live in than [social scientists] have been at deconstructing it.”7 But he may be overestimating the success (performative or otherwise) of the natural sciences, given the large numbers of Americans who doubt many important claims of contemporary science (I restrict myself to the United States here, but similar claims could be made about other countries, such as the HIV-AIDS link in parts of Africa).

  If we define success in terms of cultural authority, the success of science is clearly not only incomplete but at the moment looking rather shaky. Large numbers of our fellow citizens—including the current president and vice president of the United States—doubt and in some cases actively challenge scientific conclusions about vaccines, evolution, climate change, and even the harms of tobacco. These challenges cannot be dismissed as “scientific illiteracy.” Studies show that in the United States, among Democrats and independent voters, higher levels of education are correlated with higher levels of acceptance of scientific claims, but among Republicans the opposite is true: The more educated Republicans are, the more likely they are to doubt or reject scientific claims about anthropogenic climate change. This indicates not a lack of knowledge but the effects of ideological motivation, interpreted self-interest, and the power of competing beliefs.8

  And, as we saw in chapter 1, there is a deeper problem, one that transcends our particular political moment and varying cultural conditions. Even if we accept contemporary scientific claims as true or likely to be true, history demonstrates that the process of transformative interrogation will sometimes lead to the overturning of well-established claims. William James argued more than a century ago that experience has a “way of boiling over, and making us correct our present formulas.”9 He astutely pointed out that what we label as “ ‘absolutely’ true, meaning what no further experience will ever alter, is that ideal vanishing point toward which we imagine that all our temporary truths will someday converge … Meanwhile we live today by what truth we can get today, and be ready tomorrow to call it falsehood.”10 This was also Karl Popper’s point when he argued for the provisional character of all scientific knowledge.

  The overturning of claims is not arbitrary; it is related to experience and observation. But why should we accept any contemporary claim if we know that it may in the future be overturned? One might point out that incomplete and even inaccurate knowledge may still be useful and reliable for certain purposes: the Ptolemaic system of astronomy was used to make accurate predictions of eclipses, and airplanes were flying before aeronautical engineers had an accurate theory of lift.11 That scientific knowledge may be partial or incomplete—or that old theories get replaced by new ones—is not ipso facto a refutation of science in general. On the contrary, it may be read as proof of the progress of science, particularly when in hindsight we can look back on the older theories and understand how and why they worked. (Newtonian mechanics still works when the objects under consideration are not moving very quickly.) But if our knowledge is overturned wholesale—if it is deemed in hindsight to have been wholly incorrect—that calls into question whether we can trust current scientific knowledge when we need to make decisions.12

  Climate skeptics sometimes raise this point. In public lectures on climate science, I have been asked: “Scientists are always getting it wrong, so why should we believe them about climate change?” The “it” that scientists are allegedly getting wrong is rarely specified, and when I ask my interlocutor what he has in mind, usually there is no specific answer. When there is, most often it is the changing and seemingly contradictory recommendations of nutritionists. There are many reasons why nutritional information in recent years has been a moving target, and why nutrition seems to be a dismal science. These include the role of the mass media in publicizing novel but unconfirmed findings; the misuse of statistics by ill-trained scientists; the problems of small sample size and the difficulty of undertaking a controlled study of people’s eating habits (see Krosnick, this volume); and the influence of the food industry in funding distracting research on the relative harms of sugar and fat.13 (Elsewhere I have written on the potential adverse effects of industry funding of science when the desired outcomes are clear and biasing.14) But even if nutritional science is atypical, or even if it is typical but the sources of confusion in it can be identified and addressed, the skeptical challenge is epistemologically legitimate. If scientists sometimes get things wrong—and of course they do—then how do we know they are not wrong now? Can we trust the current state of knowledge?

  In this chapter, I set aside the issues of corruption, media misrepresentation, and inadequate statistical training to look at a problem that I think is more vexing, and certainly more challenging epistemically. It is the problem of science gone awry of its own accord. There are numerous examples in the history of science of scientists coming to conclusions that were later overturned, and many of those episodes have nothing to do with religious commitments, overt political pressures, or commercial corruption.15 This has been the central question guiding much of my research career: How are we to evaluate the truth claims of science when we know that these claims may in the future be overturned?

  Elsewhere I have called this problem the instability of scientific truth.16 In the 1980s, philosopher Larry Laudan called it the pessimistic meta-induction of the history of science.17 He observed (as have many others) that the history of science offers many examples of scientific “truths” that were later viewed as misconceptions. Conversely, ideas rejected in the past have sometimes been rescued from their epistemological dustbins, brushed off, polished up, and accepted into the halls of respectable science. The retrieval of continental drift theory and its incorporation into plate tectonics—the topic of my first book—is a case in point.18 As I wrote in 1999 when discussing that retrieval: “History is littered with the discarded beliefs of yesterday and the present is populated by epistemic resurrections.” Given the perishability of past scientific knowledge, how are we to evaluate the aspirations of contemporary scientific claims to legitimacy and even permanence?19 For even if some truths of science prove to be permanent, we have no way of knowing which ones those will be. We simply do not know which of our current truths will stay and which will go.20 How, therefore, can we warrant relying on current knowledge to make decisions, particularly when the issues at stake are socially or politically sensitive, economically consequential, or deeply personal?21

  In this chapter, I consider some examples in which scientists clearly went astray. The examples are drawn either from my own prior research and that of my students, or from historical examples that I have come to know well through three decades of teaching. Can we learn from these examples? Do they have anything in common? Might they help us answer the question of ex ante trust, by helping us to recognize cases where it may be appropriate to be skeptical, to reserve judgment, or to ask with good reason for more research?

  I do not claim that these examples are representative, only that they are interesting and informative. They all come from the late nineteenth century onwards, because in my experience many scientists discount anything older on the grounds that we are smarter now, have better tools, or subject our claims to more comprehensive peer review.22 Of course, no two historical cases are the same. Each of the examples I will present is complex, with more than one possible interpretation of how and why scientists took the positions they did. These cases do not define a “set.” But they do have one crucial element in common: each of them includes red flags that were evident at the time.

  Example 1: The Limited Energy Theory

  In 1873, Edward H. Clarke (1820–77), an American physician and Harvard Medical School professor, argued against the higher education of women on the grounds that it would adversely affect their fertility.23 Specifically, he argued that the demands of higher education would cause their ovaries and uteri to shrink. In the words of Victorian scholars Elaine and English Showalter, “Higher education,” Clarke believed, “was destroying the reproductive functions of American women by overworking them at a critical time in their physiological development.”24

  Clarke presented his conclusion as a hypothetico-deductive consequence of the theory of thermodynamics, specifically the first law: conservation of energy. Developed in the 1850s particularly by Rudolf Clausius, the first law of thermodynamics states that energy can be transformed or transferred but it cannot be created or destroyed. Therefore, the total amount of energy available in any closed system is constant. It stood to reason, Clarke argued, that activities that directed energy toward one organ or physiological system, such as the brain or nervous system, necessarily diverted it from another, such as the uterus or endocrine system. Clarke labeled his concept “The Limited Energy Theory.”25
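  Clarke’s inference can be set out schematically. The organ labels below are illustrative, not Clarke’s own notation; the step that does the real work is not the first law itself but the unstated auxiliary assumption that the body is a closed system with a fixed energy budget:

```latex
% First law of thermodynamics: for a closed system, internal energy
% changes only through heat absorbed (Q) and work done (W):
\Delta U = Q - W

% Clarke's auxiliary assumption: the body is such a closed system,
% so its total energy is a fixed sum over its organs:
E_{\text{total}} = E_{\text{brain}} + E_{\text{uterus}} + \cdots = \text{const.}

% Hence, on his reading, a zero-sum trade-off between organs:
\Delta E_{\text{brain}} > 0 \;\Longrightarrow\; \Delta E_{\text{uterus}} < 0
```

  Even granting the first law, the conclusion does not follow, because a living body is not a closed system: energy enters continuously as food and leaves as heat and work, so there is no fixed budget to be divided. The flaw lies in the auxiliary assumption, which, as noted below, Clarke never tested or even articulated.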

  Scientists were inspired to consider the implications of thermodynamics in diverse domains, and Clarke’s title might suggest he was applying energy conservation to a range of biological or medical questions.26 But not so. For Clarke, the problem of limited energy was specifically female, i.e., female capacity. In his 1873 book, Sex in Education; or, a Fair Chance for Girls, Clarke applied the first law to argue that the body contained a finite amount of energy and therefore “energy consumed by one organ would be necessarily taken away from another.”27 But his was not a general theory of biology, it was a specific theory of reproduction. Reproduction, he (and others) believed, was unique, an “extraordinary task” requiring a “rapid expenditure of force.”28 The key claim, then, was that energy spent on studies would damage women’s reproductive capacities. “A girl cannot spend more than four, or in occasional instances, five hours of force daily upon her studies” without risking damage, and once every four weeks she should have a complete rest from studies of any kind.29 One might suppose that, on this theory, too much time or effort spent on any activity, including perhaps housework or child-rearing, might similarly affect women’s fertility, but Dr. Clarke did not pursue that question. His concern was the potential effects of strenuous higher education.

  In 1873, thermodynamics was a relatively new science, and Clarke presented his work as an exciting application of this important development. His book was widely read: Sex in Education enjoyed nineteen editions; over twelve thousand copies were printed in the three decades after its release. Historians have credited it with playing a significant role in undermining public support for educational and professional opportunities for women at that time; one contemporary commentator predicted that the book would “nip co-education in the bud.”30

  Clarke’s argument was primarily aimed at co-education—that women could not withstand the rigors of a system of higher education designed for men—but it was also used against rigorous intellectual training for women of any sort, particularly that being conceptualized at the women’s colleges that were being founded around that time, such as Smith (founded in 1871), Wellesley (1875), Radcliffe (1879), and Bryn Mawr (1885). Higher education for women was problematic, Clarke and his followers insisted, unless it was specifically designed to take account of women’s “limited energy.”31 M. Carey Thomas, the first dean and second president of Bryn Mawr College, recalled that in the early years of the college, “we did not know when we began whether women’s health could stand the strain of education.” Early advocates of higher education for women were “haunted,” she reflected, “by the clanging chains of that gloomy little specter, Dr. Edward H. Clarke’s Sex in Education.”32

  Clarke’s theory was also linked to emerging eugenic arguments (of which we will shortly say more). Like many elite white men in the late nineteenth and early twentieth centuries, Clarke feared the combination of women abandoning domestic responsibilities and the declining birth rate among native-born white women would be disastrous to the existing social order. He spoke for many when he fearfully predicted that “the race will be propagated from its inferior classes,” and exhorted readers to “secure the survival and propagation of the fittest” by keeping women home, uneducated and child-rearing.33 Perhaps for this reason his work was heralded by many male medical colleagues, who often shared these fears. One of these was Dr. Oliver Wendell Holmes, dean of the Harvard Medical School (and father of the future Supreme Court justice, who later defended the legality of eugenic sterilization in the infamous case of Buck v. Bell).34 Holmes publicly expressed his “hearty concurrence with the views of Doctor Clarke.”35

  Clarke offered seven cases of young women who pursued traditionally male educational or work environments and experienced a variety of disorders, from menstrual pain and headaches to mental illness. His prescription to these women—and therefore to women in general—was to refrain from mental and physical effort, particularly during and after menstruation. Clarke did not attempt to measure or quantify the energy transfer among the body’s organs, nor did he theorize the mechanism by which energy was selectively distributed to some parts of the body rather than others.36 Rather, he asserted that his conclusion was a “deductive consequence from general scientific principles [i.e., the first law] using auxiliary assumptions.” In this sense, his approach was similar to others at that time, such as social Darwinists, who also attempted to apply theories developed in the biological domain to problems in social worlds.

  In hindsight it does not take much effort to identify the ways in which Clarke embedded prevailing gender prejudice and racial anxiety into his theory. But that risks historical anachronism. If our concern is how to identify problematic science, not in hindsight, but in our own time, then we must ask the question: Did anyone at the time object? The answer is yes. Feminists in the late nineteenth century found Clarke’s agenda transparent and his non-empirical methodology ripe for attack. His leading critic within the medical community was Dr. Mary Putnam Jacobi, a professor of medicine at Columbia and the author of over a hundred medical papers.

  Jacobi signposted the gender politics inside Clarke’s theory, writing that the popularity of his work could be attributed to “many interests besides those of scientific truth. The public cares little about science, except insofar as its conclusions can be made to intervene in behalf of some moral, religious or social controversy.”37 She also identified its empirical inadequacy, based as it was on only seven women. As we saw in chapter 1, drawing deductive consequences from theory is part of accepted scientific methodology, but only part: deductive consequences have to be tested by reference to empirical evidence. And Clarke, Jacobi noted, didn’t have much.

  In 1877 she published a study of her own, The Question of Rest for Women during Menstruation, in which she sampled 268 women “who ranged in health, and education and professional status.” (She also allowed the women to self-report their status, in contrast to Clarke who used his own interpretations of their symptoms.) Jacobi presented her data in a series of thirty-four tables examining the relationship between multiple variables, such as rest, exercise, and education.38 She found 59% of women reported no suffering or only slight or occasional suffering from menstruation. Physiologically, she noted that there was “nothing in the nature of menstruation to imply the necessity, or even the desirability, of rest,” particularly when the women’s diets were normal. She supported this conclusion with a thorough literature review on menstruation and nutrition, as well as laboratory experiments on nutrition and the menstrual cycle.39 Her research earned Harvard’s Boylston Medical Prize. But it had little effect on Clarke or his male medical colleagues. In 1907 Dr. G. Stanley Hall wrote in his widely read work Adolescence, “it is, to say the very least, not yet proven that higher education of women is not injurious to their health.”40 Clarke’s theory was viewed as sufficiently established as to place the burden of proof on those who claimed that higher education for women was fine.41

 
