What Is Science?
So what is “science”? Before I lay out my own definition, let’s see how the word is construed by other people. To many it represents simply the activities of professional scientists: the person on television in the lab coat who, peddling the latest antiwrinkle cream, touts it with the words “Science says . . .” To others, it’s the knowledge produced by scientists: the facts taught in classes on chemistry, biology, physics, geology, and so on. Those facts segue into technology, or the practical applications of scientific knowledge—the development of antibiotics, computers, lasers, and so on.
But scientific knowledge is often transitory: some (but not all) of what we find is eventually made obsolete, or even falsified, by new findings. That is not a weakness but a strength, for our best understanding of phenomena will alter with changes in our way of thinking, our tools for looking at nature, and what we find in nature itself. Any “knowledge” incapable of being revised with advances in data and human thinking does not deserve the name of knowledge. In my lifetime, the continents were thought to be static, but now we know they move—at the same rate our fingernails grow. The universe was also thought by many to be static, having eternally been in its present form, until 1929, when Edwin Hubble showed that it was expanding, and later, in 1964, when scientists discovered the background radiation that was the sign of a Big Bang. And even in 1949, the year I was born and only a few years before Watson and Crick discovered the structure of DNA, many people still thought that the genetic material was a protein.
What is “known” may sometimes change, so science isn’t really a fixed body of knowledge. What remains is what I really see as “science,” which is simply a method for understanding how the universe (matter, our bodies and behavior, the cosmos, and so on) actually works. Science is a set of tools, refined over hundreds of years, for getting answers about nature. It is the set of methods we cite when we’re asked “How do you know that?” after making claims such as “Birds evolved from dinosaurs” or “The genetic material is not a protein, but a nucleic acid.”
My view of science as a toolkit is what Michael Shermer meant when he defined science as a collection of methods that produce “a testable body of knowledge open to rejection or confirmation.” That’s as good a definition of science as any, but the best rationale for using those methods came from the renowned and colorful physicist Richard Feynman:
The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that.
Like everyone, scientists can suffer from confirmation bias, our tendency to pay attention to data that confirm our a priori beliefs and wishes, and to ignore data we don’t like. But, like all rational people, we must admit the truth of what Voltaire noted in 1763: “The interest I have in believing in something is not a proof that the something exists.” The doubt and criticality of science are there precisely for the reason Feynman emphasized: to prevent us from believing what we’d like to be true. The part of Feynman’s quote about fooling yourself is important, because his view of science is precisely the opposite of how religion finds truth. (Feynman was an atheist, and I can’t help but suspect that he was thinking of religion when he wrote that.) As we’ll see, religion is heavily laden with the kind of confirmation bias that makes people see their own faiths as true and all others as false. In other words, religion is replete with features to help people fool themselves.
But I’m getting ahead of myself. When I characterize science as a way to find truth, what I mean is “truth about the universe”—the kind of truth that is defined by the Oxford English Dictionary as “Conformity with fact; agreement with reality; accuracy, correctness, verity (of statement or thought).” And if you look up “fact,” you’ll find that it’s defined as “something that has really occurred or is actually the case; something certainly known to be of this character; hence, a particular truth known by actual observation or authentic testimony, as opposed to what is merely inferred, or to a conjecture or fiction; a datum of experience, as distinguished from the conclusions that may be based upon it.”
In other words, truth is simply what is: what exists in reality and can be verified by rational and independent observers. It is true that DNA is a double helix, that the continents move, and that the Earth revolves around the Sun. It is not true, at least in the dictionary sense, that somebody had a revelation from God. The scientific claims can be corroborated by anyone with the right tools, while a revelation, though perhaps reflecting someone’s real perception, says nothing about reality, for unless that revelation has empirical content, it cannot be corroborated. In this book I will avoid the murky waters of epistemology by simply using the words “truth” and “fact” interchangeably. These notions blend into the concept of “knowledge,” defined as “the apprehension of fact or truth with the mind; clear and certain perception of fact or truth; the state or condition of knowing fact or truth.”
As I noted above, widespread agreement by scientists about what is true does not guarantee that that truth will never change. Scientific truth is never absolute, but provisional: there is no bell that rings when you’re doing science to let you know that you’ve finally reached the absolute and unchangeable truth and need go no further. Absolute and unalterable truth is for mathematics and logic, not empirically based science. As the philosopher Walter Kaufmann explained, “What distinguishes knowledge is not certainty but evidence.”
And that evidence can change. It’s easy to find cases of accepted scientific “truths” that were later shown to be false. I’ve mentioned a few above, and there are many more. Early cases in the history of science are geocentrism (the Earth as the center of the cosmos) and the Greek concept of the “four humors”: that both personality and disease resulted from the balance of four bodily fluids (black bile, yellow bile, phlegm, and blood). A famous modern case is the demonstration of “N rays,” a form of radiation described in 1903, observed by many people, and then found to be bogus, a result of confirmation bias. Atoms were once considered indivisible particles of matter. There’s even one case of a Nobel Prize awarded for a bogus discovery, that of the Spiroptera carcinoma, a parasitic nematode worm that supposedly caused cancer. Its discovery earned Johannes Fibiger the Nobel Prize in Physiology or Medicine in 1926. Soon thereafter, researchers showed that this result was wrong: the worm was simply an irritant that, like many other factors, induced tumors in already damaged cells. But Fibiger’s prize stands, for his discovery seemed true at the time.
The overturning of some scientific truths has often served as ammunition for religious critics who indict the field for its inconstancy. Science can be wrong! But that mischaracterizes any attempt to understand truth, both religious and scientific. Scientific tools and ways of thinking change: how can our understanding of nature not change as well? And, of course, the criticism of inconstancy can be turned right back on religion. There is simply no way that any faith can prove beyond question that its claims are true while those of other faiths are false.
It is a common saying among scientists that we can prove theories wrong (it would be relatively easy to show, for instance, that the formula for water isn’t H2O), but that we can never prove them right, for new observations could always come along that would overturn received knowledge. The theory of evolution, for instance, is regarded by all rational scientists as true, as it’s supported by mountains of evidence from many different fields. Yet there are observations that could, if they surfaced, conceivably disprove that theory. These include, for instance, finding fossils embedded in strata from the “wrong” time, like discovering mammalian fossils in four-hundred-million-year-old sediments, or observing adaptations in one species that are useful only for another species, such as a pouch on a wallaby that can hold only baby koalas. Needless to say, such evidence hasn’t appeared. Evolution, then, is a fact in the scientific sense, something Steve Gould defined as an observation “confirmed to such a degree that it would be perverse to withhold provisional assent.” Indeed, the only real “proofs” beyond revision are those found in mathematics and logic.
But some people take this too far, claiming that scientific truths not only are provisional, but change most of the time. Science, the argument goes, isn’t really that good at apprehending truth, and we should be wary of it. Such claims of inconstancy usually involve medical studies—like the value of a daily aspirin in preventing heart disease, or the advisability of annual mammograms—whose conclusions go back and forth when different populations are sampled. What’s important to remember is that most scientific findings become truths only when they’re replicated many times, either directly by skeptical scientists or when they serve as a foundation for further work.
In reality, we can consider many scientific truths to be about as absolute as truths can be, ones that are very unlikely to change. I would bet my life savings that the DNA in my cells forms a double helix, that a normal water molecule has two hydrogen atoms and one oxygen atom, that the speed of light in a vacuum is unchanging (and close to 186,000 miles per second), and that the closest living relatives of humans are the two species of chimpanzees. After all, you bet your life on science every time you take medicines like antibiotics, insulin, and anticholesterol drugs. If we consider “proof” in the vernacular to mean “evidence so strong that you’d bet your house on it,” then, yes, science is sometimes in the business of proof.
So what are the components of the toolkit of science? Like many of us, I was taught in high school that there is indeed a “scientific method,” one consisting of “hypothesis, test, and confirmation.” You made a hypothesis (for instance, that DNA is the genetic material) and then tested it with laboratory experiments (the classic one, done in 1944, involved inserting the DNA of a disease-causing bacterium into a benign one and seeing whether the transformed bacteria could both cause disease and pass this pathogenicity on to their descendants). If your predictions worked, you had supported your hypothesis. With strong and repeated support, the hypothesis was finally considered “true.”
But scientists and philosophers now agree that there is no single scientific method. Often you must gather facts before you can even form a hypothesis. One example is Darwin’s observation, made on his Beagle voyage, that oceanic islands—usually volcanic islands that rose above the sea bereft of life—have lots of birds, insects, and plants that are endemic, native only to those islands. The diverse species of finches of the Galápagos and the fruit flies of Hawaii are examples. Further, oceanic islands like Hawaii and the Galápagos either have very few species of native reptiles, amphibians, and mammals or lack them completely, yet such creatures are widely distributed on continents and “continental islands” like Great Britain that were once connected to major landmasses. It is these facts that helped Darwin concoct the theory of evolution, for those observations can’t be explained by creationism (a creator could have put animals wherever he wanted). Rather, they lead us to conclude that endemic birds, insects, and plants on oceanic islands descended, via evolution, from ancestors that had the ability to migrate to those places. Insects, plant seeds, and birds can colonize distant islands by flying, floating, or being borne by the wind, while this is not possible for mammals, reptiles, and amphibians. Collecting that data and then recognizing a pattern in it was what helped produce the theory of evolution.
And sometimes the “tests” of hypotheses don’t involve experiments, but rather observations—often of things that occurred long ago. It’s hard to do experiments about cosmology, but we’re completely confident in the existence of the Big Bang because we observe things predicted by it, like the expanding universe and the background radiation that is the echo of that event. Historical reconstruction is a perfectly valid way of doing science, so long as we can use observations to test our ideas (this, by the way, makes archaeology and history disciplines that are, in principle, scientific). Creationists often criticize evolution because it can’t be seen in “real time” (although it has been), apparently ignorant of the massive historical evidence, including the fossil record, the useless remnants of ancient DNA in our genome, and the biogeographic pattern I described above. If we accept as true only the things we see happen with our own eyes in our own lifetime, we’d have to regard all of human history as dubious.
While scientific theories can make predictions, they can also be tested by what I call “retrodictions”: facts that were previously known but unexplained, and that suddenly make sense when a new theory appears. Einstein’s general theory of relativity was able to explain anomalies in the orbit of Mercury that could not be explained by classical Newtonian mechanics. A thick coat of hair, the lanugo, develops in a human fetus at about six months after fertilization but is usually shed before birth. That makes sense only under the theory of evolution: the hair is a vestige of our common ancestry with other primates, who develop the same hair at a similar stage but don’t shed it. (A coat of hair is simply not useful for a fetus floating in warm fluid.)
Finally, it’s often said that the defining characteristic of science is that it is quantitative: it involves numbers, calculations, and measurements. But that too isn’t always true. There’s not a single equation in Darwin’s On the Origin of Species, and the whole theory of evolution, though sometimes tested quantitatively, can be stated explicitly without any numbers.
As some philosophers have noted, the scientific method boils down to the notion that “anything goes” when you’re studying nature—with the proviso that “anything” is limited to combinations of reason, logic, and empirical observation. There are, however, some important features that distinguish science from pseudoscience, from religion, and from what are euphemistically called “other ways of knowing.”
Falsifiability via Experiments or Observations
Although philosophers of science argue about its importance, scientists by and large adhere to the criterion of “falsifiability” as an essential way of finding truth. What this means is that for a theory or fact to be seen as correct, there must be ways of showing it to be wrong, and those ways must have been tried and have failed. I’ve mentioned how the theory of evolution is in principle falsifiable: there are dozens of potential ways to show that it is wrong, but none has done so. When many attempts to disprove a theory fail, and that theory remains the best explanation for the patterns we see in nature (as is evolution), then we consider it true.
A theory that cannot be shown to be wrong, while it may be pondered by scientists, cannot be accepted as scientific truth. When I was a child I devised my first theory: that when I left my room, all my plush animals would get up and move around. But to account for the fact that I never actually saw them move or change their positions during my absence, I added a proviso: the animals would instantly assume their former positions when I tried to catch them. At the time, that was an unfalsifiable hypothesis (nanny cams didn’t exist). That seems silly, but it is not too far removed from theories about paranormal phenomena, whose adherents claim—as they often do for ESP or other psychic “powers”—that the presence of observers actually eliminates the phenomenon. Likewise, claims of supernatural phenomena like the efficacy of prayer are rendered unfalsifiable by the assertion that “God will not be tested.” (Of course, if the tests had been successful, then testing God would have been fine!) A more scientific example of untestability is that of string theory, a branch of physics claiming that all fundamental particles can be represented as different oscillations on one-dimensional “strings,” and that the universe may have twenty-six dimensions instead of four. String theory is enormously promising because if it is right it could constitute the elusive “theory of everything” that unifies all known forces and particles. Alas, nobody has thought of a way of testing it. Absent such tests, it stands as a fruitful theory, but because it’s not at present falsifiable, it’s one that can’t be seen as true. In the end, a theory that can’t be shown to be wrong can never be shown to be right.
Doubt and Criticality
Any scientist worth her salt will, when getting an interesting result, ask several questions: Are there alternative explanations for what I found? Is there a flaw in my experimental design? Could anything have gone wrong? The reason we do this is not only to make sure that we have a solid result but also to protect our reputation. There’s no better incentive for honesty than the knowledge that you’re competing against other scientists in the same area, some of them working on the very same problem. If you screw up, you’ll be found out very quickly.
That, by the way, gives the lie to the many creationists who claim that we evolutionists conspire to prop up a theory we supposedly know is wrong. They never specify what motivates us to keep promoting something that they consider so obviously false, but creationists often imply that we’re committed to using evolution as a way to buttress the atheism of science. (Never mind that many scientists, including evolutionary biologists, are believers, with no vested interest in promoting atheism.) But the main argument against conspiracy theories in science is that anyone who could disprove an important paradigm like the modern theory of evolution would gain immediate renown. Fame accrues to those who, like Einstein and Darwin, overturn the accepted explanations of their day, not to journeymen who simply provide additional evidence for theories that are already widely accepted.
A striking example of the importance of doubt was the finding in 2011 that neutrinos appeared to move faster than the speed of light, discovered by timing their journey over a path from Switzerland to Italy. That observation was remarkable, for it violated everything we know about physics, especially the “law” that nothing can exceed the speed of light. Predictably, the first thing that physicists (and almost every other scientist) thought on hearing this report was simply, “What went wrong?” Although such an observation, if correct, would surely garner a Nobel Prize, one would risk a lifetime of embarrassment by publishing it without substantial replication and checking. And, sure enough, subsequent checks found that the neutrinos had behaved properly, and their anomalous speed was due simply to a loose cable and a faulty clock.