This Explains Everything
My Dear Michael,
Jim Watson and I have probably made a most important discovery. . . . Now we believe that the DNA is a code. That is, the order of the bases (the letters) makes one gene different from another gene (just as one page of print is different from another). You can see how Nature makes copies of the genes. Because if the two chains unwind into two separate chains, and if each chain makes another chain come together on it, then because A always goes with T, and G with C, we shall get two copies where we had one before. In other words, we think we have found the basic copying mechanism by which life comes from life. . . . You can understand we are excited.
Francis Crick, in a letter to his twelve-year-old son, Michael, March 19, 1953
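The copying rule in the letter is concrete enough to run. Here is a minimal sketch in Python, an illustration only: the six-base sequence is invented, and strand directionality is ignored.

```python
# A minimal sketch of the copying rule Crick describes: unwind the two
# chains, then let each chain template a new partner by the pairing rule
# A-T, G-C, giving two copies where there was one. Sequence is invented.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Build the partner chain dictated by base pairing."""
    return "".join(PAIR[base] for base in strand)

duplex = ("ATGCGT", complement("ATGCGT"))       # the original two chains
unwound = list(duplex)                          # the chains separate
copies = [(s, complement(s)) for s in unwound]  # each templates a partner
print(copies)   # two duplexes carrying the same sequence as the original
```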
Never has a mystery seemed more baffling in the morning and an explanation more obvious in the afternoon.
REDUNDANCY REDUCTION AND PATTERN RECOGNITION
RICHARD DAWKINS
Evolutionary biologist; Emeritus Professor of the Public Understanding of Science, Oxford; author, The Magic of Reality
Deep, elegant, beautiful? Part of what makes a theory elegant is its power to explain much while assuming little. Here, Darwin’s natural selection wins hands down. The ratio of the huge amount that it explains (everything about life: its complexity, diversity, and illusion of crafted design) to the little that it needs to postulate (nonrandom survival of randomly varying genes through geological time) is gigantic. Never in the field of human comprehension were so many facts explained by assuming so few. Elegant then, and deep—its depths hidden from everybody until as late as the 19th century. On the other hand, for some tastes, natural selection is too destructive, too wasteful, too cruel to count as beautiful. In any case, I can count on somebody else choosing Darwin. I’ll take his great-grandson instead, and come back to Darwin at the end.
Horace Barlow, FRS, is the youngest grandchild of Sir Horace Darwin, Charles Darwin’s youngest child. Now a very active ninety, Barlow is a member of a distinguished lineage of Cambridge neurobiologists. I want to talk about an idea he published in two papers in 1961, on redundancy reduction and pattern recognition. It’s an idea whose ramifications and significance have inspired me throughout my career.
The folklore of neurobiology includes a mythical “grandmother neuron,” which fires only when a very particular image, the face of Jerry Lettvin’s grandmother, falls on the retina (Lettvin was a distinguished American neurobiologist who, like Barlow, worked on the frog retina). The point is that Lettvin’s grandmother is only one of countless images that a brain is capable of recognizing. If there were a specific neuron for everything we can recognize—not just Lettvin’s grandmother but lots of other faces, objects, letters of the alphabet, flowers, each one seen from many angles and distances—we would have a combinatorial explosion. If sensory recognition worked on the grandmother principle, the number of specific-recognition neurons for all possible combinations of nerve impulses would exceed the number of atoms in the universe. Independently, the American psychologist Fred Attneave had calculated that the volume of the brain would have to be measured in cubic light-years. Barlow and Attneave independently proposed redundancy reduction as the answer.
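The arithmetic behind that explosion is easy to verify. Here is a minimal sketch, assuming idealized binary neurons (firing or silent) and the usual ~10^80 estimate for the number of atoms in the observable universe:

```python
import math

# How many distinct on/off patterns can n input fibers carry? 2**n.
# Work in log10 so huge counts never overflow a float.
ATOMS_LOG10 = 80  # standard ~10^80 estimate, an assumption of this sketch

def patterns_log10(n_fibers: int) -> float:
    """log10 of the number of distinct on/off patterns across n fibers."""
    return n_fibers * math.log10(2)

for n in (100, 266, 1000):
    exp = patterns_log10(n)
    verdict = "exceeds" if exp > ATOMS_LOG10 else "is below"
    print(f"{n:>4} fibers -> ~10^{exp:.1f} patterns, {verdict} 10^80 atoms")
```

Just 266 binary fibers already allow more patterns than there are atoms in the universe, which is why a dedicated neuron per recognizable pattern is a nonstarter.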
Claude Shannon, inventor of information theory, coined “redundancy” as a kind of inverse of information. In English, “q” is always followed by “u,” so the “u” can be omitted without loss of information. It is redundant. Wherever redundancy occurs in a message (which is wherever there is nonrandomness), the message can be more economically recoded without loss of information—although with some loss in capacity to correct errors. Barlow suggested that at every stage in sensory pathways there are mechanisms tuned to eliminate massive redundancy.
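Shannon’s point can be checked in a few lines. The toy calculation below, on an invented scrap of q-heavy English, measures how many bits the letter following “q” actually carries:

```python
import math
from collections import Counter, defaultdict

# A toy measurement of redundancy on an invented scrap of English.
text = "queen quark quiet plaque question torque".replace(" ", "")

# Tally how often each letter follows each other letter.
following = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    following[a][b] += 1

def conditional_entropy(counts: Counter) -> float:
    """Bits of information carried by the next letter, given this one."""
    total = sum(counts.values())
    h = -sum(c / total * math.log2(c / total) for c in counts.values())
    return max(0.0, h)  # clamp the -0.0 float artifact

print(conditional_entropy(following["q"]))  # 0.0: after "q", "u" is certain
print(conditional_entropy(following["u"]))  # > 0: genuinely informative
```

A letter that carries zero bits can be dropped by a recoder without loss, which is exactly what makes it redundant.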
The world at time t is not greatly different from the world at time t-1. Therefore it is not necessary for sensory systems continuously to report the state of the world. They need only signal changes, leaving the brain to assume that everything not reported remains the same. Sensory adaptation is a well-known feature of sensory systems, which does precisely as Barlow prescribed. If a neuron is signaling temperature, for example, the rate of firing is not, as one might naively suppose, proportional to the temperature. Instead, firing rate increases only when there is a change in temperature. It then dies away to a low, resting frequency. The same is true of neurons signaling brightness, loudness, pressure, and so on. Sensory adaptation achieves huge economies by exploiting the nonrandomness in temporal sequence of states of the world.
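As a caricature of this mechanism, here is change-only signaling in a few lines of Python; the temperature trace and the threshold are invented, not physiological:

```python
# A minimal sketch of change-only signaling, the temporal-redundancy trick
# Barlow described. Values and threshold are illustrative only.

def adapting_neuron(samples, threshold=0.5):
    """Yield (time, value) only when the stimulus changes appreciably."""
    last_reported = None
    for t, value in enumerate(samples):
        if last_reported is None or abs(value - last_reported) > threshold:
            yield t, value          # fire: the world changed
            last_reported = value   # adapt to the new level
        # otherwise stay silent; the brain assumes nothing has changed

temperature = [20.0, 20.1, 20.0, 25.0, 25.1, 25.0, 25.1, 18.0, 18.1]
print(list(adapting_neuron(temperature)))
# [(0, 20.0), (3, 25.0), (7, 18.0)] -- three reports instead of nine
```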
What sensory adaptation achieves in the temporal domain, the well-established phenomenon of lateral inhibition does in the spatial domain. If a scene in the world falls on a pixelated screen, such as the back of a digital camera or the retina of an eye, most pixels seem the same as their immediate neighbors. The exceptions are those pixels which lie on edges, boundaries. If every retinal cell faithfully reported its light value to the brain, the brain would be bombarded with a hugely redundant message. Great economies can be achieved if most of the impulses reaching the brain come from pixel cells lying along edges in the scene. The brain then assumes uniformity in the spaces between edges.
As Barlow pointed out, this is exactly what lateral inhibition achieves. In the frog retina, for example, every ganglion cell sends signals to the brain, reporting on the light intensity in its particular location on the surface of the retina. But it simultaneously sends inhibitory signals to its immediate neighbors. This means that the only ganglion cells to send strong signals to the brain are those that lie on an edge. Ganglion cells lying in uniform fields of color (the majority) send few if any impulses to the brain, because they, unlike cells on edges, are inhibited by all their neighbors. The spatial redundancy in the signal is eliminated.
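A one-dimensional caricature of lateral inhibition, with made-up pixel values, shows the same economy: each cell subtracts the average of its neighbors, so uniform fields cancel out and only edges get through.

```python
# A 1-D caricature of lateral inhibition: each "ganglion cell" reports its
# own input minus the mean of its two neighbors. Numbers are invented.

def lateral_inhibition(intensities):
    out = []
    for i, x in enumerate(intensities):
        left = intensities[max(i - 1, 0)]
        right = intensities[min(i + 1, len(intensities) - 1)]
        out.append(x - (left + right) / 2)  # inhibition from neighbors
    return out

scene = [1, 1, 1, 1, 5, 5, 5, 5]  # a uniform dark field, then a bright one
print(lateral_inhibition(scene))
# [0.0, 0.0, 0.0, -2.0, 2.0, 0.0, 0.0, 0.0] -- only the edge cells respond
```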
The Barlow analysis can be extended to most of what is now known about sensory neurobiology, including Hubel and Wiesel’s famous horizontal- and vertical-line detector neurons in cats (straight lines are redundant, reconstructable from their ends) and the movement (“bug”) detectors in the frog retina, discovered by the same Jerry Lettvin and his colleagues. Movement represents a nonredundant change in the frog’s world. But even movement is redundant if it persists in the same direction at the same speed. Sure enough, Lettvin and colleagues discovered a “strangeness” neuron in their frogs, which fires only when a moving object does something unexpected, such as speeding up, slowing down, or changing direction. The strangeness neuron is tuned to filter out redundancy of a very high order.
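The same trick, one order up, yields a toy strangeness detector: constant velocity is redundant, so fire only when the velocity itself changes. The trajectory and threshold below are invented for illustration.

```python
# A toy "strangeness" detector in Barlow's spirit: report only changes in
# velocity, i.e., second-order changes in position. Numbers are invented.

def strangeness_neuron(positions, threshold=0.1):
    velocities = [b - a for a, b in zip(positions, positions[1:])]
    events = []
    for t in range(1, len(velocities)):
        if abs(velocities[t] - velocities[t - 1]) > threshold:
            events.append(t)  # the object sped up, slowed, or turned
    return events

# Steady drift, then a sudden reversal at step 5:
track = [0, 1, 2, 3, 4, 5, 4, 3, 2]
print(strangeness_neuron(track))  # [5]: only the unexpected change fires
```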
Barlow pointed out that a survey of the sensory filters of a given animal could, in theory, give us a readout of the redundancies present in the animal’s world. They would constitute a kind of description of the statistical properties of that world. Which reminds me, I said I’d return to Darwin. In Unweaving the Rainbow, I suggested that the gene pool of a species is a “Genetic Book of the Dead,” a coded description of the ancestral worlds in which the genes of the species have survived through geological time. Natural selection is an averaging computer, detecting redundancies—repeat patterns—in successive worlds (successive through millions of generations) in which the species has survived (averaged over all members of the sexually reproducing species). Could we take what Barlow did for neurons in sensory systems and do a parallel analysis for genes in naturally selected gene pools? Now, that would be deep, elegant, and beautiful.
THE POWER OF ABSURDITY
SCOTT ATRAN
Anthropologist, Centre National de la Recherche Scientifique, Paris; author, Talking to the Enemy: Faith, Brotherhood, and the (Un)Making of Terrorists
The notion of a transcendent force that moves the universe or history or determines what is right and good—and whose existence is fundamentally beyond reason and immune to logical or empirical disproof—is the simplest, most elegant, and most scientifically baffling phenomenon I know of. Its power and absurdity perturb mightily and merit careful scientific scrutiny. In an age in which many of the most volatile and seemingly intractable conflicts stem from sacred causes, scientific understanding of how best to deal with the subject has also never been more crucial.
Call it love of Group or God, or devotion to an Idea or Cause; it matters little in the end. It is “the privilege of absurdity; to which no living creature is subject, but man only,” of which Hobbes wrote in Leviathan. In The Descent of Man, Darwin cast it as the virtue of “morality,” with which winning tribes are better endowed in history’s spiraling competition for survival and dominance. Unlike other creatures, humans define the groups they belong to in abstract terms. Often they strive to achieve a lasting intellectual and emotional bond with anonymous others and seek to heroically kill and die not in order to preserve their own lives or those of people they know but for the sake of an idea—the conception they have formed of themselves, of “who we are.”
Sacred, or transcendental, values and religious ideas are culturally universal, yet content varies markedly across cultures. Sacred values mark the moral boundaries of societies and determine which material transactions are permissible. Material transgressions of the sacred are taboo: We consider people who sell their children or sell out their country to be sociopaths; other societies consider adultery or disregard of the poor immoral, but not necessarily selling children or women or denying freedom of expression.
Sacred values usually become strongly relevant only when challenged, much as food takes on overwhelming value in people’s lives only when denied. People in one cultural milieu are often unaware of what is sacred for another—or, in becoming aware through conflict, find the other side’s values (pro-life vs. pro-choice, say) immoral and absurd. Such conflicts cannot be wholly reduced to secular calculations of interest but must be dealt with on their own terms, a logic different from the marketplace or realpolitik. For example, cross-cultural evidence indicates that the prospect of crippling economic burdens and huge numbers of deaths doesn’t necessarily sway people from choosing to go to war, or to opt for revolution or resistance. As Darwin noted, the virtuous and brave do what is right, regardless of consequences, as a moral imperative. (Indeed, we have suggestive neuroimaging evidence that people process sacred values in parts of the brain devoted to rule-bound behavior rather than utilitarian calculations—think Ten Commandments or Bill of Rights.)
There is an apparent paradox underlying the formation of large-scale human societies. The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today’s nations, transnational movements, and other “imagined communities” of fictive kin—seems to depend upon what Kierkegaard deemed this “power of the preposterous” (as in Abraham’s willingness to slit the throat of his most beloved son to show commitment to an invisible, no-name deity, thus making him the world’s greatest culture hero rather than a child abuser, would-be murderer, or psychotic). Humankind’s strongest social bonds and actions, including the capacities for cooperation and forgiveness, and for killing and allowing oneself to be killed, are born of commitment to causes and courses of action that are “ineffable”—that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for costs and consequences. The more materially inexplicable one’s devotion and commitment to a sacred cause—that is, the more absurd—the greater the trust others place in it and the more that trust generates commitment on their part.
To be sure, thinkers of all persuasions have tried to explain the paradox (most being ideologically motivated and simpleminded), often to show that religion is good, or more usually that religion is unreasonably bad. If anything, evolution teaches that humans are creatures of passion and that reason itself is primarily aimed at social victory and political persuasion rather than philosophical or scientific truth. To insist that persistent rationality is the best means and hope for victory over enduring irrationality—that logical harnessing of facts could someday do away with the sacred and so end conflict—defies all that science teaches about our passion-driven nature. Throughout the history of our species, as for the most intractable conflicts and greatest collective expressions of joy today, utilitarian logic is a pale prospect to replace the sacred.
For Alfred Russel Wallace, moral behavior (along with mathematics, music, and art) was evidence that humans had not evolved through natural selection alone: “The special faculties we have been discussing clearly point to the existence in man of something which he has not derived from his animal progenitors—something which we may best refer to as being of a spiritual essence . . . beyond all explanation by matter, its laws and forces.”* His disagreement with Darwin on this subject was longstanding, at one point prompting the latter to protest, “I hope you have not murdered too completely your own and my child.”* But Darwin himself produced no causal account of how humans became moral animals, other than to say that because our ancestors were so physically weak, only group strength could get them through. Religion and the sacred, banned so long from reasoned inquiry by the ideological bias of all persuasions—perhaps because the subject is so close to who we want or don’t want to be—is still a vast, tangled, and largely unexplored domain for science, however simple and elegant for most people everywhere in everyday life.
HOW APPARENT FINALITY CAN EMERGE
CARLO ROVELLI
Theoretical physicist, Centre de Physique Théorique, University of Marseille; author, Quantum Gravity
Darwin, no doubt. The beauty and the simplicity of his explanation is astonishing. I am sure that others have pointed out Darwinian natural selection as their favorite deep, elegant, beautiful explanation, but I still want to emphasize the general reach of Darwin’s central intuition, which goes well beyond the monumental result of having clarified that we share the same ancestors with all living beings on Earth and is directly relevant to the core of the entire scientific enterprise.
Shortly after the ancient Greek physicists started developing naturalistic explanations of nature, a general objection arose. The objection is well articulated in Plato—for instance, in the Phaedo—and especially in Aristotle’s discussion of the theory of the “causes.” Naturalistic explanations rely on what Aristotle called “the efficient cause”—namely, past phenomena producing effects. But the world appears to be dominated by phenomena that can be understood in terms of “final causes”—that is, an “aim” or a “purpose.” These are evident in the kingdom of life. We have mouths “so” we can eat. The importance of this objection cannot be overstated. It brought down ancient naturalism, and in the minds of many it is still the principal source of psychological resistance to a naturalistic understanding of the world.
Darwin discovered the spectacularly simple mechanism by which efficient causes produce phenomena that appear to be governed by final causes. Anytime we have phenomena that can reproduce, the actual phenomena we observe are those that keep reproducing and therefore are necessarily better at reproducing, and we can thus read them in terms of final causes. In other words, a final cause can be effective for understanding the world because it’s a shortcut in accounting for the past history of a continuing phenomenon.
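The mechanism can be caricatured in a few lines of Python: nothing in the program “aims” at anything, each variant simply copies itself at its own rate, yet the survivors end up looking as if selected “for” reproducing well. The variants and rates below are invented for illustration.

```python
# Purely "efficient" causation: each variant multiplies at its own rate.
rates = {"slow": 1.1, "medium": 1.3, "fast": 1.5}   # offspring per step
population = {name: 100.0 for name in rates}         # equal start

for generation in range(30):
    population = {name: n * rates[name] for name, n in population.items()}

total = sum(population.values())
for name, n in sorted(population.items(), key=lambda kv: -kv[1]):
    print(f"{name:>6}: {100 * n / total:.1f}% of the population")
# "fast" ends up as nearly all of it -- apparent finality from blind copying
```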
To be sure, this idea has appeared before. Empedocles speculated that the apparent finality in the living kingdom could be the result of selected randomness, and Aristotle himself, in his Physics, mentions a version of this idea for species (“seeds”). But the times were not yet ripe and the suggestion was lost in the following religious ages. I think the resistance to Darwin is not just difficulty in seeing the power of a spectacularly beautiful explanation but fear of realizing the extraordinary power such an explanation has in shattering old worldviews.
THE OVERDUE DEMISE OF MONOGAMY
AUBREY DE GREY
Gerontologist; chief science officer, SENS Foundation; author, Ending Aging
There are many persuasive arguments from evolutionary biology explaining why various species, notably Homo sapiens, have adopted a lifestyle in which males and females pair up long-term. But my topic here is not one of those explanations. Instead, it is the explanation for why we are close—far closer than most people, even most readers of Edge, yet appreciate—to the greatest societal, as opposed to technological, advance in the history of civilization.
In 1971, the American philosopher John Rawls coined the term “reflective equilibrium” to denote “a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments.”* In practical terms, reflective equilibrium is about how we identify and resolve logical inconsistencies in our prevailing moral compass. Examples such as the rejection of slavery and of innumerable “isms” (sexism, ageism, etc.) are quite clear: The arguments that worked best were those highlighting the hypocrisy of maintaining acceptance of existing attitudes in the face of already established contrasting attitudes in matters that were indisputably analogous.
Reflective equilibrium gets my vote for the most elegant and beautiful explanation, because of its immense breadth of applicability and also its lack of dependence on other controversial positions. Most important, it rises above the question of cognitivism, the debate over whether there is any such thing as objective morality. Cognitivists assert that certain acts are inherently good or bad, regardless of the society in which they do or do not occur—very much as the laws of physics are generally believed to be independent of those observing their effects. Noncognitivists claim, by contrast, that no moral position is universal and that each (hypothetical) society makes its own moral rules unfettered, so that even acts we would view as unequivocally immoral could be morally unobjectionable in some other culture. But when we make actual decisions concerning whether such-and-such a view is morally acceptable or not, reflective equilibrium frees us from the need to take a view on the cognitivism question. In a nutshell, it explains why we don’t need to know whether morality is objective.