Ever Since Darwin: Reflections in Natural History
[Figure from the preceding essay: a computer-generated map showing the distribution by size of male house sparrows in North America. Higher numbers indicate larger sizes, based on a composite of sixteen measurements of the birds’ skeletons.]
Multivariate analysis is beginning to have a similar effect on studies of human variation. In past decades, for example, J. B. Birdsell wrote several distinguished books that divided mankind into races, following accepted practice at the time. Recently he has applied multivariate analysis to gene frequencies for blood types among Australian Aborigines. He rejects any subdivision into discrete units and writes: “It may well be that the investigation of the nature and intensity of evolutionary forces is the endeavor to be pursued while the pleasures of classifying man fall away, perhaps forever.”
30 | The Nonscience of Human Nature
WHEN A GROUP of girls suffered simultaneous seizures in the presence of an accused witch, the justices of seventeenth-century Salem could offer no explanation other than true demonic possession. When the followers of Charlie Manson attributed occult powers to their leader, no judge took them seriously. In the nearly three hundred years separating the two incidents, we have learned quite a bit about the social, economic, and psychological determinants of group behavior. A crudely literal interpretation of such events now seems ridiculous.
An equally crude literalism used to prevail in interpreting human nature and the differences among human groups. Human behavior was attributed to innate biology; we do what we do because we are made that way. The first lesson of an eighteenth-century primer stated the position succinctly: “In Adam’s fall, we sinned all.” A movement away from this biological determinism has been a major trend in twentieth-century science and culture. We have come to see ourselves as a learning animal; we have come to believe that the influences of class and culture far outweigh the weaker predispositions of our genetic constitution.
Nonetheless, we have been deluged during the past decade by a resurgent biological determinism, ranging from “pop ethology” to outright racism.
With Konrad Lorenz as godfather, Robert Ardrey as dramatist, and Desmond Morris as raconteur, we are presented with man, “the naked ape,” descended from an African carnivore, innately aggressive and inherently territorial.
Lionel Tiger and Robin Fox try to find a biological basis for outmoded Western ideals of aggressive, outreaching men and docile, restricted women. In discussing cross-cultural differences between men and women, they propose a hormonal chemistry inherited from the requirements of our supposed primal roles as group hunters and child rearers.
Carleton Coon offered a prelude of events to come with his claim (The Origin of Races, 1962) that five major human races evolved independently from Homo erectus (“Java” and “Peking” man) to Homo sapiens, with black people making the transition last. More recently, the IQ test has been (mis)used to infer genetic differences in intelligence among races (Arthur Jensen and William Shockley) and classes (Richard Herrnstein)—always, I must note, to the benefit of the particular group to which the author happens to belong (see next essay).
All these views have been ably criticized on an individual basis; yet they have rarely been treated together as expressions of a common philosophy—a crude biological determinism. One can, of course, accept a specific claim and reject the others. A belief in the innate nature of human violence does not brand anyone a racist. Yet all these claims have a common underpinning in postulating a direct genetic basis for our most fundamental traits. If we are programmed to be what we are, then these traits are ineluctable. We may, at best, channel them, but we cannot change them, whether by will, education, or culture.
If we accept the usual platitudes about “scientific method” at face value, then the coordinated resurgence of biological determinism must be attributed to new information that refutes the earlier findings of twentieth-century science. Science, we are told, progresses by accumulating new information and using it to improve or replace old theories. But the new biological determinism rests upon no recent fund of information and can cite in its behalf not a single unambiguous fact. Its renewed support must have some other basis, most likely social or political in nature.
Science is always influenced by society, but it operates under a strong constraint of fact as well. The Church eventually made its peace with Galileo because, after all, the earth does go around the sun. In studying the genetic components of such complex human traits as intelligence and aggressiveness, however, we are freed from the constraint of fact, for we know practically nothing. In these questions, “science” follows (and exposes) the social and political influences acting upon it.
What, then, are the nonscientific reasons that have fostered the resurgence of biological determinism? They range, I believe, from pedestrian pursuits of high royalties for best sellers to pernicious attempts to reintroduce racism as respectable science. Their common denominator must lie in our current malaise. How satisfying it is to fob off the responsibility for war and violence upon our presumably carnivorous ancestors. How convenient to blame the poor and the hungry for their own condition—lest we be forced to blame our economic system or our government for an abject failure to secure a decent life for all people. And how convenient an argument for those who control government and, by the way, provide the money that science requires for its very existence.
Deterministic arguments divide neatly into two groups—those based on the supposed nature of our species in general and those that invoke presumed differences among “racial groups” of Homo sapiens. I discuss the first subject here and treat the second in my next essay.
Summarized briefly, mainstream pop ethology contends that two lineages of hominids inhabited Pleistocene Africa. One, a small, territorial carnivore, evolved into us; the other, a larger, presumably gentle herbivore, became extinct. Some carry the analogy of Cain and Abel to its full conclusion and accuse our ancestors of fratricide. The “predatory transition” to hunting established a pattern of innate violence and engendered our territorial urges: “With the coming of the hunting life to the emerging hominid came the dedication to territory” (Ardrey, The Territorial Imperative). We may be clothed, citified, and civilized, but we carry deep within us the genetic patterns of behavior that served our ancestor, the “killer ape.” In African Genesis Ardrey champions Raymond Dart’s contention that “the predatory transition and the weapons fixation explained man’s bloody history, his eternal aggression, his irrational, self-destroying, inexorable pursuit of death for death’s sake.”
Tiger and Fox extend the theme of group hunting to proclaim a biological basis for the differences between men and women that Western cultures have traditionally valued. Men did the hunting; women stayed home with the kids. Men are aggressive and combative, but they also form strong bonds among themselves that reflect the ancient need for cooperation in the killing of big game and now find expression in touch football and Rotary clubs. Women are docile and devoted to their own children. They do not form intense bonds among themselves because their ancestors needed none to tend their homes and their men: sisterhood is an illusion. “We are wired for hunting.… We remain Upper Paleolithic hunters, fine-honed machines designed for the efficient pursuit of game” (Tiger and Fox, The Imperial Animal).
The story of pop ethology has been built on two lines of supposed evidence, both highly disputable:
1. Analogies with the behavior of other animals (abundant but imperfect data). No one doubts that many animals (including some, but not all, primates) display innate patterns of aggression and territorial behavior. Since we exhibit similar behavior, can we not infer a similar cause? The fallacy of this assumption reflects a basic issue in evolutionary theory. Evolutionists divide the similarities between two species into homologous features, shared by virtue of common descent and a common genetic constitution, and analogous traits, evolved separately.
Comparisons between humans and other animals lead to causal assertions about the genetics of our behavior only if they are based on homologous traits. But how can we know whether similarities are homologous or analogous? It is hard to differentiate even when we deal with concrete structures, such as muscles and bones. In fact, most classical arguments in the study of phylogeny involve the confusion of homology and analogy, for analogous structures can be strikingly similar (we call this phenomenon evolutionary convergence). How much harder it is to tell when similar features are only the outward motions of behavior! Baboons may be territorial; their males may be organized into a dominance hierarchy—but is our quest for Lebensraum and the hierarchy of our armies an expression of the same genetic makeup or merely an analogous pattern that might be purely cultural in origin? And when Lorenz compares us with geese and fish, we stray even further into pure conjecture; baboons, at least, are second cousins.
2. Evidence from hominid fossils (scrappy but direct data). Ardrey’s claims for territoriality rest upon the assumption that our African ancestor, Australopithecus africanus, was a carnivore. He derives his “evidence” from accumulations of bones and tools at the South African cave sites and from the size and shape of teeth. The bone piles are no longer seriously considered; they are more likely the work of hyenas than of hominids.
Teeth are granted more prominence, but I believe that the evidence is equally poor, if not absolutely contradictory. The argument rests upon the relative size of the grinding teeth (premolars and molars). Herbivores need more surface area to grind their gritty and abundant food. A. robustus, the supposed gentle herbivore, possessed grinding teeth relatively larger than those of its carnivorous relative, our ancestor A. africanus.
But A. robustus was a larger creature than A. africanus. As size increases, an animal must feed a body growing as the cube of length by chewing with tooth areas that increase only as the square of length if they maintain the same relative size (see essays of section 6). This will not do, and larger mammals must have differentially larger teeth than smaller relatives. I have tested this assertion by measuring tooth areas and body sizes for species in several groups of mammals (rodents, piglike herbivores, deer, and several groups of primates). Invariably, I find that larger animals have relatively larger teeth—not because they eat different foods, but simply because they are larger.
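The arithmetic behind this claim can be made explicit. Here is a minimal worked relation, assuming simple geometric similarity (my notation, not anything from the fossil literature):

    % Under geometric similarity, body mass M scales as the cube of a
    % linear dimension L, while tooth area A scales only as the square:
    \[
        M \propto L^{3}, \qquad A \propto L^{2}
        \quad\Longrightarrow\quad
        A \propto M^{2/3}, \qquad \frac{A}{M} \propto M^{-1/3}.
    \]
    % Chewing surface per unit of body mass therefore falls as size
    % increases; to compensate, larger mammals need teeth that are
    % relatively larger than geometric similarity alone would provide.

So the relatively larger teeth of A. robustus are exactly what its larger body demands, with no appeal to diet required.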
Moreover, the “small” teeth of A. africanus are not at all diminutive. They are absolutely larger than ours (although we are three times as heavy), and they are about as big as those of gorillas weighing nearly ten times as much! The evidence of tooth size indicates to me that A. africanus was primarily herbivorous.
The issue of biological determinism is not an abstract matter to be debated within academic cloisters. These ideas have important consequences, and they have already permeated our mass media. Ardrey’s dubious theory is a prominent theme in Stanley Kubrick’s film 2001: A Space Odyssey. The bone tool of our apelike ancestor first smashes a tapir’s skull and then twirls about to transform into a space station of our next evolutionary stage—as the superman theme of Richard Strauss’s Zarathustra yields to Johann’s Blue Danube. Kubrick’s next film, A Clockwork Orange, continues the theme and explores the dilemma inspired by claims of innate human violence. (Shall we accept totalitarian controls for mass deprogramming or remain nasty and vicious within a democracy?) But the most immediate impact will be felt as male privilege girds its loins to battle a growing women’s movement. As Kate Millett remarks in Sexual Politics: “Patriarchy has a tenacious or powerful hold through its successful habit of passing itself off as nature.”
31 | Racist Arguments and IQ
LOUIS AGASSIZ, the greatest biologist of mid-nineteenth-century America, argued that God had created blacks and whites as separate species. The defenders of slavery took much comfort from this assertion, for biblical prescriptions of charity and equality did not have to extend across a species boundary. What could an abolitionist say? Science had shone its cold and dispassionate light upon the subject; Christian hope and sentimentality could not refute it.
Similar arguments, carrying the apparent sanction of science, have been continually invoked in attempts to equate egalitarianism with sentimental hope and emotional blindness. People who are unaware of this historical pattern tend to accept each recurrence at face value: that is, they assume that each statement arises from the “data” actually presented, rather than from the social conditions that truly inspire it.
The racist arguments of the nineteenth century were based primarily on craniometry, the measurement of human skulls. Today, these contentions stand totally discredited. What craniometry was to the nineteenth century, intelligence testing has been to the twentieth. The victory of the eugenics movement in the Immigration Restriction Act of 1924 signaled its first unfortunate effect—for the severe restrictions upon non-Europeans and upon southern and eastern Europeans gained much support from results of the first extensive and uniform application of intelligence tests in America—the Army Mental Tests of World War I. These tests were engineered and administered by psychologist Robert M. Yerkes, who concluded that “education alone will not place the negro [sic] race on a par with its Caucasian competitors.” It is now clear that Yerkes and his colleagues knew no way to separate genetic from environmental components in postulating causes for different performances on the tests.
The latest episode of this recurring drama began in 1969, when Arthur Jensen published an article entitled “How Much Can We Boost IQ and Scholastic Achievement?” in the Harvard Educational Review. Again, the claim went forward that new and uncomfortable information had come to light, and that science had to speak the “truth” even if it refuted some cherished notions of a liberal philosophy. But again, I shall argue, Jensen had no new data; and what he did present was flawed beyond repair by inconsistencies and illogical claims.
Jensen assumes that IQ tests adequately measure something we may call “intelligence.” He then attempts to tease apart the genetic and environmental factors causing differences in performance. He does this primarily by relying upon the one natural experiment we possess: identical twins reared apart—for differences in IQ between genetically identical people can only be environmental. The average difference in IQ for identical twins is less than the difference for two unrelated individuals raised in similarly varied environments. From the data on twins, Jensen obtains an estimate of environmental influence. He concludes that IQ has a heritability of about 0.8 (or 80 percent) within the population of American and European whites. The average difference between American whites and blacks is 15 IQ points (one standard deviation). He asserts that this difference is too large to attribute to environment, given the high heritability of IQ. Lest anyone think that Jensen writes in the tradition of abstract scholarship, I merely quote the first line of his famous work: “Compensatory education has been tried, and it apparently has failed.”
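The logic of the twin estimate can be sketched in a few lines (a minimal sketch; the IQ pairs below are invented for illustration and are not Jensen’s, or anyone’s, actual data). Because identical twins reared apart share all their genes but presumably none of their environment, the correlation between co-twins is itself read as the estimate of broad heritability:

    # Sketch of the twin-study logic behind heritability estimates.
    # The IQ pairs are invented for illustration, not real data.
    import numpy as np

    # Each row: IQ scores of one pair of identical twins reared apart.
    twins = np.array([
        [102,  98], [ 95, 101], [110, 113], [ 88,  92],
        [120, 115], [ 97,  94], [105, 108], [ 90,  85],
    ])

    # Such twins share all their genes and (presumably) none of their
    # environment, so the co-twin correlation is taken directly as an
    # estimate of broad-sense heritability.
    h2 = np.corrcoef(twins[:, 0], twins[:, 1])[0, 1]
    print(f"estimated heritability ~ {h2:.2f}")

The fragility is obvious: the estimate is only as good as the twin data fed into this one step, and it measures variation within the sampled population, nothing more.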
I believe that this argument can be refuted in a “hierarchical” fashion—that is, we can discredit it at one level and then show that it fails at a more inclusive level even if we allow Jensen’s argument for the first two levels:
Level 1: The equation of IQ with intelligence. Who knows what IQ measures? It is a good predictor of “success” in school, but is such success a result of intelligence, apple polishing, or the assimilation of values that the leaders of society prefer? Some psychologists get around this argument by defining intelligence operationally as the scores attained on “intelligence” tests. A neat trick. But at this point, the technical definition of intelligence has strayed so far from the vernacular that we can no longer define the issue. Let me allow (although I don’t believe it), for the sake of argument, that IQ measures some meaningful aspect of intelligence in its vernacular sense.
Level 2: The heritability of IQ. Here again, we encounter a confusion between vernacular and technical meanings of the same word. “Inherited,” to a layman, means “fixed,” “inexorable,” or “unchangeable.” To a geneticist, “inherited” refers to an estimate of similarity between related individuals based on genes held in common. It carries no implications of inevitability or of immutable entities beyond the reach of environmental influence. Eyeglasses correct a variety of inherited problems in vision; insulin can check diabetes.
Jensen insists that IQ is 80 percent heritable. Princeton psychologist Leon J. Kamin has done the dog-work of meticulously checking through the details of the twin studies that form the basis of this estimate. He has found an astonishing number of inconsistencies and downright inaccuracies. For example, the late Sir Cyril Burt, who generated the largest body of data on identical twins reared apart, pursued his studies of intelligence for more than forty years. Although he increased his sample sizes in a variety of “improved” versions, some of his correlation coefficients remain unchanged to the third decimal place—a statistically impossible situation. IQ depends in part upon sex and age, and other studies did not standardize properly for these factors. An improper correction may produce higher correlations between twins not because they hold genes for intelligence in common, but simply because they share the same sex and age. The data are so flawed that no valid estimate for the heritability of IQ can be drawn at all. But let me assume (although no data support it), for the sake of argument, that the heritability of IQ is as high as 0.8.
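The age-and-sex artifact is easy to demonstrate by simulation (a sketch with invented numbers; no real test data are involved). Two “twins” who share only their age and sex show a spuriously positive correlation until those factors are regressed out:

    # Sketch: how failing to standardize for age and sex can inflate a
    # twin correlation. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000

    # Each pair shares an age and a sex, which affect the raw score;
    # the pairs are otherwise unrelated (no genetic signal at all).
    age = rng.uniform(5, 15, n)
    sex = rng.integers(0, 2, n)
    shared_effect = 2.0 * age + 5.0 * sex

    score1 = shared_effect + rng.normal(0, 10, n)
    score2 = shared_effect + rng.normal(0, 10, n)

    raw = np.corrcoef(score1, score2)[0, 1]

    # Standardize: remove age and sex effects by linear regression.
    X = np.column_stack([np.ones(n), age, sex])
    resid1 = score1 - X @ np.linalg.lstsq(X, score1, rcond=None)[0]
    resid2 = score2 - X @ np.linalg.lstsq(X, score2, rcond=None)[0]
    corrected = np.corrcoef(resid1, resid2)[0, 1]

    print(f"raw correlation:          {raw:.2f}")        # spuriously positive
    print(f"after age/sex correction: {corrected:.2f}")  # near zero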
Level 3: The confusion of within- and between-group variation. Jensen draws a causal connection between his two major assertions—that the within-group heritability of IQ is 0.8 for American whites, and that the mean difference in IQ between American blacks and whites is 15 points. He assumes that the black “deficit” is largely genetic in origin because IQ is so highly heritable. This is a non sequitur of the worst possible kind—for there is no necessary relationship between heritability within a group and differences in mean values of two separate groups.
A simple example will suffice to illustrate this flaw in Jensen’s argument. Height has a much higher heritability within groups than anyone has ever claimed for IQ. Suppose that height has a mean value of five feet two inches and a heritability of 0.9 (a realistic value) within a group of nutritionally deprived Indian farmers. High heritability simply means that short farmers will tend to have short offspring, and tall farmers tall offspring. It says nothing whatever against the possibility that proper nutrition could raise the mean height to six feet (taller than average white Americans). It only means that, in this improved status, farmers shorter than average (they may now be five feet ten inches) would still tend to have shorter than average children.
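The same point can be demonstrated by simulation (a sketch using the essay’s own figures for the farmers, with everything else invented): within-group heritability stays near 0.9 while the entire gap between groups is environmental.

    # Sketch: high heritability WITHIN groups is fully compatible with
    # a purely environmental difference BETWEEN groups. Numbers are
    # invented, following the farmer example above.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Both groups draw from the same genetic distribution.
    genes_deprived = rng.normal(0.0, 3.0, n)   # genetic height effect, inches
    genes_wellfed  = rng.normal(0.0, 3.0, n)

    # Nutrition sets the baseline: 62 in (five feet two) when deprived,
    # 72 in (six feet) when properly fed, plus a small nongenetic residue.
    height_deprived = 62 + genes_deprived + rng.normal(0.0, 1.0, n)
    height_wellfed  = 72 + genes_wellfed  + rng.normal(0.0, 1.0, n)

    # Heritability within a group: the genetic share of its variance.
    h2 = np.var(genes_deprived) / np.var(height_deprived)
    gap = np.mean(height_wellfed) - np.mean(height_deprived)

    print(f"within-group heritability ~ {h2:.2f}")   # ~0.9, as in the text
    print(f"between-group gap ~ {gap:.1f} inches")   # ~10, all environmental

The heritability within each group is about 0.9, yet the ten-inch difference between the groups owes nothing to genes; exactly this confusion lies at the heart of Jensen’s leap from within-group heritability to a between-group genetic deficit.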