Science cannot escape its curious dialectic. Embedded in surrounding culture, it can, nonetheless, be a powerful agent for questioning and even overturning the assumptions that nurture it. Science can provide information to reduce the ratio of data to social importance. Scientists can struggle to identify the cultural assumptions of their trade and to ask how answers might be formulated under different assertions. Scientists can propose creative theories that force startled colleagues to confront unquestioned procedures. But science’s potential as an instrument for identifying the cultural constraints upon it cannot be fully realized until scientists give up the twin myths of objectivity and inexorable march toward truth. One must, indeed, locate the beam in one’s own eye before interpreting correctly the pervasive motes in everybody else’s. The beams can then become facilitators, rather than impediments.
Gunnar Myrdal (1944) captured both sides of this dialectic when he wrote:
A handful of social and biological scientists over the last 50 years have gradually forced informed people to give up some of the more blatant of our biological errors. But there must be still other countless errors of the same sort that no living man can yet detect, because of the fog within which our type of Western culture envelops us. Cultural influences have set up the assumptions about the mind, the body, and the universe with which we begin; pose the questions we ask; influence the facts we seek; determine the interpretation we give these facts; and direct our reaction to these interpretations and conclusions.
Biological determinism is too large a subject for one man and one book—for it touches virtually every aspect of the interaction between biology and society since the dawn of modern science. I have therefore confined myself to one central and manageable argument in the edifice of biological determinism—an argument in two historical chapters, based on two deep fallacies, and carried forth in one common style.
The argument begins with one of the fallacies—reification, or our tendency to convert abstract concepts into entities (from the Latin res, or thing). We recognize the importance of mentality in our lives and wish to characterize it, in part so that we can make the divisions and distinctions among people that our cultural and political systems dictate. We therefore give the word “intelligence” to this wondrously complex and multifaceted set of human capabilities. This shorthand symbol is then reified and intelligence achieves its dubious status as a unitary thing.
Once intelligence becomes an entity, standard procedures of science virtually dictate that a location and physical substrate be sought for it. Since the brain is the seat of mentality, intelligence must reside there.
We now encounter the second fallacy—ranking, or our propensity for ordering complex variation as a gradual ascending scale. Metaphors of progress and gradualism have been among the most pervasive in Western thought—see Lovejoy’s classic essay (1936) on the great chain of being or Bury’s famous treatment (1920) of the idea of progress. Their social utility should be evident in the following advice from Booker T. Washington (1904, p. 245) to black America:
For my race, one of its dangers is that it may grow impatient and feel that it can get upon its feet by artificial and superficial efforts rather than by the slower but surer process which means one step at a time through all the constructive grades of industrial, mental, moral and social development which all races have had to follow that have become independent and strong.
But ranking requires a criterion for assigning all individuals to their proper status in the single series. And what better criterion than an objective number? Thus, the common style embodying both fallacies of thought has been quantification, or the measurement of intelligence as a single number for each person.* This book, then, is about the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status. In short, this book is about the Mismeasure of Man.*
Different arguments for ranking have characterized the last two centuries. Craniometry was the leading numerical science of biological determinism during the nineteenth century. I discuss (Chapter 2) the most extensive data compiled before Darwin to rank races by the sizes of their brains—the skull collection of Philadelphia physician Samuel George Morton. Chapter 3 treats the flowering of craniometry as a rigorous and respectable science in the school of Paul Broca in late nineteenth-century Europe. Chapter 4 then underscores the impact of quantified approaches to human anatomy in nineteenth-century biological determinism. It presents two case studies: the theory of recapitulation as evolution’s primary criterion for unilinear ranking of human groups, and the attempt to explain criminal behavior as a biological atavism reflected in the apish morphology of murderers and other miscreants.
What craniometry was for the nineteenth century, intelligence testing has become for the twentieth, when it assumes that intelligence (or at least a dominant part of it) is a single, innate, heritable, and measurable thing. I discuss the two components of this invalid approach to mental testing in Chapter 5 (the hereditarian version of the IQ scale as an American product) and Chapter 6 (the argument for reifying intelligence as a single entity by the mathematical technique of factor analysis). Factor analysis is a difficult mathematical subject almost invariably omitted from documents written for nonprofessionals. Yet I believe that it can be made accessible and explained in a pictorial and nonnumerical way. The material of Chapter 6 is still not “easy reading,” but I could not leave it out—for the history of intelligence testing cannot be understood without grasping the factor analytic argument and understanding its deep conceptual fallacy. The great IQ debate makes no sense without this conventionally missing subject.
I have tried to treat these subjects in an unconventional way by using a method that falls outside the traditional purview of either a scientist or historian operating alone. Historians rarely treat the quantitative details in sets of primary data. They write, as I cannot adequately, about social context, biography, or general intellectual history. Scientists are used to analyzing the data of their peers, but few are sufficiently interested in history to apply the method to their predecessors. Thus, many scholars have written about Broca’s impact, but no one has recalculated his sums.
I have focused upon the reanalysis of classical data sets in craniometry and intelligence testing for two reasons beyond my incompetence to proceed in any other fruitful way and my desire to do something a bit different. I believe, first of all, that Satan also dwells with God in the details. If the cultural influences upon science can be detected in the humdrum minutiae of a supposedly objective, almost automatic quantification, then the status of biological determinism as a social prejudice reflected by scientists in their own particular medium seems secure.
The second reason for analyzing quantitative data arises from the special status that numbers enjoy. The mystique of science proclaims that numbers are the ultimate test of objectivity. Surely we can weigh a brain or score an intelligence test without recording our social preferences. If ranks are displayed in hard numbers obtained by rigorous and standardized procedures, then they must reflect reality, even if they confirm what we wanted to believe from the start. Antideterminists have understood the particular prestige of numbers and the special difficulty that their refutation entails. Léonce Manouvrier (1903, p. 406), the nondeterminist black sheep of Broca’s fold, and a fine statistician himself, wrote of Broca’s data on the small brains of women:
Women displayed their talents and their diplomas. They also invoked philosophical authorities. But they were opposed by numbers unknown to Condorcet or to John Stuart Mill. These numbers fell upon poor women like a sledge hammer, and they were accompanied by commentaries and sarcasms more ferocious than the most misogynist imprecations of certain church fathers. The theologians had asked if women had a soul. Several centuries later, some scientists were ready to refuse them a human intelligence.
If—as I believe I have shown—quantitative data are as subject to cultural constraint as any other aspect of science, then they have no special claim upon final truth.
In reanalyzing these classical data sets, I have continually located a priori prejudice, leading scientists to invalid conclusions from adequate data, or distorting the gathering of data itself. In a few cases—Cyril Burt’s documented fabrication of data on IQ of identical twins, and my discovery that Goddard altered photographs to suggest mental retardation in the Kallikaks—we can specify conscious fraud as the cause of inserted social prejudice. But fraud is not historically interesting except as gossip, because the perpetrators know what they are doing, and such cases do not illustrate the unconscious biases that record subtle and inescapable constraints of culture. In most cases discussed in this book, we can be fairly certain that biases—though often expressed as egregiously as in cases of conscious fraud—were unknowingly influential and that scientists believed they were pursuing unsullied truth.
Since many of the cases presented here are so patent, even risible, by today’s standards, I wish to emphasize that I have not taken cheap shots at marginal figures (with the possible exceptions of Mr. Bean in Chapter 3, whom I use as a curtain-raiser to illustrate a general point, and Mr. Cartwright in Chapter 2, whose statements are too precious to exclude). Cheap shots come in thick catalogues—from a eugenicist named W. D. McKim, Ph.D. (1900), who thought that all nocturnal housebreakers should be dispatched with carbonic acid gas, to a certain English professor who toured the United States during the late nineteenth century, offering the unsolicited advice that we might solve our racial problems if every Irishman killed a Negro and got hanged for it.* Cheap shots are also gossip, not history; they are ephemeral and uninfluential, however amusing. I have focused upon the leading and most influential scientists of their times and have analyzed their major works.
I have enjoyed playing detective in most of the case studies that make up this book: finding passages expurgated without comment in published letters, recalculating sums to locate errors that support expectations, discovering how adequate data can be filtered through prejudices to predetermined results, even giving the Army Mental Test for illiterates to my own students with interesting results. But I trust that whatever zeal any investigator must invest in details has not obscured the general message: that determinist arguments for ranking people according to a single scale of intelligence, no matter how numerically sophisticated, have recorded little more than social prejudice—and that we learn something hopeful about the nature of science in pursuing such an analysis.
If this subject were merely a scholar’s abstract concern, I could approach it in more measured tone. But few biological subjects have had a more direct influence upon millions of lives. Biological determinism is, in its essence, a theory of limits. It takes the current status of groups as a measure of where they should and must be (even while it allows some rare individuals to rise as a consequence of their fortunate biology).
I have said little about the current resurgence of biological determinism because its individual claims are usually so ephemeral that their refutation belongs in a magazine article or newspaper story. Who even remembers the hot topics of ten years ago: Shockley’s proposals for reimbursing voluntarily sterilized individuals according to their number of IQ points below 100, the great XYY debate, or the attempt to explain urban riots by diseased neurology of rioters? I thought that it would be more valuable and interesting to examine the original sources of the arguments that still surround us. These, at least, display great and enlightening errors. But I was inspired to write this book because biological determinism is rising in popularity again, as it always does in times of political retrenchment. The cocktail party circuit has been buzzing with its usual profundity about innate aggression, sex roles, and the naked ape. Millions of people are now suspecting that their social prejudices are scientific facts after all. Yet these latent prejudices themselves, not fresh data, are the primary source of renewed attention.
We pass through this world but once. Few tragedies can be more extensive than the stunting of life, few injustices deeper than the denial of an opportunity to strive or even to hope, by a limit imposed from without, but falsely identified as lying within. Cicero tells the story of Zopyrus, who claimed that Socrates had inborn vices evident in his physiognomy. His disciples rejected the claim, but Socrates defended Zopyrus and stated that he did indeed possess the vices, but had cancelled their effects through the exercise of reason. We inhabit a world of human differences and predilections, but the extrapolation of these facts to theories of rigid limits is ideology.
George Eliot well appreciated the special tragedy that biological labeling imposed upon members of disadvantaged groups. She expressed it for people like herself—women of extraordinary talent. I would apply it more widely—not only to those whose dreams are flouted but also to those who never realize that they may dream. But I cannot match her prose (from the prelude to Middlemarch):
Some have felt that these blundering lives are due to the inconvenient indefiniteness with which the Supreme Power has fashioned the natures of women: if there were one level of feminine incompetence as strict as the ability to count three and no more, the social lot of women might be treated with scientific certitude. The limits of variation are really much wider than anyone would imagine from the sameness of women’s coiffure and the favorite love stories in prose and verse. Here and there a cygnet is reared uneasily among the ducklings in the brown pond, and never finds the living stream in fellowship with its own oary-footed kind. Here and there is born a Saint Theresa, foundress of nothing, whose loving heartbeats and sobs after an unattained goodness tremble off and are dispersed among hindrances instead of centering in some long-recognizable deed.
TWO
American Polygeny and Craniometry before Darwin
Blacks and Indians as Separate, Inferior Species
Order is Heaven’s first law; and, this confessed,
Some are, and must be, greater than the rest.
—ALEXANDER POPE, Essay on Man (1733)
APPEALS TO REASON or to the nature of the universe have been used throughout history to enshrine existing hierarchies as proper and inevitable. The hierarchies rarely endure for more than a few generations, but the arguments, refurbished for the next round of social institutions, cycle endlessly.
The catalogue of justifications based on nature traverses a range of possibilities: elaborate analogies between rulers and a hierarchy of subordinate classes with the central earth of Ptolemaic astronomy and a ranked order of heavenly bodies circling around it; or appeals to the universal order of a “great chain of being,” ranging in a single series from amoebae to God, and including near its apex a graded series of human races and classes. To quote Alexander Pope again:
Without this just gradation, could they be
Subjected, these to those, or all to thee?
…………………………………………
From Nature’s chain whatever link you strike,
Tenth, or ten thousandth, breaks the chain alike.
The humblest, as well as the greatest, play their part in preserving the continuity of universal order; all occupy their appointed roles.
This book treats an argument that, to many people’s surprise, seems to be a latecomer: biological determinism, the notion that people at the bottom are constructed of intrinsically inferior material (poor brains, bad genes, or whatever). Plato, as we have seen, cautiously floated this proposal in the Republic, but finally branded it as a lie.
Racial prejudice may be as old as recorded human history, but its biological justification imposed the additional burden of intrinsic inferiority upon despised groups, and precluded redemption by conversion or assimilation. The “scientific” argument has formed a primary line of attack for more than a century. In discussing the first biological theory supported by extensive quantitative data—early nineteenth-century craniometry—I must begin by posing a question of causality: did the introduction of inductive science add legitimate data to change or strengthen a nascent argument for racial ranking? Or did a priori commitment to ranking fashion the “scientific” questions asked and even the data gathered to support a foreordained conclusion?
A shared context of culture
In assessing the impact of science upon eighteenth- and nineteenth-century views of race, we must first recognize the cultural milieu of a society whose leaders and intellectuals did not doubt the propriety of racial ranking—with Indians below whites, and blacks below everybody else (Fig. 2.1). Under this universal umbrella, arguments did not contrast equality with inequality. One group—we might call them “hard-liners”—held that blacks were inferior and that their biological status justified enslavement and colonization. Another group—the “soft-liners,” if you will—agreed that blacks were inferior, but held that a people’s right to freedom did not depend upon their level of intelligence. “Whatever be their degree of talents,” wrote Thomas Jefferson, “it is no measure of their rights.”
Soft-liners held various attitudes about the nature of black disadvantage. Some argued that proper education and standard of life could “raise” blacks to a white level; others asserted that black ineptitude was permanent. They also disagreed about the biological or cultural roots of black inferiority. Yet, throughout the egalitarian tradition of the European Enlightenment and the American Revolution, I cannot identify any popular position remotely like the “cultural relativism” that prevails (at least by lip-service) in liberal circles today. The nearest approach is a common argument that black inferiority is purely cultural and that it can be completely eradicated by education to a Caucasian standard.