Perhaps the same argument can be made for morality. According to the theory of moral realism, right and wrong exist, and have an inherent logic that licenses some moral arguments and not others.12 The world presents us with non-zero-sum games in which it is better for both parties to act unselfishly than for both to act selfishly (better not to shove and not to be shoved than to shove and be shoved). Given the goal of being better off, certain conditions follow necessarily. No creature equipped with circuitry to understand that it is immoral for you to hurt me could discover anything but that it is immoral for me to hurt you. As with numbers and the number sense, we would expect moral systems to evolve toward similar conclusions in different cultures or even different planets. And in fact the Golden Rule has been rediscovered many times: by the authors of Leviticus and the Mahabharata; by Hillel, Jesus, and Confucius; by the Stoic philosophers of the Roman Empire; by social contract theorists such as Hobbes, Rousseau, and Locke; and by moral philosophers such as Kant in his categorical imperative.13 Our moral sense may have evolved to mesh with an intrinsic logic of ethics rather than concocting it in our heads out of nothing.
But even if the Platonic existence of moral logic is too rich for your blood, you can still see morality as something more than a social convention or religious dogma. Whatever its ontological status may be, a moral sense is part of the standard equipment of the human mind. It’s the only mind we’ve got, and we have no choice but to take its intuitions seriously. If we are so constituted that we cannot help but think in moral terms (at least some of the time and toward some people), then morality is as real for us as if it were decreed by the Almighty or written into the cosmos. And so it is with other human values like love, truth, and beauty. Could we ever know whether they are really “out there” or whether we just think they are out there because the human brain makes it impossible not to think they are out there? And how bad would it be if they were inherent to the human way of thinking? Perhaps we should reflect on our condition as Kant did in his Critique of Practical Reason: “Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them: the starry heavens above and the moral law within.”
IN THE PAST four chapters I have shown why new ideas from the sciences of human nature do not undermine humane values. On the contrary, they present opportunities to sharpen our ethical reasoning and put those values on a firmer foundation. In a nutshell:
• It is a bad idea to say that discrimination is wrong only because the traits of all people are indistinguishable.
• It is a bad idea to say that violence and exploitation are wrong only because people are not naturally inclined to them.
• It is a bad idea to say that people are responsible for their actions only because the causes of those actions are mysterious.
• And it is a bad idea to say that our motives are meaningful in a personal sense only because they are inexplicable in a biological sense.
These are bad ideas because they make our values hostages to fortune, implying that someday factual discoveries could make them obsolete. And they are bad ideas because they conceal the downsides of denying human nature: persecution of the successful, intrusive social engineering, the writing off of suffering in other cultures, an incomprehension of the logic of justice, and the devaluing of human life on earth.
PART IV
KNOW THYSELF
Now that I have attempted to make the very idea of human nature respectable, it is time to say something about what it is and what difference it makes for our public and private lives. The chapters in Part IV present some current ideas about the design specs of the basic human faculties. These are not just topics in a psychology curriculum but have implications for many arenas of public discourse. Ideas about the contents of cognition—concepts, words, and images—shed light on the roots of prejudice, on the media, and on the arts. Ideas about the capacity for reason can enter into our policies of education and applications of technology. Ideas about social relations are relevant to the family, to sexuality, to social organization, and to crime. Ideas about the moral sense inform the way we evaluate political movements and how we trade off one value against another.
In each of these arenas, people always appeal to some conception of human nature, whether they acknowledge it or not. The problem is that the conceptions are often based on gut feelings, folk theories, and archaic versions of biology. My goal is to make these conceptions explicit, to suggest what is right and wrong about them, and to spell out some of the implications. Ideas about human nature cannot, on their own, resolve perplexing controversies or determine public policy. But without such ideas we are not playing with a full deck and are vulnerable to unnecessary befuddlement. As the biologist Richard Alexander has noted, “Evolution is surely most deterministic for those still unaware of it.”1
Chapter 12
In Touch with Reality
What a piece of work is a man!
How noble in reason!
How infinite in faculty!
In form, in moving, how express and admirable!
In action, how like an angel!
In apprehension, how like a god!
—William Shakespeare
THE STARTING POINT for acknowledging human nature is a sheer awe and humility in the face of the staggering complexity of its source, the brain. Organized by the three billion bases of our genome and shaped by hundreds of millions of years of evolution, the brain is a network of unimaginable intricacy: a hundred billion neurons linked by a hundred trillion connections, woven into a convoluted three-dimensional architecture. Humbling, too, is the complexity of what it does. Even the mundane talents we share with other primates—walking, grasping, recognizing—are solutions to engineering problems at or beyond the cutting edge of artificial intelligence. The talents that are human birthrights—speaking and understanding, using common sense, teaching children, inferring other people’s motives—will probably not be duplicated by machines in our lifetime, if ever. All this should serve as a counterweight to the image of the mind as formless raw material and to people as insignificant atoms making up the complex being we call “society.”
The human brain equips us to thrive in a world of objects, living things, and other people. Those entities have a large impact on our well-being, and one would expect the brain to be well suited to detecting them and their powers. Failing to recognize a steep precipice or a hungry panther or a jealous spouse can have significant negative consequences for biological fitness, to put it mildly. The fantastic complexity of the brain is there in part to register consequential facts about the world around us.
But this truism has been rejected by many sectors of modern intellectual life. According to the relativistic wisdom prevailing in much of academia today, reality is socially constructed by the use of language, stereotypes, and media images. The idea that people have access to facts about the world is naïve, say the proponents of social constructionism, science studies, cultural studies, critical theory, postmodernism, and deconstructionism. In their view, observations are always infected by theories, and theories are saturated with ideology and political doctrines, so anyone who claims to have the facts or know the truth is just trying to exert power over everyone else.
Relativism is entwined with the doctrine of the Blank Slate in two ways. One is that relativists have a penny-pinching theory of psychology in which the mind has no mechanisms designed to grasp reality; all it can do is passively download words, images, and stereotypes from the surrounding culture. The other is the relativists’ attitude toward science. Most scientists regard their work as an extension of our everyday ability to figure out what is out there and how things work. Telescopes and microscopes amplify the visual system; theories formalize our hunches about cause and effect; experiments refine our drive to gather evidence about events we cannot witness directly. Relativist movements agree that science is perception and cognition writ large, but they draw the opposite conclusion: that scientists, like laypeople, are unequipped to grasp an objective reality. Instead, their advocates say, “Western science is only one way of describing reality, nature, and the way things work—a very effective way, certainly, for the production of goods and profits, but unsatisfactory in most other respects. It is an imperialist arrogance which ignores the sciences and insights of most other cultures and times.”1 Nowhere is this more significant than in the scientific study of politically charged topics such as race, gender, violence, and social organization. Appealing to “facts” or “the truth” in connection with these topics is just a ruse, the relativists say, because there is no “truth” in the sense of an objective yardstick independent of cultural and political presuppositions.
Skepticism about the soundness of people’s mental faculties also determines whether one should respect ordinary people’s tastes and opinions (even those we don’t much like) or treat the people as dupes of an insidious commercial culture. According to relativist doctrines like “false consciousness,” “inauthentic preferences,” and “interiorized authority,” people may be mistaken about their own desires. If so, it would undermine the assumptions behind democracy, which gives ultimate authority to the preferences of the majority of a population, and the assumptions behind market economies, which treat people as the best judges of how they should allocate their own resources. Perhaps not coincidentally, it elevates the scholars and artists who analyze the use of language and images in society, because only they can unmask the ways in which such media mislead and corrupt.
This chapter is about the assumptions about cognition—in particular, concepts, words, and images—that underlie recent relativistic movements in intellectual life. The best way to introduce the argument is with examples from the study of perception, our most immediate connection to the world. They immediately show that the question of whether reality is socially constructed or directly available has not been properly framed. Neither alternative is correct.
Relativists have a point when they say that we don’t just open our eyes and apprehend reality, as if perception were a window through which the soul gazes at the world. The idea that we just see things as they are is called naïve realism, and it was refuted by skeptical philosophers thousands of years ago with the help of a simple phenomenon: visual illusions. Our visual systems can play tricks on us, and that is enough to prove they are gadgets, not pipelines to the truth. Here are two of my favorites. In Roger Shepard’s “Turning the Tables”2 (right), the two parallelograms are identical in size and shape. In Edward Adelson’s “Checker Shadow Illusion”3 (below) the light square in the middle of the shadow (B) is the same shade of gray as the dark squares outside the shadow (A):
But just because the world we know is a construct of our brain, that does not mean it is an arbitrary construct—a phantasm created by expectations or the social context. Our perceptual systems are designed to register aspects of the external world that were important to our survival, like the sizes, shapes, and materials of objects. They need a complex design to accomplish this feat because the retinal image is not a replica of the world. The projection of an object on the retina grows, shrinks, and warps as the object moves around; color and brightness fluctuate as the lighting changes from sun to clouds or from indoor to outdoor light. But somehow the brain solves these maddening problems. It works as if it were reasoning backwards from the retinal image to hypotheses about reality, using geometry, optics, probability theory, and assumptions about the world. Most of the time the system works: people don’t usually bump into trees or bite into rocks.
But occasionally the brain is fooled. The ground stretching away from our feet projects an image from the bottom to the center of our visual field. As a result, the brain often interprets down-up in the visual field as near-far in the world, especially when reinforced by other perspective cues such as occluded parts (like the hidden table legs). Objects stretching away from the viewer get foreshortened by projection, and the brain compensates for this, so we tend to see a given distance running up-and-down in the visual field as coming from a longer object than the same distance running left-to-right. And that makes us see the lengths and widths differently in the turned tables. By similar logic, objects in shadow reflect less light onto our retinas than objects in full illumination. Our brains compensate, making us see a given shade of gray as lighter when it is in shadow than when it is in sunshine. In each case we may see the lines and patches on the page incorrectly, but that is only because our visual systems are working very hard to see them as coming from a real world. Like a policeman framing a suspect, Shepard and Adelson have planted evidence that would lead a rational but unsuspecting observer to an incorrect conclusion. If we were in a world of ordinary 3-D objects that had projected those images onto our retinas, our perceptual experience would be accurate. Adelson explains: “As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view.”4
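The brightness compensation described above can be captured in a toy calculation (my illustration, not the book’s; the function name and numbers are hypothetical): treat perceived lightness as the light reaching the eye divided by the illumination the brain estimates for each region. Two patches sending identical luminance to the retina are then “seen” as different shades, exactly as in Adelson’s checkerboard.

```python
# A minimal sketch of "discounting the illuminant": the visual system
# estimates surface reflectance by dividing the luminance it measures
# by the illumination it infers for that region of the scene.

def perceived_reflectance(luminance, estimated_illumination):
    """Infer surface lightness by compensating for the light falling on it."""
    return luminance / estimated_illumination

# Two squares reflect the same amount of light to the eye...
luminance_a = 0.3  # dark square in full sun (square A)
luminance_b = 0.3  # light square in shadow (square B)

# ...but the brain estimates different illumination for each region.
sun = 1.0      # assumed illumination outside the shadow
shadow = 0.5   # assumed illumination inside the shadow

lightness_a = perceived_reflectance(luminance_a, sun)     # 0.3: seen as dark
lightness_b = perceived_reflectance(luminance_b, shadow)  # 0.6: seen as light

print(lightness_a, lightness_b)
```

On this sketch the “illusion” is the system behaving correctly: given its estimate of the shadow, inferring a lighter surface for B is the right answer about the world, even though the two patches on the page are identical.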
It’s not that expectations from past experience are irrelevant to perception. But their influence is to make our perceptual systems more accurate, not more arbitrary. In the two words below, we perceive the same shape as an “H” in the first word and as an “A” in the second:5
We see the shapes that way because experience tells us—correctly—that the odds are high that there really is an “H” in the middle of the first word and an “A” in the middle of the second, even if that is not true in an atypical case. The mechanisms of perception go to a lot of trouble to ensure that what we see corresponds to what is usually out there.
So the demonstrations that refute naïve realism most decisively also refute the idea that the mind is disconnected from reality. There is a third alternative: that the brain evolved fallible yet intelligent mechanisms that work to keep us in touch with aspects of reality that were relevant to the survival and reproduction of our ancestors. And that is true not just of our perceptual faculties but of our cognitive faculties. The fact that our cognitive faculties (like our perceptual faculties) are attuned to the real world is most obvious from their response to illusions: they recognize the possibility of a breach with reality and find a way to get at the truth behind the false impression. When we see an oar that appears to be severed at the water’s surface, we know how to tell whether it really is severed or just looks that way: we can palpate the oar, slide a straight object along it, or pull on it to see if the submerged part gets left behind. The concept of truth and reality behind such tests appears to be universal. People in all cultures distinguish truth from falsity and inner mental life from overt reality, and try to deduce the presence of unobservable objects from the perceptible clues they leave behind.6
VISUAL PERCEPTION IS the most piquant form of knowledge of the world, but relativists are less concerned with how we see objects than with how we categorize them: how we sort our experiences into conceptual categories like birds, tools, and people. The seemingly innocuous suggestion that the categories of the mind correspond to something in reality became a contentious idea in the twentieth century because some categories—stereotypes of race, gender, ethnicity, and sexual orientation—can be harmful when they are used to discriminate or oppress.
The word stereotype originally referred to a kind of printing plate. Its current sense as a pejorative and inaccurate image standing for a category of people was introduced in 1922 by the journalist Walter Lippmann. Lippmann was an important public intellectual who, among other things, helped to found The New Republic, influenced Woodrow Wilson’s policies at the end of World War I, and wrote some of the first attacks on IQ testing. In his book Public Opinion, Lippmann fretted about the difficulty of achieving true democracy in an age in which ordinary people could no longer judge public issues rationally because they got their information in what we today call sound bites. As part of this argument, Lippmann proposed that ordinary people’s concepts of social groups were stereotypes: mental pictures that are incomplete, biased, insensitive to variation, and resistant to disconfirming information.
Lippmann had an immediate influence on social science (though the subtleties and qualifications of his original argument were forgotten). Psychologists gave people lists of ethnic groups and lists of traits and asked them to pair them up. Sure enough, people linked Jews with “shrewd” and “mercenary,” Germans with “efficient” and “nationalistic,” Negroes with “superstitious” and “happy-go-lucky,” and so on.7 Such generalizations are pernicious when applied to individuals, and though they are still lamentably common in much of the world, they are now actively avoided by educated people and by mainstream public figures.
By the 1970s, many thinkers were not content to note that stereotypes about categories of people can be inaccurate. They began to insist that the categories themselves don’t exist other than in our stereotypes. An effective way to fight racism, sexism, and other kinds of prejudice, in this view, is to deny that conceptual categories about people have any claim to objective reality. It would be impossible to believe that homosexuals are effeminate, blacks superstitious, and women passive if there were no such things as categories of homosexuals, blacks, or women to begin with. For example, the philosopher Richard Rorty has written, “‘The homosexual,’ ‘the Negro,’ and ‘the female’ are best seen not as inevitable classifications of human beings but rather as inventions that have done more harm than good.”8
The Blank Slate: The Modern Denial of Human Nature Page 29