by Morton Hunt
Festinger went on to develop and publish his theory of cognitive dissonance in 1957. It immediately became the central problem of social psychology and remained the principal topic of experimental research for over fifteen years. In 1959 he and a colleague, J. Merrill Carlsmith, conducted what is usually cited as the classic cognitive dissonance experiment. They artfully deceived their volunteer subjects about the purpose of the study, since, had the subjects known that the researchers wanted to see whether they would change their minds about some issue to minimize cognitive dissonance, they might well have been too embarrassed to do so.
Festinger and Carlsmith had their undergraduate male subjects perform an extremely tedious task: they had to put a dozen spools into a tray, remove them, put them back, and repeat the process for half an hour. Then they had to turn each of forty-eight pegs in a board a quarter turn clockwise, then another quarter turn, and so on, again for half an hour. After each subject had finished, one of the researchers told him that the purpose of the experiment was to find out whether people’s expectation of how interesting a task is would affect how well they performed it, and that he had been in the “no-expectation group” but others would be told that the task was enjoyable. Unfortunately, the researcher went on, the assistant who was supposed to tell that to the next subject had just called in to say he couldn’t make it. The researcher said he needed someone to take the assistant’s place and asked the subject to help out. Some subjects were offered $1 to do so, others $20.
Nearly all of them agreed to tell what was obviously a lie to the next subject (who, in reality, was a confederate). After they had done so, the subjects were asked how enjoyable they themselves had found the task. Since it had unquestionably been boring, lying about it to someone else created a condition of cognitive dissonance (“I lied to someone else. But I’m not that kind of person”). The crucial question was whether the size of the payment they had received led them to reduce dissonance by deciding that the task had really been enjoyable.
Intuitively, one might expect that those who got $20—a substantial sum in 1959—would be more likely to change their opinion of the task than those who got a dollar. But Festinger and Carlsmith predicted the opposite. The subjects who got $20 would have a solid reward to justify their lying, but those who got a dollar would have so little justification for lying that they would still feel dissonance, and would relieve it by convincing themselves that the task had been interesting and they had not really lied. Which is exactly what the results showed.*23
Festinger and Carlsmith were exhilarated; social psychologists find it particularly exciting to discover something that is not obvious or that contradicts usual impressions. As Schachter has often told his students, it’s a waste of time to study bubbe psychology; that’s the kind that, when you tell your grandmother—bubbe, in Yiddish—what you found, she says, “So what else is new? They pay you for this?”24
Cognitive dissonance theory stirred up a good deal of hostile criticism, which Festinger scathingly dismissed as “garbage,” and attributed to the fact that the theory presented a “not very idealistic” image of humankind.25 Whatever the motives of the critics, a flood of experiments showed cognitive dissonance to be a robust (consistent) finding and, moreover, a fertile theory. Reminiscing, the eminent social psychologist Elliot Aronson said, “All we had to do was sit around and we could generate ten good hypotheses in an evening… the kinds of hypotheses that no one would even have dreamed of a few years earlier.”26 The theory also explained a number of kinds of social behavior that could not be accounted for within behaviorist theory. Here are a few examples, all verified by experiments:27
—The harder it is to gain membership in a group (as, for instance, when there is grueling screening or hazing), the more highly the group is valued by a person who is accepted. We convince ourselves we love what has caused us pain in order to feel that the pain was worthwhile.
—When people behave in ways they are likely to see as either stupid or immoral, they change their attitudes so as to believe that their behavior is sensible and justified. Smokers, for instance, say that the evidence about smoking and cancer is inconclusive; students who cheat say that everyone else cheats and that they therefore have to cheat in order not to be at a disadvantage.
—People who hold opposing views are apt to interpret the same news reports or factual material about the disputed subject quite differently; each sees and remembers what supports his views but glosses over and forgets what would create dissonance.
—When people who think of themselves as reasonably humane are in a situation where they hurt innocent others, as soldiers often harm civilians in the course of combat, they reduce the resulting dissonance by derogating their victims (“Those SOBs are helping the enemy. They’d knife you in the back if they could”). When people benefit from social inequities that cause others to suffer, they often tell themselves that the sufferers aren’t capable of anything better, are content with their way of life, and are dirty, lazy, and immoral.
Finally, one case of a “natural experiment” that illustrates the human tendency to reduce cognitive dissonance by rationalization:
—After a 1983 California earthquake, the city of Santa Cruz, in compliance with a new California law, commissioned Dave Steeves, a well-regarded engineer, to assess how local buildings would fare in a major earthquake. Steeves identified 175 buildings that would suffer severe damage, many of them in the prime downtown shopping area. The city council, aghast at the report and what it implied about the work that would have to be done, dismissed his findings and voted unanimously to wait for clarification of the state law. Steeves was called an alarmist and his report a threat to the well-being of the town, and no further action was taken. On October 17, 1989, an earthquake of magnitude 7.1 hit just outside Santa Cruz. Three hundred homes were destroyed and five thousand seriously damaged in Santa Cruz County; the downtown area was reduced to ruins; five people were killed and two thousand injured.
Because of its explanatory power, cognitive dissonance theory easily survived all attacks. Twenty-five years after Festinger first advanced it and sixteen years after he left social psychology to study archaeology, a survey of social psychologists found that 79 percent considered him the person who had contributed most to their field.28 Today, a generation later, Festinger’s name and fame have dimmed, but cognitive dissonance remains a bedrock principle of social psychological theory. One criticism of cognitive dissonance research, however, has been difficult to rebut. The researchers almost always gulled the volunteers into doing things they would not ordinarily do (such as lying for money), subjected them without their consent to strenuous or embarrassing experiences, or revealed to them aspects of themselves that damaged their self-esteem. The investigators “debriefed” subjects after the experiment, explaining the real purpose, the reason deception had been necessary, and the benefit to science of their participation. This was intended to restore their sense of well-being, but critics have insisted that it is unethical to subject other people to such experiences without their knowledge and consent.29
The Psychology of Imprisonment
These ethical problems were not peculiar to dissonance studies; they existed in more severe form in other kinds of sociopsychological research. A famous case in point is an experiment conducted in 1971 by Professor Philip G. Zimbardo, a social psychologist at Stanford University, and three colleagues.30 To study the social psychology of imprisonment, they enlisted undergraduate men as volunteers in a simulation of prison life, in which each would play the part of a guard or a prisoner. All volunteers were interviewed and given personality tests; twenty-one middle-class whites were selected after being rated emotionally stable, mature, and law-abiding. By the flip of a coin, ten were designated as prisoners, eleven as guards, for the duration of a two-week experiment.
The “prisoners” were “arrested” by police one quiet Sunday morning, handcuffed, booked at the police station, taken to the “prison” (a set of cells built in the basement of the Stanford psychology building), and there stripped, searched, deloused, and issued uniforms. The guards were supplied with billy clubs, handcuffs, whistles, and keys to the cells; they were told that their job was to maintain “law and order” in the prison and that they could devise their own methods of prisoner control. The warden (a colleague of Zimbardo’s) and guards drew up a list of sixteen rules the prisoners had to obey: they were to be silent at meals, rest periods, and after lights out; they were to eat at mealtimes but at no other time; they were to address one another by their ID numbers and any guard as Mr. Correctional Officer, and so on. Violation of any rule could result in punishment.
The relations between guards and prisoners quickly assumed a classic pattern: the guards began to think of the prisoners as inferior and dangerous, the prisoners to view the guards as bullies and sadists. As one guard reported:
I was surprised at myself…I made them call each other names and clean out the toilets with their bare hands. I practically considered the prisoners cattle, and I kept thinking I have to watch out for them in case they try something.
In a few days the prisoners organized a rebellion. They tore off their ID numbers and barricaded themselves inside their cells by shoving beds against the doors. The guards sprayed them with a fire extinguisher to drive them back from the doors, burst into their cells, stripped them, took away their beds, and in general thoroughly intimidated them.
The guards, from that point on, kept making up additional rules, waking the prisoners frequently at night for head counts, forcing them to perform tedious and useless tasks, and punishing them for “infractions.” The prisoners, humiliated, became obsessed by the unfairness of their treatment. Some grew disturbed, one so much so that by the fifth day the experimenters began to consider releasing him before the end of the experiment.
The rapid development of sadism in the guards was exemplified by the comments of one of them who, before the experiment, said that he was a pacifist, was nonaggressive, and could not imagine himself maltreating another person. By the fifth day he noted in his diary:
I have singled him [one prisoner] out for special abuse both because he begs for it and because I simply don’t like him… The new prisoner (416) refuses to eat his sausage…I decided to force feed him, but he wouldn’t eat. I let the food slide down his face. I didn’t believe it was me doing it. I hated myself for making him eat but I hated him more for not eating.
Zimbardo and his colleagues had not expected so rapid a transformation in either group of volunteers and later wrote in a report:
What was most surprising about the outcome of this simulated prison experience was the ease with which sadistic behavior could be elicited from quite normal young men, and the contagious spread of emotional pathology among those carefully selected precisely for their emotional stability.
On the sixth day the researchers abruptly terminated the experiment for the good of all concerned. They felt, however, that it had been valuable; it had shown how easily “normal, healthy, educated young men could be so radically transformed under the institutional pressures of a ‘prison environment.’ ”
That finding may have been important, but in the eyes of many ethicists the experiment was grossly unethical. It had imposed on its volunteers physical and emotional stresses that they had not anticipated or agreed to undergo. In so doing, it had violated the principle, affirmed by the New York Court of Appeals in 1914, that “every human being of adult years and sound mind has a right to determine what shall be done with his own body.”31 Because of the ethical problems, the prison experiment has not been replicated; it is a closed case.*
Even this was bland in comparison with another experiment, also of major value, and also now a closed case. Let us open the file and see what was learned, and by what extraordinary means.
Obedience
In the aftermath of the Holocaust, many behavioral scientists sought to understand how so many normal, civilized Germans could have behaved toward other human beings with such incomprehensible savagery. A massive study published in 1950, carried out by an interdisciplinary team with a psychoanalytic orientation, ascribed prejudice and ethnic hatred to the “authoritarian personality,” an outgrowth of particular kinds of parenting and childhood experience.32 But social psychologists found this too general an explanation; they thought the answer more likely to involve a special social situation that caused ordinary people to commit out-of-character atrocities.
It was to explore this possibility that an advertisement in a New Haven newspaper in the early 1960s called for volunteers for a study of memory and learning at Yale University.33 Any adult male not in high school or college would be eligible, and participants would be paid $4 (roughly the equivalent of $25 today) an hour plus carfare.
Forty men ranging from twenty to fifty years old were selected and given separate appointments. Each was met at an impressive laboratory by a small, trim young man in a gray lab coat. Arriving at the same time was another “volunteer,” a pleasant middle-aged man of Irish-American appearance. The man in the lab coat, the ostensible researcher, was actually a thirty-one-year-old high school biology teacher, and the middle-aged man was an accountant by profession. Both were accomplices of the social psychologist conducting the experiment, Stanley Milgram of Yale, and would act the parts he had scripted.
The researcher explained to the two men, the real and false volunteers, that he was studying the effect of punishment on learning. One of them would be the “teacher” and the other the “learner” in an experiment in which the teacher would give the learner an electric shock whenever he made an error. The two volunteers then drew slips of paper to see who would be which. The one selected by the “naïve” volunteer read “Teacher.” (To ensure this result, both slips read “Teacher,” but the accomplice discarded his without showing it.)
The researcher then led the two subjects into a small room, where the learner was seated at a table, his arms strapped down and electrodes attached to his wrists. He said he hoped the shocks wouldn’t be too severe; he had a heart condition. The teacher was then taken into an adjoining room from which he could speak to and hear the learner but not see him. On a table was a large shiny metal box said to be a shock generator. On the front was a row of thirty switches, each marked with the voltage it delivered (ranging from 15 to 450) plus descriptive labels: “Slight Shock,” “Moderate Shock,” and so on, up to “Danger: Severe Shock,” and finally two switches marked simply “XXX.”
The teacher’s role, the researcher explained, was to read a list of word pairs (such as blue, sky and dog, cat) to the learner, then test his memory by reading the first word of one pair and four possible second words, one of which was correct. The learner would indicate his choice by pushing a button lighting one of four bulbs in front of the teacher. Whenever he gave a wrong answer, the teacher was to depress a switch giving him a shock, starting at the lowest level. Each time the learner made an error, the teacher was to give him the next stronger shock.
At first the experiment proceeded easily and uneventfully; the learner would give some right answers and some wrong ones, the teacher would administer a mild shock after each wrong answer, and continue. But as the learner made more mistakes and the shocks became greater in intensity—the apparatus was fake, of course, and no shocks were delivered—the situation grew unpleasant. At 75 volts the learner grunted audibly; at 120 he called out that the shocks were becoming painful; at 150 volts he shouted, “Get me out of here. I refuse to go on!” Whenever the teacher wavered, the researcher, standing beside him, said, “Please continue.” At 180 volts the learner called, “I can’t stand the pain!” and at 270 he howled. When the teacher hesitated or balked, the researcher said, “The experiment requires that you continue.” Later, when the learner was banging on the wall, or still later, when he was screaming, the researcher said sternly, “It is absolutely essential that you continue.” Beyond 330, when there was only silence from the next room—to be interpreted as equivalent to an incorrect answer—the researcher said, “You have no other choice; you must go on.”
Astonishingly—Milgram himself was amazed—63 percent of the teachers did go on, all the way. But not because they were sadists who enjoyed the agony they thought they were inflicting (standard personality tests showed no difference between the fully obedient subjects and those who at some point refused to continue); on the contrary, many of them suffered acutely while obeying the researcher’s orders. As Milgram reported:
In a large number of cases the degree of tension reached extremes that are rarely seen in sociopsychological laboratory studies. Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh…A mature and initially poised businessman enter[ed] the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck who was rapidly approaching a point of nervous collapse… yet he continued to respond to every word of the experimenter, and obeyed to the end.34
Milgram did not, alas, report any symptoms he himself may have had while watching his teachers suffer. A spirited, feisty little man, he gave no indication in his otherwise vivid account that he was ever distressed by his subjects’ misery.
His interpretation of the results was that the situation, playing on cultural expectations, produced the phenomenon of obedience to authority. The volunteers entered the experiment in the role of cooperative and willing subjects, and the researcher played the part of the authority. In our society and many others, children are taught to obey authority and not to judge what the person in authority tells them to do. In the experiment, the teachers felt obliged to carry out orders; they could inflict pain and harm on an innocent human being because they felt that the researcher, not they themselves, was responsible for their actions.