Conformity

by Cass R. Sunstein


  Most people predict that in such studies, more than 95 percent of subjects would refuse to proceed to the end of the series of shocks. When people are asked to make predictions about what people would do, the expected breakoff point is “very strong shock,”64 or 195 volts. But in Milgram’s initial experiments, every one of the forty subjects went beyond 300 volts. The mean maximum shock level was 405 volts, and a strong majority—twenty-six out of forty, or 65 percent—went to the full 450-volt shock, two steps beyond “danger: severe shock.”65

  Later variations on the original experiments produced even more remarkable results. In those experiments, the victim expresses a growing level of pain and distress as the voltage increases.66 Small grunts are heard from 75 volts to 105 volts, and at 120 volts the victim shouts to the experimenter that the shocks are starting to become painful. At 150 volts, the victim cries out, “Experimenter, get me out of here! I won’t be in the experiment any more! I refuse to go on!”67 At 180 volts, the victim says, “I can’t stand the pain.” At 270 volts he responds with an agonized scream. At 300 volts he shouts that he will no longer answer the questions. At 315 volts he screams violently.

  At 330 volts and after, he is not heard. In this version of the experiment, there is no significant change in Milgram’s results: twenty-five of forty participants went to the maximum level, and the mean maximum level was over 360 volts. In a somewhat gruesome variation, the victim says, before the experiment begins, that he has a heart condition, and his pleas to discontinue the experiment include repeated references to the fact that his heart is “bothering” him as the shocks continue.68 This too did not lead subjects to behave differently.69 Notably, Milgram’s basic findings were generally replicated in 2009, with only slightly lower obedience rates than Milgram found forty-five years earlier; men and women did not differ in their rates of obedience.70

  Milgram himself explains his results as showing obedience to authority, in a way reminiscent of the behavior of many Germans under Nazi rule, and indeed Milgram was partly motivated by the goal of understanding how the Holocaust could have happened.71 Milgram concluded that ordinary people will follow orders even if the result is to produce great suffering in innocent others. Undoubtedly simple obedience is part of the picture. But there is another explanation.

  Subjects who are invited to an academic setting, to participate in an experiment run by an apparently experienced scientist, might well defer to the experimenter’s instructions in the belief that the experimenter is likely to know what should be done, all things considered. If the experimenter asks subjects to proceed, most subjects might believe, not unreasonably, that the harm apparently done to the victims is not serious and that the experiment actually has significant benefits for society. On this account, the experimenter has special expertise. And if Milgram’s subjects believed something like this, they were actually correct!

  If this account is right, then the participants in the Milgram experiments might be seen as similar to those in the Asch experiments, with the experimenter having a greatly amplified voice. Many of Asch’s subjects were deferring to the informational signal given by unanimous others; Milgram’s subjects were doing something similar. An expert or an authority can be a lot like unanimous others. And on this account, some or many of the subjects might have put their moral qualms to one side, not because of blind obedience but because of a judgment that those qualms are likely to have been ill founded. That judgment might be based in turn on a belief that the experimenter is not likely to ask subjects to proceed if the experiment is truly harmful or objectionable.

  In short, Milgram’s subjects might be responding to an especially loud informational signal—the sort of signal sent by a specialist or a crowd. And on this view, Milgram was wrong to draw an analogy between the behavior of his subjects and the behavior of Germans under Hitler. His subjects were not simply obeying a leader but responding to someone whose credentials and good faith they thought they could trust. Of course it is not simple, in theory or in practice, to distinguish between obeying a leader and accepting the beliefs of an expert. The only suggestion is that the obedience of subjects was hardly baseless; it involved a setting in which subjects had some reason to think that the experimenter was not asking them to produce serious physical harm out of sadism or for no reason at all.

  I do not argue that this explanation provides a full account of Milgram’s contested findings. But a subsequent study, exploring the grounds of obedience, offers support for this interpretation.72 In that study, a large number of subjects watched the tapes of the Milgram experiments and were asked to rank possible explanations for compliance with the experimenter’s request. Deference to expertise was the highest-ranked option. This is not definitive, of course, but an illuminating variation on the basic experiment, by Milgram himself, provides further support.73 In this variation, the subject is among three people asked to administer the shocks, and two of those people, actually confederates, refuse to go past a certain level (150 volts for one and 210 volts for the other). In such cases, the overwhelming majority of subjects—92.5 percent—defy the experimenter.74 This was by far the most effective of Milgram’s many variations on his basic study, all designed to reduce the level of obedience.75

  Why was the defiance of peers so potent? I suggest that the subjects, in this variation, were very much like those subjects who had at least one supportive confederate in Asch’s experiments. One such confederate led Asch’s subjects to say what they saw; so too, peers who acted on the basis of conscience freed Milgram’s subjects to give less weight to the instructions of the experimenter and to follow their consciences as well. Milgram himself established, in yet another variation, that without any advice from the experimenter and without any external influences at all, the subject’s moral judgment was clear: do not administer shocks above a very low level.76

  Indeed, that moral judgment had nearly the same degree of clarity, to Milgram’s subjects, as the clear and correct factual judgments made by Asch’s subjects when they were deciding about the length of lines on their own (and hence not confronted with Asch’s confederates). In Milgram’s experiments, it was the experimenter’s own position—that the shocks should continue and that no permanent damage would be done—that had a high degree of influence, akin to the influence of Asch’s unanimous confederates. But when the subject’s peers rejected the position of Milgram’s experimenter, the informational content of that position was effectively negated by the information presented by the refusals of peers. Hence subjects could rely on their own moral judgments or even follow the moral signals indicated by the peers’ refusals.

  Then and now, the best interpretation of Milgram’s findings is less than clear, but the general lessons are not obscure. When the morality of a situation is not evident, people are likely to be influenced by someone who seems to be an expert, able to weigh the concerns and risks involved. But when the expert’s questionable moral judgment is countered by reasonable people who bring their own moral judgments to bear, people become less likely to follow experts. They are far more likely to do as their conscience dictates.

  As we shall see, compliance with law has similar features. A legal pronouncement about what should be done will often operate in the same way as an expert judgment about what should be done. It follows that many people will follow the law even when it is hardly ever enforced—and even if they would otherwise be inclined to question the judgment that the law embodies. But if peers are willing to violate the law, violations may become widespread, especially but not only if people think that the law is enjoining them from doing something that they wish to do, either for selfish reasons or for reasons of principle. In this way, Milgram’s experiments offer some lessons about when law will be ineffective unless vigorously enforced—and also about the preconditions for civil disobedience.

  Chapter 2

  Cascades

  I now examine how informational and reputational influences can produce social cascades—large-scale social movements in which many people end up thinking something, or doing something, because of the beliefs or actions of a few early movers. As in the case of conformity, participation in cascades is fueled by social influences. But where the idea of conformity helps to explain social stability, an understanding of cascades helps to explain social and legal movements, which can be stunningly rapid and also produce situations that are highly unstable. To get ahead of the story, the popularity of the Mona Lisa, William Blake, Jane Austen, Taylor Swift, and the Harry Potter novels is reasonably seen as the product of a cascade. The same is true for the success of Barack Obama, Donald Trump, and Brexit.

  As preliminary evidence, consider a brilliant study of music downloads by the sociologist Duncan Watts and his coauthors.1 Here’s how the study worked. A control group was created in which people could hear and download one or more of seventy-two songs by new bands. In the control group, intrinsic merit was everything. Individuals were not told anything about what anyone else had downloaded or liked. They were left to make their own independent judgments about which songs they liked. To test the effect of social influences, Watts and his coauthors also created eight other subgroups. In each of these subgroups, people could see how many people had previously downloaded individual songs in their particular subgroups.

  In short, Watts and his coauthors were exploring the relationship between social influences and consumer choices. What do you think happened? Would it make a small or a big difference, in terms of ultimate numbers of downloads, if people could see the behavior of others? The answer is that it made a huge difference. While the worst songs (as established by the control group) never ended up at the very top, and the best songs never ended up at the very bottom, essentially anything else could happen. If a song benefited from a burst of early downloads, it could do exceedingly well. If it did not get that benefit, almost any song could be a failure. As Watts and his coauthors later demonstrated, you can manipulate outcomes pretty easily, because popularity is a self-fulfilling prophecy.2 If a site shows (falsely) that a song is getting downloaded a lot, that song can get a tremendous boost and eventually become a hit. John F. Kennedy’s father, Joe Kennedy, was said to have purchased tens of thousands of early copies of his son’s book, Profiles in Courage. The book became a bestseller.

  With respect to the popularity of songs, Watts and his coauthors were exploring the effects of informational cascades. Their experiment showed that early popularity can have long-term effects, because people learn from what other people do and seem to like. As people learn from early popularity, they can make something into a huge hit, even if the same song would do poorly in another world in which the early listeners were unenthusiastic.
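  The dynamic can be made concrete with a minimal sketch: a toy market in which every song has identical intrinsic merit, and each listener picks a song with probability proportional to that merit plus the song’s current download count. This is a simplified rich-get-richer model, not the design of the actual study; the names and parameters (simulate_charts, n_listeners, and so on) are illustrative assumptions.

import random

def simulate_charts(n_songs=8, n_listeners=5000, seed=0):
    # Toy market: every song has the same intrinsic merit (weight 1);
    # each listener picks a song with probability proportional to that
    # merit plus the song's current download count (rich-get-richer).
    rng = random.Random(seed)
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        weights = [1 + d for d in downloads]
        song = rng.choices(range(n_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

for seed in (1, 2, 3):
    charts = simulate_charts(seed=seed)
    winner = max(range(len(charts)), key=charts.__getitem__)
    print(f"history {seed}: song {winner} wins with {charts[winner]} downloads")

Under this model, different random early histories crown different winners from identical songs—the lock-in that made the charts in the social-influence worlds so unpredictable.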

  Cascades occur for judgments about facts and values as well as tastes. They operate within private and public institutions—small companies, large ones, the Catholic Church, labor unions, local governments, and national governments. And when people have affective connections with one another, the likelihood of cascades increases. In the area of social risks, cascades are especially common, with people coming to fear certain products and processes not because of private knowledge but because of the apparent fears of others.3 The system of legal precedent also produces cascades, as early decisions lead later courts to a certain result, and eventually most or all courts come into line, not because of independent judgments but because of a decision to follow the apparently informed decisions of others.4 The sheer level of agreement will be misleading if most courts have been influenced, even decisively influenced, by their predecessors, especially in highly technical areas.

  By themselves, cascades are neither good nor bad. It is possible that the underlying processes will lead people to sound decisions about songs, cell phones, laptops, risks, morality, or law. The problem, a serious one, is that people may well converge, through the same processes, on erroneous or insufficiently justified outcomes. But to say this is to get ahead of the story; let us begin with the mechanics.

  Informational Cascades: The Basic Phenomenon

  In an informational cascade, people cease relying, at a certain point, on their private information or opinions. They decide instead on the basis of the signals conveyed by others. Once this happens, the subsequent statements or actions of few or many others add no new information. They are just following their predecessors. It follows that the behavior of the first few actors can, in theory, produce similar behavior from countless followers. A particular problem arises if people think the large number of individuals who say or do something are acting on independent knowledge; this can make it very hard to stop the cascade. Because so many people have done or said something—a politician is great, a product is dangerous, or someone is a criminal—people think to themselves, How can they all be wrong? The reality is that they can be, if they are mostly reacting to what others have said or done, and so are amplifying the volume of a signal by which they have themselves been influenced.

  Here is a highly stylized illustration. Suppose that doctors are deciding whether to prescribe hormone therapy for menopausal women. If hormone therapy creates significant risks of heart disease, its net value, let us assume, is negative; if it does not create such risks, its net value is positive.5 Let us also assume that the doctors are in a temporal queue, and all doctors know their place on that queue. From their own experiences, each doctor has some private information about what should be done. But each doctor also cares, rationally, about the judgments of others. Anderson is the first to decide, and prescribes hormone therapy if his judgment is low risk but declines if his judgment is high risk. Suppose that Anderson prescribes. Barber now knows that Anderson’s judgment was low risk and that she too should certainly prescribe hormone therapy if she makes that independent judgment. But if her independent judgment is that the risk is high, she would—if she trusts Anderson no more and no less than she trusts herself—be indifferent about whether to prescribe and might simply flip a coin. Suppose that she really is not sure, and so she follows Anderson’s judgment.

  Now turn to a third doctor, Carlton. Suppose that both Anderson and Barber have prescribed hormone therapy but that Carlton’s own information suggests that the risk is high. At least if he is not confident, Carlton might well ignore what he knows and prescribe the therapy. After all, both Anderson and Barber apparently saw a low risk, and unless Carlton thinks his own information is better than theirs, he should follow their lead. If he does, Carlton is in a cascade. To the extent that subsequent doctors know what others have done, and unless they too are confident, they will do exactly what Carlton did: prescribe hormone therapy regardless of their private information. “Since opposing information remains hidden, even a mistaken cascade lasts forever. An early preponderance toward either adoption or rejection, which may have occurred by mere coincidence or for trivial reasons, can feed upon itself.”6
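  The queue can be made concrete with a simplified counting model in the spirit of the economics literature on cascades. In this sketch, each pre-cascade decision is read by later doctors as if it revealed the decider’s private signal—an idealization—and a cascade locks in once the revealed signals outnumber any single contrary signal by two. The names and parameters (run_queue, p_correct, and so on) are hypothetical.

import random

def run_queue(n_doctors=10, p_correct=0.7, truly_low_risk=False, seed=None):
    # Each doctor draws a private signal that is correct with probability
    # p_correct, observes every earlier decision, and prescribes only if
    # the evidence inferred from predecessors plus the signal favors low risk.
    rng = random.Random(seed)
    inferred = 0          # net "looks low risk" signals revealed so far
    in_cascade = False
    decisions = []
    for _ in range(n_doctors):
        correct = rng.random() < p_correct
        signal = 1 if correct == truly_low_risk else -1  # +1 means "looks low risk"
        if in_cascade:
            decision = decisions[-1]  # imitate; the action reveals nothing new
        else:
            total = inferred + signal
            if total == 0:
                decision = rng.choice(["prescribe", "decline"])  # indifferent: coin flip
            else:
                decision = "prescribe" if total > 0 else "decline"
            inferred += 1 if decision == "prescribe" else -1
            if abs(inferred) >= 2:
                in_cascade = True  # one contrary signal can no longer flip the sign
        decisions.append(decision)
    return decisions

# Even when the therapy is truly high risk, two early "looks low risk"
# draws can lock every later doctor into prescribing.
print(run_queue(seed=7))

In this model, Anderson’s and Barber’s prescriptions put Carlton and everyone after him in a cascade, exactly as in the passage above: their actions stop revealing what they privately know.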

  Notice that a serious problem here stems from the fact that for those in a cascade, actions do not disclose privately held information. In the example just given, doctors’ actions will not reflect the overall knowledge of the health consequences of hormone therapy—even if the information held by individual doctors, if actually revealed and aggregated, would give a quite accurate picture of the situation. The reason for the problem is that individual doctors are following the lead of those who came before.

  As noted, this problem is aggravated if subsequent doctors overestimate the extent to which their predecessors relied on private information and did not merely follow those who came before. If this is so, subsequent doctors might fail to rely on, and fail to reveal, private information that actually exceeds the information collectively held by those who started the cascade. As a result, the medical profession generally will lack information that it needs to have. Patients will suffer and possibly die. Importantly, participants in cascades act rationally in suppressing their private information, whose disclosure would benefit the group more than the individual who has it. The failure to disclose private information therefore presents a free-rider problem. To overcome that problem, some kind of reform seems to be necessary; it might involve changing institutional arrangements.

  Of course, cascades do not always develop, and they usually do not last forever. Doctors used to believe in the “humors” (four distinct bodily fluids) and to think that a deficiency in any one of them had harmful effects on health. They do not believe that now. Often people have, or think that they have, enough private information to reject the accumulated wisdom of others. Medical specialists often fall in this category. When cascades develop, they might be broken by corrective information, as has apparently happened in the case of hormone replacement therapy itself.7 In the domain of science, peer-reviewed work provides a valuable safeguard.

  But even among specialists and indeed doctors, cascades are common. “Most doctors are not at the cutting edge of research; their inevitable reliance upon what colleagues have done and are doing leads to numerous surgical fads and treatment-caused illnesses.”8 Thus an article in the prestigious New England Journal of Medicine explores “bandwagon diseases” in which doctors act like “lemmings, episodically and with a blind infectious enthusiasm pushing certain diseases and treatments primarily because everyone else is doing the same.”9 Some medical practices, including tonsillectomy and perhaps prostate-specific antigen (PSA) testing, “seem to have been adopted initially based on weak information,” and extreme differences in tonsillectomy frequencies (and other procedures) provide good evidence that cascades are at work.10 And once several doctors join the cascade, it is liable to spread. There is a link here with Muzafer Sherif’s experiments, showing the development of divergent but entrenched norms, based on group processes in areas in which individuals lack authoritative information. In fact, prescriptions of hormone replacement therapy were fueled by cascade-like processes.11

 
