It is also true that some people have far more influence than others, simply because the decisions of those people convey more information. We are especially likely to follow those who are confident (“the confidence heuristic”), who have special expertise, who seem most like us, who fare best, or whom we otherwise have reason to trust. It is worth underlining the phrase “most like us”; for better or for worse, those are the people whose beliefs often have the largest impact on our own.
The second influence is the pervasive human desire to have and to retain the good opinion of others. If a number of people seem to believe something, there is reason not to disagree with them, at least not in public. The desire to maintain the good opinion of others breeds conformity and squelches dissent, especially but not only in groups connected by bonds of loyalty and affection. The result can be to prevent learning, entrench falsehoods, increase dogmatism, and impair group performance. In the highest reaches of government—including the White House—this can be a serious problem. We shall see that close-knit groups, discouraging conflict and disagreement, often do badly for that very reason. In any case, much of human behavior is a product of social influences. For example, employees are far more likely to file suit if members of the same work group have also done so;5 teenage girls who see that other teenagers are having children are more likely to become pregnant themselves;6 the perceived behavior of others has a large effect on the level of violent crime;7 broadcasters mimic one another, producing otherwise inexplicable fads in programming;8 and lower courts sometimes do the same, especially in highly technical areas, and hence judicial mistakes may never be corrected.9
We should not lament social influences or wish them away. Much of the time, people do better when they take close account of what others do. Some of the time, we even do best to follow others blindly. But social influences also diminish the total level of information within any group, and they threaten, much of the time, to lead individuals and institutions in the wrong directions. Dissent can be an important corrective; many groups and institutions have too little of it.10
As we shall see, conformists are free riders, whereas dissenters often confer benefits on others. It is tempting to free ride. As we shall also see, social pressures are likely to lead groups of like-minded people to extreme positions. When groups become caught up in hatred and violence, it is rarely because of economic deprivation11 or primordial suspicions;12 it is far more often a product of the informational and reputational influences discussed here.13 Indeed, unjustified extremism frequently results from a “crippled epistemology,” in which extremists react to a small subset of relevant information, coming mostly from one another.14
Similar processes occur in less dramatic forms. Many large-scale shifts within legislatures, bureaucracies, and courts are best explained by reference to social influences. When a legislature suddenly shows concern with some formerly neglected problem—for example, unlawful immigration, climate change, hazardous waste dumps, or corporate misconduct—the concern is often a product of conformity effects, not of real engagement with the problem. Of course the new concern might be justified. But if social influences are encouraging people to conceal information that they have, or if the blind are leading the blind, serious problems are likely.
There is a further point. With relatively small “shocks,” similar groups can be led, by social pressures, to dramatically different beliefs and actions. When societies differ, or when large-scale changes occur over time, the reason often lies not where we usually look but in small and sometimes elusive factors.15 Serendipity is often the best explanation for major shifts; deep explanations about culture or the march of history are comforting but wrong.
An appreciation of informational influences and of people’s concern for the good opinion of others helps to show how, and when, law can alter behavior without being enforced—and merely by virtue of the signal that it provides. The central point here is that law can provide reliable evidence both about what should be done and about what most people think should be done. In either case, it can convey a great deal of relevant information. Consider bans on public smoking and on sexual harassment. If people think the law speaks for the views of most or all, potential violators are less likely to smoke or to engage in sexual harassment. Potential victims are also more likely to take steps to enforce the law privately, for example, by reminding people of their legal responsibilities and insisting that violators come into compliance. The #MeToo movement of 2017 and 2018 had many causes, and it is closely connected with several of the phenomena on which I will focus here; the law, forbidding sexual harassment, helped make it possible.
In this light we can better understand the much-disputed claim that the law has an “expressive function.”16 By virtue of that function, law can even stop or accelerate a social cascade. Here too the areas of cigarette smoking and sexual harassment are relevant examples. And the #MeToo movement can be seen as a cascade. But if would-be violators are part of a dissident subcommunity, they might well be able to resist law’s expressive effect; fellow dissidents can band together and encourage one another to violate the law. Indeed, informational and reputational factors can even encourage widespread noncompliance, as, for example, in drug use and failure to comply with the tax laws.17 The law’s expressive power is partly a function of its moral authority, and when law lacks that authority within a subcommunity, its signal may be irrelevant or even counterproductive. The law may say “no!” but some people will want to say “yes!”
This book is divided into four chapters. In chapter 1, I develop a central unifying theme, which is that in many contexts, individuals are suppressing their private signals—about what is true and what is right—and that this suppression can cause significant social harm. In chapter 2, I turn to social cascades, by which an idea or a practice spreads rapidly from one person to another, potentially leading to radical shifts. Focusing on group polarization, chapter 3 investigates how, why, and when groups of like-minded people go to extremes.
Chapter 4 explores institutions. I urge that the principal contribution of the framers of the U.S. Constitution lay both in their endorsement of deliberative democracy and in their insistence that cognitive diversity is an affirmative good, likely to improve deliberation. This enthusiasm for cognitive diversity helps account for the systems of checks and balances and federalism. I also suggest that it is important to provide a mix of views on the federal bench; indeed, consideration should be given to increasing the likelihood that panels on courts of appeals contain judges appointed by presidents of different parties.
The analysis of diversity on the federal judiciary is of interest in itself, but I intend it also as an example of a large number of contexts in which cognitive diversity is important and in which conformity can have baleful effects. I urge as well that in those cases in which racial diversity will improve discussion, it is entirely legitimate for colleges and universities to attempt to promote racial diversity.
Chapter 1
How Conformity Works
Why, and when, do people do what others do? To answer this question, we need to distinguish between hard questions and easy ones. It is reasonable to speculate that when people are confident that they are right, they will be more willing to do what they think best and to reject the views of the crowd. Several sets of experiments confirm this speculation, but they also offer some significant twists. Most important, they suggest three points that I will emphasize throughout:
1. Those who are confident and firm will have particular influence, and can lead otherwise identical groups in dramatically different directions.
2. People are extremely vulnerable to the unanimous views of others, and hence a single dissenter, or voice of sanity, is likely to have a huge impact.
3. If people seem to be from some group we distrust or dislike, or a kind of “out group,” they are far less likely to influence us, even on the simplest questions.1 Indeed, we might say or do the very opposite (“reactive devaluation”). And if people are part of a group to which we also belong, they are far more likely to influence us, on both easy and hard questions. Bonds of affection have a large impact on how we react to what others say and do.
I shall have a fair bit to say about ordinary life, but my ultimate goal is to see how these points bear on policy and law. Let us begin by reviewing some classic studies.
Hard Questions
In the 1930s, the psychologist Muzafer Sherif conducted some simple experiments on sensory perception.2 Subjects were placed in a very dark room, and a small pinpoint of light was positioned at some distance in front of them. Because of a perceptual illusion, the light, which was actually stationary, appeared to move. On each of several trials, Sherif asked people to estimate the distance that the light had moved. When polled individually, subjects did not agree with one another, and their answers varied from one trial to another. This is not surprising; because the light did not move, any judgment about distance was a stab in the dark.
But Sherif found some striking results when subjects were asked to act in small groups. Here the individual judgments converged, and a group norm, establishing the right distance, quickly developed. Indeed, the norm remained stable within groups across different trials, thus leading to a situation in which different groups made, and were committed to, quite different judgments.3 There is an important clue here about how similar groups, indeed similar nations, can converge on very different beliefs and actions simply because of modest and even arbitrary variations in starting points. You can think of social media, and in some respects the Internet as a whole, as a contemporary version of Sherif’s experiments. People converge on group norms even if their individual judgments start in radically different places, and those norms become fairly stable over time. Different groups can end up in different epistemic universes, whether the issue involves immigration, sexual harassment, the Middle East, trade, or civil rights.
When Sherif added a confederate—his own ally, unbeknownst to subjects—something else happened.4 The judgment of a single confederate, typically much higher or much lower than those made by individual subjects, had a major effect. It helped produce correspondingly higher or lower judgments within the group. The large lesson is that at least in cases involving difficult questions of fact, judgments “could be imposed by an individual who had no coercive power and no special claim to expertise, only a willingness to be consistent and unwavering in the face of others’ uncertainty.”5
Perhaps more remarkable still, the group’s judgments became thoroughly internalized, so that subjects would adhere to them even when reporting on their own, even a year later, and even when participating in new groups whose members offered different judgments. The initial judgments were also found to have effects across “generations.” In an experiment in which fresh subjects were introduced and others retired, so that eventually all participants were new to the situation, the original group judgment tended to stick, even after the person originally responsible for it was long gone.6 This small experiment offers two lessons about the formation and longevity of cultural beliefs and practices: a single person, or a small group, may be responsible for them, and over long periods they can endure and become defining.
What accounts for these results? The most obvious answer points to the informational influences produced by other people’s judgments. After all, the apparent movements are a perceptual illusion, and the system of perception does not readily assign distances to those movements. In those circumstances, people are especially likely to be swayed by a confident and consistent group member. If one person seems clear about the distance, why not believe that person? There is considerable theoretical and empirical work on “the confidence heuristic”: people are more likely to follow those who express their views confidently, on the assumption that confidently expressed views signal better information.7 Sherif’s finding has implications well beyond the laboratory, for classrooms, workplaces, courtrooms, bureaucracies, and legislatures. If uninformed people are trying to decide whether immigration or climate change is a serious problem, or whether they should be concerned about existing levels of arsenic in drinking water, they are likely to be responsive to the views of confident and consistent others.8
What is true for factual issues is true for moral, political, and legal issues as well. Suppose that a group of legislators is trying to decide how to handle a highly technical issue. If a “confederate” is planted among the group, showing considerable confidence, that person is highly likely to be able to move the group in that individual’s preferred direction. So too if the person is not a confederate at all but simply a friend, neighbor, colleague, boss, or legislator with great confidence on the issue at hand. If public officials or judges are trying to resolve a complex issue on which they lack certainty, they too are vulnerable to conformity effects. And for judicial panels as well, Sherif-type effects can be expected on technical matters if one judge is confident and seems expert. The problem is that the so-called specialists may have biases and agendas of their own, leading to large errors. But there is an important qualification to these claims, to which I will return: Sherif’s conformity findings significantly decrease if the experimenter uses a confederate whose membership in a different social group is made salient to subjects.9 If you know that the confident person belongs to a group different from yours—one that you distrust or dislike—you might not be influenced at all.
Easy Questions
But what if perception does provide reliable guidance? What if people have good reason to know the right answer? Some famous experiments, conducted by Solomon Asch, explored whether people would be willing to overlook the apparently unambiguous evidence of their own senses.10 In those experiments, the subject was placed into a group of seven to nine people who seemed to be other subjects in the experiment but who were actually Asch’s confederates. The simple task was to “match” a particular line, shown on a large white card, to one of the three “comparison lines” that was identical to it in length. The two nonmatching lines were substantially different, with the differential varying from an inch and three quarters to three quarters of an inch.
In the first two rounds of the Asch experiments, everyone agrees about the right answer. “The discriminations are simple; each individual monotonously calls out the same judgment.”11 But “suddenly this harmony is disturbed at the third round.”12 All other group members make what is obviously, to the subject and to any reasonable person, a clear error, matching the line at issue to one that is conspicuously longer or shorter. In these circumstances, the subject, in almost all cases showing initial confusion and disbelief at the apparent mistakes of others, has a choice: he can maintain independent judgment or instead accept the view of the unanimous majority. What would you do? As it turns out, a large number of people end up yielding at least once in a series of trials. They defy the evidence of their own senses and agree with everyone else.
When asked to decide on their own, subjects erred less than 1 percent of the time. But in rounds in which group pressure supported the wrong answer, subjects erred no less than 36.8 percent of the time.13 Indeed, in a series of twelve questions, no less than 70 percent of subjects went along with the group and defied the evidence of their own senses, at least once.14 We should not overstate this finding. Most people, most of the time, say what they actually see. But Asch’s most noteworthy finding is that most people, some of the time, are willing to yield, even in the face of clear reason indicating that the group is wrong.
Conformity experiments of this kind have produced more than 130 sets of results from seventeen countries, including Zaire, Germany, France, Japan, Lebanon, and Kuwait.15 A meta-analysis of these studies uncovered a variety of refinements of Asch’s basic findings, with significant cultural differences, but it is fair to say that his basic conclusions hold up. For all results, the mean percentage error is 29 percent.16 People in some nations, with “conformist” cultures, do err more than people in other nations, with more “individualist” cultures.17 The variations are real, but the overall pattern of errors—with subjects conforming between 20 and 40 percent of the time—shows the power of conformity across many nations.
Note that Asch’s findings contain two conflicting lessons. First, a significant number of people are independent all or much of the time. About 25 percent of people are consistently independent;18 such people are uninfluenced by the group. Moreover, about two-thirds of total individual answers do not conform. Hence “there is evidence of extreme individual differences” in susceptibility to group influences, with some people remaining completely independent and others “going with the majority without exception.”19 While independent subjects “present a striking spectacle to an observer,” giving “the appearance of being unshakable,”20 other people show a great deal of anxiety and confusion.21 Second, most subjects, at least some of the time, are willing to yield to the group even on an apparently easy question on which they have direct and unambiguous evidence.
For present purposes, the latter finding is the most relevant. It suggests that even when we see something very clearly, many or most of us might say, “If everyone else sees otherwise, we should go along with them.” There is a large lesson here about why people might seem to agree with stupid or horrible things—about science, about politics, and about members of different religious, ethnic, and racial groups. There is a lesson too about why different groups can go in radically different directions, even with respect to questions of fact. They might be interacting with the equivalent of Asch’s confederates.