[People] fear failure and are prone to cognitive dissonance, sticking with a belief plainly at odds with the evidence, usually because the belief has been held and cherished for a long time. People like to anchor their beliefs so they can claim that they have external support, and are more likely to take risks to support the status quo than to get to a better place. Issues are compartmentalized so that decisions are taken on one matter with little thought about the implications for elsewhere. They see patterns in data where none exist, represent events as an example of a familiar type rather than acknowledge distinctive features and zoom in on fresh facts rather than big pictures. Probabilities are routinely miscalculated, so … people … assume that outcomes which are very probable are less likely than they really are, that outcomes which are quite unlikely are more likely than they are, and that extremely improbable, but still possible, outcomes have no chance at all of happening. They also tend to view decisions in isolation, rather than as part of a bigger picture.13
Of particular importance were “framing effects.” These were mentioned earlier as having been identified by Goffman and used in explanations of how the media helped shape public opinion. Framing helped explain how choices came to be viewed differently by altering the relative salience of certain features. Individuals compared alternative courses of action by focusing on one aspect, often randomly chosen, rather than keeping all key aspects in the frame.14 Another important finding concerned loss aversion. The value of a good to an individual appeared to be higher when viewed as something that could be lost or given up than when evaluated as a potential gain. Richard Thaler, one of the first to incorporate the insights from behavioral economics into mainstream economics, described the “endowment effect,” whereby the selling price for consumption goods was much higher than the buying price.15
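Loss aversion is commonly formalized by the asymmetric value function of Kahneman and Tversky’s prospect theory. A minimal sketch, using the standard textbook parameter estimates rather than figures from this text:

```python
# Sketch of the prospect-theory value function commonly used to formalize
# loss aversion (Kahneman and Tversky). The parameter values are the standard
# published estimates, not figures taken from this text.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha           # gains are valued concavely
    return -lam * (-x) ** alpha     # losses weigh more heavily (lam > 1)

# A $100 loss is felt roughly twice as strongly as a $100 gain,
# which is why a good already owned commands a higher selling price:
print(value(100))    # ~57.5
print(value(-100))   # ~-129.5
```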
Experiments
Another challenge to the rational choice model came from experiments that tested propositions derived from game theory. These were not the same as experiments in the natural sciences, which should not be context dependent. Claims that some universal truths about human cognition and behavior were being illuminated needed qualification. The results could really only be considered valid for Western, educated, industrialized, rich, and democratic (WEIRD) societies, in which the bulk of the experiments were conducted. Nonetheless, while WEIRD societies were admittedly an unrepresentative subset of the world’s population, they were also an important subset.16
One of the most famous experiments was the ultimatum game. It was first used in an experimental setting during the early 1960s in order to explore bargaining behavior. From the start, and to the frustration of the experimenters, the games showed individuals making apparently suboptimal choices. A person (the proposer) was given a sum of money and then chose what proportion another (the responder) should get. The responder could accept or refuse the offer. If the offer was refused, both got nothing. A Nash equilibrium based on rational self-interest would suggest that the proposer should make a small offer, which the responder should accept. In practice, notions of fairness intervened. Responders regularly refused to accept anything less than a third, while most proposers were inclined to offer something close to half, anticipating that the other party would expect fairness.17 Faced with this unexpected finding, researchers at first wondered if there was something wrong with the experiments, such as whether there had been insufficient time to think through the options. But giving people more time or raising the stakes to turn the game into something more serious made little difference. In a variation known as the dictator game, the responder was bound to accept whatever the proposer granted. As might be expected, lower offers were made—perhaps about half the average sum offered in the ultimatum game.18 Yet, at about 20 percent of the total, they were not tiny.
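The payoff logic of the ultimatum game is simple enough to state directly. A minimal sketch, assuming an illustrative $10 stake and a one-third fairness threshold (the text does not report the sums used in the experiments):

```python
# Minimal sketch of the ultimatum game described above. The $10 stake and the
# one-third acceptance threshold are illustrative assumptions, not figures
# reported in the text.

STAKE = 10.0

def play(offer, accepts):
    """Proposer offers `offer` out of STAKE; the responder's rule decides.
    If the offer is refused, both get nothing."""
    if accepts(offer):
        return STAKE - offer, offer   # (proposer payoff, responder payoff)
    return 0.0, 0.0

# The Nash prediction: a purely self-interested responder prefers any
# positive amount to nothing, so the proposer should offer the minimum.
rational_responder = lambda offer: offer > 0
print(play(0.01, rational_responder))   # (9.99, 0.01)

# What the experiments found: responders reject offers felt to be unfair
# (often anything under about a third), so low offers leave both with nothing.
fairness_responder = lambda offer: offer >= STAKE / 3
print(play(0.01, fairness_responder))   # (0.0, 0.0)
print(play(5.0, fairness_responder))    # (5.0, 5.0)
```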
It became clear that the key factor was not faulty calculation but the nature of the social interaction. In the ultimatum game, the responders accepted far less if they were told that the amount had been determined by a computer or the spin of a roulette wheel. If the human interaction was less direct, with complete anonymity, then proposers made smaller grants.19 A further finding was that there were variations according to ethnicity. The amounts distributed reflected culturally accepted notions of fairness. In some cultures, the proposers would make a point of offering more than half; in others, the responders were reluctant to accept anything. It also made a difference if the transaction was within a family, especially in the dictator game. Playing these games with children also demonstrated that altruism was something to be learned during childhood.20 As they grew older, most individuals turned away from the self-regarding decisions anticipated by classical economic theory and became more other-regarding. The exceptions were those suffering from neural disorders such as autism. In this way, as Angela Stanton caustically noted, the canonical model of rational decision-making treated the decision-making ability of children and those with emotional disorders as the norm.21
The research confirmed the importance of reputation in social interactions.22 The concern with influencing another’s beliefs about oneself was evident when there was a need for trust, for example, when there were to be regular exchanges. This sense of fairness and concern about reputation, though it appeared instinctive and impulsive, was hardly irrational. It was important for an individual to have a good reputation to consolidate her social networks, while a social norm that sustained group cohesion was worth upholding. There was further experimental evidence suggesting that when a proposer had been insufficiently altruistic, the responders would not accept their reward in order to ensure that the miserly proposer was punished.23
Another experiment involved a group of investors. When any one of them made an investment, everyone else gained, though the investor took a small loss. These losses should not have mattered, for they were covered by the gains resulting from the investments of others. Those motivated by a narrow self-interest would see an incentive to become free riders. They could avoid losses by making no personal investments while benefiting from the investments of others. They would then gain at the expense of the group. Such behavior would soon lead to a breakdown in cooperation. To prevent this would require the imposition of sanctions by the rest of the group, even though this would cost them as individuals. When given a choice of which group to join, individuals at first often recoiled from joining one with known sanctions against free riders, but eventually they would migrate to that group as they appreciated the importance of ensuring cooperation.
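The incentive to free ride follows from simple arithmetic. A minimal sketch of this public goods setup, assuming four players, $10 endowments, and a multiplier of 1.6 (all invented figures; the text does not specify the experiment’s parameters):

```python
# Sketch of the public goods logic described above: each invested unit is
# multiplied and shared equally, so the investor recoups less than a unit
# (a small personal loss) while the group as a whole gains. All numbers
# (four players, multiplier 1.6, $10 endowments) are illustrative.

PLAYERS, MULTIPLIER, ENDOWMENT = 4, 1.6, 10.0

def payoffs(contributions):
    pot = sum(contributions) * MULTIPLIER
    share = pot / PLAYERS
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([10, 10, 10, 10]))  # everyone cooperates: [16.0, 16.0, 16.0, 16.0]
print(payoffs([0, 10, 10, 10]))   # one free rider: [22.0, 12.0, 12.0, 12.0]

# Each contributed unit returns only 1.6/4 = 0.4 to the contributor, so a
# narrowly self-interested player invests nothing and gains at the group's
# expense, until cooperation (or sanctioning) breaks down.
```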
Free riders, or unfair proposers in the ultimatum game, were also stigmatized. In another experiment, individuals who expected to play by the rules were told in advance of the game the identities of other players who would be free riders. Once these individuals had been described as less trustworthy, they were generally seen as less likable and attractive. When the games were underway, this prior profiling influenced behavior. There was a reluctance to take risks with those designated untrustworthy, even when these individuals were acting no differently from others. Little effort was made to check their reputations against actual behavior during the game. In experiments in which individuals described as either free riders or cooperators were shown experiencing pain, far less empathy was shown for the free riders than for the cooperators.24
One response from those committed to the rational actor model was that it was interesting but irrelevant. The experiments involved small groups, often graduate students. It was entirely possible that as these types of situations became better understood, behavior would tend to become more rational as understood by the theory. Indeed, there was evidence that when these games were played with subjects who were either professors or students in economics and business, players acted in a far more selfish way: they were more likely to free ride, were half as likely to contribute to a public good, kept more resources for themselves in an ultimatum game, and were more likely to defect in a prisoner’s dilemma game. This fit in with studies that showed economists to be more corruptible and less likely to donate to charity.25 One researcher suggested that the “experience of taking a course in microeconomics actually altered students’ conceptions of the appropriateness of acting in a self-interested manner, not merely their definition of self-interest.”26 In studies of traders in financial markets, it transpired that while the inexperienced might be influenced by Thaler’s “endowment effect,” for example, the experienced were not.27 This might not be flattering to economists, but it did show that egotistical behavior could also be quite natural. This argument, however, could be played back to the formal theorists. To be sure, it showed the possibility of self-interested and calculating behavior, but such behavior also required a degree of socialization. If it could not be shown to occur naturally and had to be learned, then that demonstrated the importance of social networks as a source of guidance on how to behave.
When individuals were acting as consumers in a marketplace, or in other circumstances that encouraged them to be egotistical and self-regarding, their behavior could get close to what might be expected from models that assumed such conduct. The experiments employed to explore the degree of actual rationality reflected a preoccupation with a particular sort of choice, a type “with clearly defined probabilities and outcomes, such as choosing between monetary gambles.”28 It was almost by accident that, as researchers sought to prove the rational actor models through experiments, they came to appreciate the importance of social pressures and the value attached to cooperation. Within the complex social networks of everyday life, truly egotistical and self-regarding behavior was, in a basic sense, irrational.
Attempts were made to recast formal theories to reflect the insights of behavioral psychology, in the guise of behavioral economics, but they made limited progress. The most important insight from the new research was that it was not enough to treat individuals as more complex and rounded than the old models assumed; they had to be studied in their social context.
Only a very particular view of rationality considered cooperation irrational and failed to understand why it made sense to make sacrifices to punish the uncooperative and free riders in order to uphold norms and sustain cooperative relationships. Many social and economic transactions would become impossible if at every stage there was suspicion and reason to doubt another’s motives. The essence of trust was to knowingly and willingly accept a degree of vulnerability, aware that trustees might intend harm but finding it more profitable to assume that they did not. The evidence suggested that by and large people would prefer to trust others than not to trust. There were formidable normative pressures to honor commitments once made, and a reputation for untrustworthiness could prove to be a hindrance. Life became a lot easier if the people with whom one was dealing trusted and could be trusted in turn, saving the bother of complicated contracts and enforcement issues. Trusting another did not necessarily assume good faith. The calculus could be quite balanced. On occasion there might be no choice but to trust someone, even though there were indicators to prompt suspicion, because the alternative of not trusting was even more likely to lead to a bad result. In other circumstances, with little information one way or another, accepting another’s trustworthiness would involve a leap of faith. This was why deception was deplored. It meant taking advantage of another’s trust, hiding malicious intent behind a mask of good faith. Trust involved accepting evidence of another’s intentions; deception involved faking this evidence.29
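The “quite balanced” calculus of trust described above amounts to an expected-value comparison. A minimal sketch, with invented probabilities and payoffs:

```python
# Illustrative sketch of the trust calculus: trust pays whenever the expected
# value of trusting beats the fallback of not trusting, even when betrayal is
# a real possibility. All probabilities and payoffs are invented.

def expected_value_of_trusting(p_honest, gain_if_honest, loss_if_betrayed):
    return p_honest * gain_if_honest + (1 - p_honest) * loss_if_betrayed

FALLBACK = 0.0  # payoff from refusing to deal at all

# Even with real grounds for suspicion (a 30% chance of betrayal), trusting
# can dominate when the cooperative gain is large or the fallback bad enough.
ev = expected_value_of_trusting(p_honest=0.7, gain_if_honest=10.0,
                                loss_if_betrayed=-15.0)
print(ev, ev > FALLBACK)  # 2.5 True: trust, despite the suspicion
```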
So important was trust that even when clues that they were being deceived were arriving thick and fast, individuals could stay in denial for a surprising time. A confidence trickster might be vulnerable to intensive probing and so would rely on those who were inclined to accept his story: the woman yearning for love or the greedy looking for a get-rich-quick proposition. Research showed that people were “poor deception-detectors and yet are overconfident of their ability to detect deception.”30 “Cognitive laziness” led to shortcuts that resulted in misapprehending people and situations, failing to explore context, ignoring contradictions, and sticking with an early judgment of another’s trustworthiness.31
Mentalization
The ability to recognize different traits in people, to distinguish them according to their personalities, is essential to all social interaction. It might be difficult to predict how people in general respond to particular situations, but to the extent that the dispositions of specific individuals can be discerned, their behavior might be anticipated or even manipulated.
The process of developing theories about how other minds work has been described as “mentalization.” Rather than other minds simply being assumed to resemble one’s own, observation of the behavior of others made it evident that they had distinctive mental and emotional states. The word empathy, describing the quality of being able to feel as another feels, was drawn from the German Einfühlung, which referred to the process of feeling one’s way into an art object or another person. Empathy might be a precursor to sympathy, but it was not the same. With empathy one could feel another’s pain; with sympathy one would also pity another for his pain. Empathy could be no more than sharing another’s emotional state in a vicarious way, but it could also be something more deliberative and evaluative, a form of role-playing.
Mentalization involved three distinct sets of activity, working in combination. The first represented an individual’s own mental states and those of others in terms of perceptions and feelings, rather than the true features of the stimuli that prompted those perceptions and feelings in the first place. These were beliefs about the state of the world rather than the actual state of the world. When simulating the mental states of others, people would be influenced by what was known of their past behavior and also of those aspects of the wider world relevant to the current situation. The second set of activities introduced information about observed behavior. When combined with what could be recalled from the past, this allowed for inferences about mental states and predictions about the next stage in a sequence of behavior. The third set was activated by language and narrative. Frith and Frith concluded that this drew on past experience to generate “a wider semantic and emotional context for the material currently being processed.”32
This wider context could be interpreted using a “script.” The concept comes from Robert Abelson, who developed an interest during the 1950s in the factors shaping attitudes and behavior. His work was stimulated by a 1958 RAND workshop with Herbert Simon on computer simulations of human cognition. Out of this came a distinction between “cold” cognition, where new information was incorporated without trouble into general problem-solving, and “hot” cognition, where it posed a challenge to accepted beliefs. Abelson became perplexed by the challenges posed by cognition for rational thinking and in 1972 wrote of a “theoretical despond,” as he “severely questioned whether information has any effect upon attitudes and whether attitudes have any effects upon behaviour.” It was at this point that he hit upon the idea of scripts. His first thoughts were that they would be comparable to a “role” in psychological theory and a “plan” in computer programming, “except that it would be more occasional, more flexible, and more impulsive in its execution than a role or plan, and more potentially exposed in its formation to affective and ‘ideological’ influences.”33 This led to his work with Roger Schank. Together, addressing a problem in artificial intelligence, they developed the idea of a script to refer to frequently recurring social situations involving strongly stereotyped conduct. When such a situation arose, people resorted to the plans which underlay these scripts.34 Thus, a script involved a coherent sequence of events that an individual could reasonably expect in these circumstances, whether as a participant or as an observer.35
Scripts referred to the particular goals and activities taking place in a particular setting at a particular time. A common example was a visit to a restaurant: the script helped anticipate the likely sequence of events, starting with the menu and its perusal, ordering the food, tasting the wine, and so on. In situations where it became necessary to make sense of the behavior of others, the appropriate script created expectations about possible next steps, a framework for interpretation. As few scripts were followed exactly, the other mentalizing processes allowed them to be adapted to the distinctive features of the new situation. We will explore the potential role of scripts in strategy in the next section.
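Because Schank and Abelson developed scripts as an artificial intelligence construct, the idea translates naturally into code. A minimal sketch of the restaurant script, with invented event names:

```python
# Minimal sketch of a Schank-Abelson-style script: a stereotyped sequence of
# events that generates expectations about what comes next. The restaurant
# events below are invented for illustration.

RESTAURANT_SCRIPT = ["enter", "receive menu", "peruse menu", "order food",
                     "taste wine", "eat", "pay", "leave"]

def expected_next(script, observed):
    """Given the events observed so far, return the step the script predicts next."""
    i = 0
    for event in observed:
        if i < len(script) and event == script[i]:
            i += 1
    return script[i] if i < len(script) else None

print(expected_next(RESTAURANT_SCRIPT, ["enter", "receive menu", "peruse menu"]))
# -> "order food": the script supplies the expectation, which the other
# mentalizing processes then adapt when the situation departs from type.
```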
Individuals varied in their ability to mentalize. Those who were more cooperative, had a higher degree of emotional intelligence, and enjoyed larger social networks tended to be better mentalizers. It might be thought that this would also be an attribute of those of a Machiavellian disposition, who were inclined to deceive and manipulate. This might be expected to depend on an ability to understand another’s mind and its vulnerabilities. While such people might lack empathy or hot cognition, the expectation would be of a degree of cold cognition, an insight into what another knows and believes. Yet studies of individuals described as “Machiavellian”—used in psychological studies to refer to somewhat callous and selfish personalities largely influenced by rewards and punishments—suggested that both their hot and cold cognition were limited. This led to the proposition that these individuals’ limited ability to mentalize meant that they found it easier to exploit and manipulate others because there was little to prompt guilt and remorse.36 There could therefore be individuals who were so naturally manipulative that they were apparently incapable of dealing with other people on any other basis.
Such findings arguably provided more support for the view that the rational actor celebrated in economic theory tended to the psychopathic and socially maladroit. As Mirowski pointedly noted, it was striking how many of the theorists who insisted on an egotistical rationality, who claimed to “theorize the very pith and moment of human rationality”—of which Nash was but one example—were not naturally empathetic and lived very close to the mental edge, at times tipping over into depression and even suicide.37