Strategy

by Lawrence Freedman


  Axelrod’s analysis was not irrelevant to the conflicts with which strategy was largely concerned, especially those where there were significant areas of cooperation even against the backdrop of a general antagonism or competition. But the specific form of the tit-for-tat approach, even in situations which approximated to the form of prisoner’s dilemma, would be hard to replicate. A symmetry in position between two parties was rare so that the impact of moves, whether cooperation or defection, would not be the same. Cooperation was as likely to be based on exchange of benefits of different types as on things of equivalent value. This was why there were many ways in which cooperation could develop, for example by means of barter, rather than through iterated games of prisoner’s dilemma. One important point was reinforced by Axelrod’s tournament. Strategies have to be judged over time, in a series of engagements rather than in a single encounter. This is why it was unwise to try to be too clever. Players who used “complex methods of making inferences about the other player” were often wrong. It was difficult to interpret the behavior of another without accounting for the impact of one’s own. Otherwise, what might have been assumed to be complex signaling just appeared as random messages.
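
  To make the mechanics concrete, the following is a minimal sketch, not drawn from Axelrod’s actual tournament, of an iterated prisoner’s dilemma in which tit-for-tat meets an unconditional defector. The payoff numbers and the number of rounds are illustrative assumptions; only the structure, repeated play with payoffs depending on both players’ moves, reflects the setup discussed above.

```python
# Minimal iterated prisoner's dilemma sketch (illustrative payoffs and round count).

COOPERATE, DEFECT = "C", "D"

# Payoff matrix keyed by (my move, their move) -> (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return COOPERATE if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Unconditional defector, used here as a baseline opponent."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=200):
    """Run one iterated match and return the cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, always_defect))  # tit-for-tat loses only the first round
    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation throughout
```

  The second printed match shows why tit-for-tat prospered over a series of engagements: two cooperators accumulate far more between them than a defector ever extracts from its victims, even though tit-for-tat never beats the defector head to head.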

  Using iterated games (though of assurance rather than prisoner’s dilemma), Dennis Chong looked hard at the civil rights movement to address the issue raised by Olson of rational participation in what he called “public-spirited collective action.” He saw the initial unwillingness to indulge in futile gestures and the later nervousness about taking personal risks when others were carrying the weight of the protest. This form of collective action offered no tangible incentives. Yet there were “social and psychological” benefits. It became a “long-term interest to cooperate in collective endeavors if noncooperation results in damage to one’s reputation, ostracism, or repudiation from the community.”

  Chong noted the difficulty with looking at strategy in terms of the one-off encounters to which game theory seemed to lend itself. The ability to think long term required taking into account the “repeated exchanges and encounters that one will have with other members of the community.” The difficulty collective movements faced was getting started. Chong’s model could not explain where the leaders came from. They acted “autonomously” and got engaged without being sure of success or followers. Once a start had been made with the acquisition of the first followers but prior to any tangible results, momentum developed as a result of a form of social contagion. This led to the conclusion, which might have been reached by more straightforward historical observation, that success depended on “strong organizations and effective leadership” combined with “symbolic and substantive concessions” from the authorities. In addition, it was wise to be cautious about claims to identify any “combination of objective factors in a society that will predictably set off a chain of events leading up to a collective movement.”25

  The problem was not that the methods used in rational choice could not lead to intriguing and significant insights but that so many really interesting questions were begged. Unless preferences were attributed (such as profit or power maximization) because they would work well for most actors in most circumstances, then only the actors themselves could explain what they were trying to achieve and what their expectations were with regard to their own options and the reactions of others. This meant that before the theory could get to work it had to be told a great deal. As Robert Jervis observed, the “actor’s values, preferences, beliefs, and definition of self all are exogenous to the model and must be provided before analysis can begin.”26 Rather than just take utility functions as givens, it was important to understand where they came from and how they might change with different contexts. “We need to understand not only how people reason about alternatives,” observed Herbert Simon, “but where the alternatives come from in the first place. The processes whereby alternatives are generated has been somewhat ignored as an object of research.”27

  The point could be illustrated by the intellectual trajectory of William Riker. It was always an important feature of his approach that he did not assume that individuals were motivated by simple measures of self-interest, such as money or prestige, but allowed for other more emotional or ethical considerations. That is, utilities could be subjective, which reinforced the point about the prior determination of the preferences that were brought to the game.28 He also stressed that the structure of the game made a big difference. If the issue at stake was framed one way rather than another, alternative possibilities were opened up even with the same set of players.

  In his outgoing address as president of the American Political Science Association in 1983, Riker identified three analytical steps. The first was to identify the constraints imposed “by institutions, culture, ideology and prior events,” that is, the context. Rational choice models came with the next step, which was to identify “partial equilibria from utility maximization within the constraints.” The third step was “the explication of participants’ acts of creative adjustments to improve their opportunities.” Unfortunately, he noted, not very much effort had been devoted to this third step. This was the arena of what he dubbed “heresthetics, the art of political strategy.” This came from Greek roots for choosing or electing. As areas of comparative ignorance, he listed “the way alternatives are modified in political conflicts” and the “rhetorical content of campaigns which is their principal feature.”29 These means were important because that is how politicians structured the environment and required others to respond to their agenda. They could prevail by creating a situation with its own inexorable logic. It was through these devices that they could persuade others to join them in coalitions and alliances. This led the field away from the position where Riker had previously placed his flag. Simon commented, “I could wish he had not invented the word ‘heresthetics’ to conceal the heresies he is propagating.”30

  Heresthetics was about structuring the way the world was viewed so as to create political advantage. Riker identified a number of heresthetic strategies: setting the agenda, strategic voting (supporting a less favored outcome to avoid something even worse), trading votes, altering the sequence of decisions, and redefining a situation. Initially he saw these forms of manipulation as separate from rhetoric, although it was hard to see how many of these strategies could work without persuasive skills. In an unfinished book, published posthumously, he focused much more on rhetoric. His disciples claimed that he was returning the discipline to “the science behind persuasion and campaigning,”31 but he acknowledged he was moving into terrain where the science would struggle. The point was made in the title of his book on heresthetics, The Art of Political Manipulation. He was clear that this was “not a science. There is no set of scientific laws that can be more or less mechanically applied to generate successful strategies.”32 In his posthumous book he expressed concern that “our knowledge of rhetoric and persuasion is itself minuscule.”33 Riker certainly did not abandon his conviction that statistical analysis could sharpen his propositions, and he determinedly avoided a large body of work that directly addressed exactly the issues of agenda setting, framing, and persuasion that interested him, because it was too “belles-lettres” and insufficiently rigorous. However, he still ended up where so many students of strategy found themselves, fascinated by why some players in the political game were smarter and more persuasive than their opponents.
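
  One classic illustration of agenda control, in the spirit of Riker’s examples though not taken from his book or from Freedman, involves a voting cycle. In the minimal sketch below, three voters with invented preferences over three alternatives vote on them in pairwise sequence; whoever fixes the order of the votes effectively picks the winner.

```python
# Agenda-setting with a Condorcet cycle: the same voters, different vote orders,
# different winners. Voters and rankings are invented for illustration.

# Each voter ranks alternatives A, B, C from most to least preferred.
VOTERS = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]  # cycle: A beats B, B beats C, C beats A in pairwise majority votes

def pairwise_winner(x, y, voters=VOTERS):
    """Majority vote between two alternatives: each voter backs whichever
    of the pair sits higher in their own ranking."""
    votes_x = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return x if votes_x > len(voters) / 2 else y

def run_agenda(agenda):
    """Vote the alternatives in sequence: the survivor of each pairwise
    contest meets the next item on the agenda."""
    survivor = agenda[0]
    for challenger in agenda[1:]:
        survivor = pairwise_winner(survivor, challenger)
    return survivor

print(run_agenda(["A", "B", "C"]))  # C wins
print(run_agenda(["B", "C", "A"]))  # A wins
print(run_agenda(["C", "A", "B"]))  # B wins
```

  With identical voters and the same majority rule, each agenda produces a different winner, which is the sense in which whoever structures the sequence of decisions can create a situation with its own apparently inexorable logic.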

  CHAPTER 37 Beyond Rational Choice

  Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

  —David Hume, A Treatise of Human Nature, 1740

  THE PRESUMPTION of rationality was the most contentious feature of formal theories. The presumption was that individuals were rational if they behaved in such a way that their goals, which could be obnoxious as well as noble, would be most likely to be achieved. This was the point made by the eighteenth-century philosopher David Hume. He was as convinced of the importance of reason as he was that it could not provide its own motivation. This would come from a great range of possible human desires: “Ambition, avarice, self-love, vanity, friendship, generosity, public spirit,” which would be “mixed in various degrees and distributed through society.”1 As Downs put it, the rational man “moves towards his goals in a way which to the best of his knowledge uses the least possible input of scarce resources per unit of valued output.” This also required focusing on one aspect of an individual and not his “whole personality.” The theory “did not allow for the rich diversity of ends served by each of his acts, the complexity of his motives, the way in which every part of his life is intimately related to his emotional needs.”2 Riker wrote that he was not asserting that all behavior was rational, but only that some behavior was “and that this possibly small amount is crucial for the construction and operation of economic and political institutions.”3 In addition, the settings in which actors were operating—whether a congressional election, legislative committee, or revolutionary council—were also taken as givens, unless the issues being studied concerned establishing new institutions. The challenge then was to show that collective political outcomes could be explained by individuals ranking “their preferences consistently over a set of possible outcomes, taking risk and uncertainty into consideration and acting to maximize their expected payoffs.” This could easily become tautological because the only way that preferences and priorities could be discerned was by examining the choices made in actual situations.

  The main challenge to the presumption that intended, egotistical choices were the best basis from which to understand human behavior was that it was consistently hard to square with reality. To take a rather obvious example, researchers tried to replicate the prisoner’s dilemma in the circumstances in which it was first described.4 Could prosecutors gain leverage in cases involving codefendants by offering the prospect of a reduced sentence in return for information or testimony against other codefendants? The evidence suggested that it made no difference to the rates of pleas, convictions, and incarcerations in robbery cases with or without codefendants. The surmised reason for this was the threat of extralegal sanctions that offenders could impose on each other. The codefendants might be kept separate during the negotiations, but they could still expect to meet again.5 To the proponents of rational choice, such observations were irrelevant. The claim was not that rational choice replicated reality but that as an assumption it was productive for the development of theory.

  By the 1990s, the debate on rationality appeared to have reached a stalemate, with all conceivable arguments exhausted on both sides. It was, however, starting to be reshaped by new research, bringing insights from psychology and neuroscience into economics. The standard critique of rational choice theory was that people were just not rational in the way that the theory assumed. Instead, they were subject to mental quirks, ignorance, insensitivity, internal contradictions, incompetence, errors in judgment, over-active or blinkered imaginations, and so on. One response to this criticism was to say that there was no need for absurdly exacting standards of rationality. The theory worked well enough if it assumed people were generally reasonable and sensible, attentive to information, open-minded, and thoughtful about consequences.6

  As a formal theory, however, rationality was assessed in terms of the ideal of defined utilities, ordered preferences, consistency, and a statistical grasp of probabilities when relating specific moves to desired outcomes. This sort of hyper-rationality was required in the world of abstract modeling. The modelers knew that human beings were rarely rational in such an extreme form, but their models required simplifying assumptions. The method was deductive rather than inductive, less concerned with observed patterns of behavior than with developing hypotheses which could then be subjected to empirical tests. If what was observed deviated from what was predicted, that set a research task that could lead to either a more sophisticated model or specific explanations about why a surprising result occurred in a particular case. Predicted outcomes might well be counterintuitive but then turn out to be more accurate than those suggested by intuition.

  One of the clearest expositions of what a truly rational action required was set out in 1986 by Jon Elster. The action should be optimal, that is, the best way to satisfy desire, given belief. The belief itself would be the best that could be formed, given the evidence, and the amount of evidence collected would be optimal, given the original desire. Next the action should be consistent so that both the belief and the desire were free of internal contradictions. The agent must not act on a desire that, in her own opinion, was less weighty than other desires which might be reasons for not acting. Lastly, there was the test of causality. Not only must the action be rationalized by the desire and the belief, but it must also be caused by them. This must also be true for the relation between belief and evidence.7

  Except in the simplest of situations, meeting such demanding criteria for rational action required a grasp of statistical methods and a capacity for interpretation that could only be acquired through specialist study. In practice, faced with complex data sets, most people were apt to make elementary mistakes.8 Even individuals capable of following the logical demands of such an approach were unlikely to be prepared to accept the considerable investment it would involve. Some decisions were simply not worth the time and effort to get them absolutely right. The time might not even be available in some instances. Gathering all the relevant information and evaluating it carefully would use up more resources than the potential gains from getting the correct answer.

  If rational choice required individuals to absorb and evaluate all available information and analyze probabilities with mathematical precision, it could never capture actual human behavior. As we have seen, the urge to scientific rigor that animated rational choice theory only really got going once actors had sorted out their preferences and core beliefs. The actors came to the point where their calculations might be translated into equations and matrices as formed individuals, with built-in values and beliefs. They were then ready to play out their contrived dramas. The formal theorists remained unimpressed by claims that they should seek out more accurate descriptions of human behavior, for example, by drawing on the rapid advances in understanding the human brain. One economist patiently explained that this had nothing to do with his subject. It was not possible to “refute economic models” by this means because these models make “no assumptions and draw no conclusions about the physiology of the brain.” Rationality was not an assumption but a methodological stance, reflecting a decision to view the individual as the unit of agency.9

  If rational choice theory was to be challenged on its own terms, the alternative methodological stance had to demonstrate not only that it approximated better to perceived reality but also that it would produce better theories. The challenge was first set out in the early 1950s by Herbert Simon. He had a background in political science and a grasp of how institutions worked. After entering economics through the Cowles Commission, he became something of an iconoclast at RAND. He developed a fascination with artificial intelligence and how computers might replicate and exceed human capacity. This led him to ponder the nature of human consciousness. He concluded that a reliable behavioral theory must acknowledge elements of irrationality and not just view them as sources of awkward anomalies. While at the Carnegie Graduate School of Industrial Administration, he complained that his economist colleagues “made almost a positive virtue of avoiding direct, systematic observations of individual human beings while valuing the casual empiricism of the economist’s armchair introspections.” At Carnegie he went to war against neoclassical economics and lost. The economists grew in numbers and power in the institution and had no interest in his ideas of “bounded rationality.”10 He gave up on economics and moved into psychology and computer science. This idea of “bounded rationality,” however, came to be recognized as offering a compelling description of how people actually made decisions in the absence of perfect information and computational capacity. It accepted human fallibility without losing the predictability that might still result from a modicum of rationality. Simon showed how people might reasonably accept suboptimal outcomes because of the excessive effort required to get to the optimal. Rather than perform exhaustive searches to get the best solution, they searched until they found one that was satisfactory, a process he described as “satisficing.”11 Social norms were adopted, even when inconvenient, to avoid unwanted conflicts. When the empirical work demonstrated strong and consistent patterns of behavior, this might reflect the rational pursuit of egotistical goals, but alternatively these patterns might reflect the influence of powerful conventions that inclined people to follow the pack.
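
  The contrast between satisficing and exhaustive optimization can be shown in a few lines. This is a minimal sketch with invented numbers, not anything from Simon’s own work: the satisficer stops at the first option that clears an aspiration level, while the optimizer must evaluate every candidate.

```python
import random

def satisfice(options, evaluate, aspiration):
    """Simon-style search: take options in the order encountered and stop at
    the first one that is good enough (meets the aspiration level)."""
    evaluations = 0
    for option in options:
        evaluations += 1
        if evaluate(option) >= aspiration:
            return option, evaluations
    return None, evaluations  # no satisfactory option was found

def optimize(options, evaluate):
    """Exhaustive search: evaluate every option and keep the best."""
    best = max(options, key=evaluate)
    return best, len(options)

# Illustrative setup: 10,000 candidate 'solutions' with random values.
random.seed(0)
candidates = [random.random() for _ in range(10_000)]
value = lambda x: x  # evaluation is trivial here; in real decisions it is costly

print(satisfice(candidates, value, aspiration=0.9))  # stops after a handful of evaluations
print(optimize(candidates, value))                   # always makes 10,000 evaluations
```

  When each evaluation is costly, stopping early is the whole point: the satisficer settles for a “good enough” answer at a fraction of the search effort, which is the trade-off Simon argued real decision-makers routinely accept.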

  Building upon Simon’s work, Amos Tversky and Daniel Kahneman introduced further insights from psychology into economics. To gain credibility, they used sufficient mathematics to demonstrate the seriousness of their methodology and so were able to create a new field of behavioral economics. They demonstrated how individuals used shortcuts to cope with complex situations, relying on processes that were “good enough” and interpreted information superficially using “rules of thumb.” As Kahneman put it, “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.”12 The Economist summed up what behavioral research suggested about actual decision-making:

 
