Strategy


by Lawrence Freedman


  Von Neumann and Morgenstern also found their popularizer. John McDonald’s Strategy in Poker, Business and War is curiously neglected in the histories of game theory. In 1949, McDonald came across von Neumann and Morgenstern when researching an article on poker for Fortune Magazine. Then McDonald wrote another article on game theory for the same magazine, before turning both articles into a book. The reason for the neglect of McDonald’s book may be that it did not take the theory forward and was geared to a popular exposition. But the author had extensive conversations with the academics and provided a clear statement of what they thought they might achieve. McDonald acknowledged that the mathematical proofs would challenge any lay reader, but he promised that the underlying concepts could be readily grasped. Game theory offered insights not just into military strategy but strategy in general. It was relevant whenever relationships involved conflict, imperfect information, and incentives to deceive. Because the theory was “formal and neutral, non-ideological,” it was “as good for one man as for another.” It would not help with assessing values and ethics, but “it may be able to tell what one can get and how one can get it.”

  In terms of the shift in strategic thinking prompted by game theory, the critical insight was that acting strategically depended on expectations about the likely actions of others over whom one has no control. The players in a game of strategy do not cooperate, yet their actions are interdependent. In such constrained circumstances the rational strategy was not to attempt to maximize gain but instead to accept an optimal outcome. Minimax, McDonald observed, was “one of the most talked about novelties in learned circles today.” When he moved on to consider its applications, paying particular attention to the importance of coalitions, he saw a number of possibilities. “War is chance,” he concluded, “and minimax must be its modern philosophy.” Yet he also described this as a theory with “imagination but no magic.” It involved “an act of logic with an unusual twist, which can be followed to the borderline of mathematical computation.”22

  The presumption behind the pioneering work on game theory, enthusiastically encouraged at RAND, was the conviction that there could be a scientific basis for strategy. Past endeavors to put these matters on a properly scientific basis had supposedly faltered because the analytical tools were not available. Specialists in military strategy lacked the mathematics, and the mathematicians lacked the concepts and computational capacity. Now that these were available, true breakthroughs could be made. Game theory was exciting because it directly addressed the problems posed by the fact that there was more than one decision-maker and then offered mathematical solutions. It was soon generating its own literature and conferences.

  In 1954, the sociologist Jessie Bernard made an early attempt to consider the broader relevance of game theory for the softer social sciences. She also worried about an inherent amorality, “a modernized, streamlined, mathematical version of Machiavellianism.” It implied a “low concept of human nature,” expecting “nothing generous, nothing noble, nothing idealistic. It expects people to bluff, to deceive, to feint, to withhold information, to play their advantages to the utmost, to make the most of their opponent’s weaknesses.” Although Bernard acknowledged the focus on rational decision, she misunderstood the theory, presenting it as a mathematical means of testing rather than of generating strategies. The misunderstanding was perhaps not unreasonable, for she assumed that different qualities were required to come up with strategies: “Imagination, insight, intuition, ability to put one’s self in another person’s position, understanding of the wellsprings of human motivation—good as well as evil—these are required for the thinking up of policies or strategies.”23 For this reason, the “hardest work, so far as the social scientist is concerned, is probably already completed by the time the theory of games takes over.” In her grasp of the theory’s claim, she missed the point, though in her appreciation of the theory’s limits she was ahead of her time. The theory assumed rationality, but on the basis of preferences and values that the players brought with them to the game.

  Prisoners’ Dilemma

  The values attached to alternative outcomes of games were known as payoffs. The aim was to maximize them. Players were aware that in this respect they all had the same aim. In card games they accepted that their choices would be determined by the established rules of the game. As the application was extended, the choices could be shaped not only by mutually agreed upon and accepted rules but by the situation in which they found themselves. The theory progressed by identifying situations resembling real life that created challenging choices for the players. For the theory to move on, it was necessary to get beyond the limits of the von Neumann and Morgenstern analysis involving two players and “zero-sum payoffs,” which meant that what one won the other must lose. The normal approach for a mathematician having solved a comparatively simple problem was to move on to a more complex case, such as coalition formation. But this process turned out to be difficult in the case of game theory, especially if mathematical proofs were going to be required at each new stage.
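  To keep these terms concrete, the following minimal sketch in Python (not from the text; the payoff numbers are invented for illustration) sets out a two-by-two zero-sum game and applies the cautious logic described above: each player secures the best of the worst possible outcomes.

```python
# A minimal sketch, not from the text, of a two-player zero-sum game. Each entry
# is the payoff to the row player; the column player receives the opposite
# amount, so what one wins the other loses. The numbers are invented.
PAYOFFS_TO_ROW = [
    [4, 2],  # row strategy 0 against column strategies 0 and 1
    [3, 1],  # row strategy 1 against column strategies 0 and 1
]

def maximin_row(payoffs):
    """Row player secures the best of its worst-case payoffs."""
    worst = [min(row) for row in payoffs]
    i = max(range(len(payoffs)), key=worst.__getitem__)
    return i, worst[i]

def minimax_column(payoffs):
    """Column player concedes the least of its worst cases (the column maxima)."""
    columns = list(zip(*payoffs))
    worst = [max(col) for col in columns]
    j = min(range(len(columns)), key=worst.__getitem__)
    return j, worst[j]

row_choice, guaranteed = maximin_row(PAYOFFS_TO_ROW)
col_choice, conceded = minimax_column(PAYOFFS_TO_ROW)
# With these invented numbers the two values coincide at 2 (a saddle point),
# so neither player can improve by deviating from the cautious strategy.
print(f"Row plays {row_choice}, guaranteeing at least {guaranteed}")
print(f"Column plays {col_choice}, conceding at most {conceded}")
```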

  The key breakthrough came in the exploration of non-zero-sum games, in which the players could all gain or all lose, depending on how the game was played. The actual invention of the game of prisoners’ dilemma should be attributed to two RAND analysts, Merrill Flood and Melvin Dresher. The most famous formulation, however, was provided in 1950 by Albert Tucker when lecturing to psychologists at Stanford University. Prisoners’ dilemma involved two prisoners—unable to communicate with each other—whose fate depended on whether or not they confessed during interrogation and whether their answers coincided. If both remained silent, they were prosecuted on a minor charge and received light sentences (one year). If both confessed, they were prosecuted but with a recommendation for a sentence below the maximum (five years). If one confessed and the other did not, then the confessor got a lenient sentence (three months) while the other was prosecuted for the maximum sentence (ten years). The two players were left alone in separate cells to think things over.

  FIGURE 12.1 The figures in the corners refer to expectation of sentence.

  It should be noted that the matrix itself was a revolutionary way of presenting strategic outcomes and remained thereafter a fixture of formal analysis. This matrix demonstrated the prediction for prisoners’ dilemma (see fig. 12.1). They both confessed. A was unable to conspire with B and knew that if he remained silent he risked ten years’ imprisonment; if he confessed, he risked only five years. Furthermore, if B decided on the solution that would be of the greatest mutual benefit and so remained silent, A could improve his own position by confessing, in a sense double-crossing B. Game theory predicted that B would follow the same reasoning. This was the minimax strategy guaranteeing the best of the worst possible outcomes. A key feature of this game was that the two players were forced into conflict. They suffered a worse result than if they could communicate and coordinate their answers and then trust each other to keep to the agreed strategy. Prisoners’ dilemma came to be a powerful tool for examining situations where players might either work with or against each other (normally put as “cooperate” or “defect”).
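  The reasoning can be traced mechanically. The sketch below (a reconstruction for illustration, not taken from the book) encodes the sentences quoted above in Python and shows that confessing is each prisoner’s best reply whatever the other does, even though mutual silence would serve both better.

```python
# The sentences from the text, in years (lower is better).
# Each entry gives (A's sentence, B's sentence) for a pair of choices.
SENTENCES = {
    ("silent", "silent"): (1.0, 1.0),     # both prosecuted on the minor charge
    ("silent", "confess"): (10.0, 0.25),  # A gets the maximum, B three months
    ("confess", "silent"): (0.25, 10.0),  # A three months, B the maximum
    ("confess", "confess"): (5.0, 5.0),   # both convicted, below the maximum
}
CHOICES = ("silent", "confess")

def best_reply_for_a(b_choice):
    """A's sentence-minimizing choice, given what B does."""
    return min(CHOICES, key=lambda a_choice: SENTENCES[(a_choice, b_choice)][0])

# Whatever B does, A does better by confessing; by symmetry B reasons the same
# way, so both confess and serve five years although mutual silence would cost one.
for b_choice in CHOICES:
    print(f"If B chooses {b_choice!r}, A's best reply is {best_reply_for_a(b_choice)!r}")

outcome = ("confess", "confess")
print(f"Predicted outcome {outcome}: sentences {SENTENCES[outcome]}")
```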

  Game theory gained a boost during the early 1960s because it was presumed to have shaped nuclear strategy, although its actual influence was fleeting. It seemed to be of value because the core conflict could fit into a matrix as it was bipolar and between two alliances of roughly equivalent power. The conflict was clearly non-zero sum in that in any nuclear war both sides were likely to lose catastrophically. Thus they had a shared interest in peace, even while pursuing their distinct interests. There was no obvious way that the conflict would end, as the two alliances reflected opposing world views. There was a degree of stability in the relationship in terms of both the underlying antagonism and a fear of pushing matters to a decisive confrontation.

  The theory helped clarify the predicament facing governments. The challenge was to use it to generate strategies for dealing with the policy dilemmas it created. Formal methodologies were favored by some analysts as a means of engaging in systematic thought in the face of the otherwise paralyzing contingency of nuclear war. It was easier to cope with the awful implications of any move if the discussion was kept abstract and impersonal. Yet when contributing to policy, analysts had to move beyond the theory. It soon reached its limits when it came to addressing such questions as how vital interests could be defended without disaster when war was so dangerous, or whether it was possible to fight wars limited to conventional capabilities without escalation.

  CHAPTER 13 The Rationality of Irrationality

  This is a moral tract on mass murder: How to plan it, how to commit it, how to get away with it, how to justify it.

  —James Newman, review of Herman Kahn, On Thermonuclear War

  DESPITE BRODIE’S NOMENCLATURE, the first atomic weapons were not “absolute.” They were in the range of other munitions (the bomb that destroyed Hiroshima was equivalent to the load of some two hundred B-29 bombers). Also, at least initially, the weapons were scarce. The key development introduced by atomic bombs was less in the scale of their destructive power than in their efficiency. By the start of the 1950s, this situation had been transformed by two related developments. The first was the breaking of the U.S. monopoly by the Soviet Union, which conducted its first atomic test in August 1949. Once two could play the nuclear game, the rules had to be changed. Any thought of initiating nuclear war would henceforth be qualified by the possibility of retaliation.

  The second development followed from the first. In an effort to extend its effective nuclear superiority, the United States developed thermonuclear bombs, based on the principles of nuclear fusion rather than fission. This made possible weapons with no obvious limits to their destructive potential. In 1950 the American government assumed that the introduction of thermonuclear weapons would allow the United States and its allies time to build up conventional forces to match those of the Soviet Union and its satellites. When Dwight D. Eisenhower became president in January 1953, he saw things differently. He wanted to take advantage of American nuclear superiority while it lasted, and also reduce the burden of spending on conventional rearmament. By this time, the nuclear arsenal was becoming more plentiful and more powerful. The strategy that emerged from these considerations became known as “massive retaliation,” following a speech made by Secretary of State John Foster Dulles in January 1954, when he declared that in the future a U.S. response to aggression would be “at places and with means of our own choosing.”1

  This doctrine was interpreted as threatening nuclear attack against targets in the Soviet Union and China in response to conventional aggression anywhere in the world. Massive retaliation was widely criticized for placing excessive reliance on nuclear threats, which would become less credible as Soviet nuclear strength grew. If a limited challenge developed and the United States had neglected its own conventional forces, then the choice would be between “suicide or surrender.” Dependence upon nuclear threats in the face of an opponent able to make threats of its own sparked a surge of intellectual creativity—later described as a “golden age” of strategic studies.2 At its core was the key concept of deterrence, to be explored with a range of new methodologies designed to cope with the special demands of the nuclear age.

  Deterrence

  The idea that palpable strength might cause an opponent to stay his hand was hardly new. The word deterrence is based on the Latin deterrere—to frighten from or away. In its contemporary use it came to reflect an instrumental sense of seeking to induce caution by threats of pain. It was possible to be deterred without being threatened: one might, for example, be cautious in anticipation of how another might respond to a provocative act. As a strategy, however, deterrence involved deliberate, purposive threats. This concept developed prior to the Second World War in contemplation of strategic air raids. The presumption of civilian panic that had animated the first airpower theorists retained a powerful hold on official imaginations. The fear of the crowd led to musings on the likely anarchy that would follow sustained attacks. Although the British lacked capabilities for mass long-range attacks prior to the war, they doubted the possibility of defense and believed that only the threat of punitive attacks could hold Germany back. Ultimately, Britain had to rely on defense, which it did with unexpected success thanks to radar. The raids against Britain, and those mounted in return against Germany with even greater ferocity, resulted in terrible civilian pain but had limited political effects. Their main effect was on the ability to prosecute the war by disrupting production and fuel supplies. The surveys undertaken after the war demonstrated the modest impact of strategic bombing compared with the pre-war claims. But this did not really matter because the atom bomb pushed the dread to a new level. As Richard Overy put it, with air power the “theory had run ahead of the technology. After 1945 the two reached a fresh alignment.”3

  Deterrence answered the stark exam question posed by the arrival of nuclear weapons: What role can there be for a capability that has no tactical role in stopping armies or navies but can destroy whole cities? Answers in terms of war-fighting, though explored by the Eisenhower administration, appeared distasteful; answers in terms of deterrence promised the prevention of future war. It sounded robust without being reckless. It anticipated aggression and guarded against surprise but could still be presented as essentially reactive. The difficulty was whether deterrence could be expected to hold if it was self-evidently based on a bluff. Credibility appeared to depend on a readiness to convey recklessness, illustrated by another of John Foster Dulles’s comments about the need to be ready “to go to the brink” during a crisis. Thus the residual possibility of use left a formidable imprint, precisely because it would be so catastrophic.

  This reinforced the view that the main benefit of force lay in what was held in reserve. The military capacity of the West must never be used to its full extent, though for the sake of deterrence the possibility must exist. As decades passed, and the Cold War still did not turn hot, deterrence appeared to be working. At times of crisis there was a welcome caution and prudence all around. War was avoided because politicians were all too aware of the consequences of failure and the dangers of preparing to crush enemies with overwhelming force. The dread of total war influenced all considerations of the use of force, and not just those directly involving nuclear weapons. It was never possible to be sure where the first military step, however tentative, might lead.

  The impossibility of a fight to the finish affected all relations between the American and Soviet blocs. There developed “a predominance of the latent over the manifest, of the oblique over the direct, of the limited over the general.”4 If, as it seemed, there was no way of getting out of the nuclear age, then deterrence made the best of a bad job. It was often difficult to explain exactly how deterrence had worked its magic—and historians can point to some terrifying moments when catastrophe was round the corner—but a third world war did not happen. The fact that the superpowers were alarmed by the prospect of such a war surely had something to do with its failure to materialize.

  The importance of deterrence meant that considerable efforts were devoted to exploring the concept and examining its policy implications. Deterrence succeeded if nothing happened, which led to a problem when working out cause and effect. Inaction might reflect a lack of intention or an intention once present that had lapsed. Deterrence of an intended action could be due to a range of factors, including some unrelated to the deterrer’s threats and some related in ways the deterrer did not necessarily intend. According to the most straightforward definition, that deterrence depended on convincing the target that prospective costs would outweigh prospective gains, it could be achieved by limiting gains as much as by imposing costs. Preventing gain by means of a credible ability to stop aggression in its tracks became known as deterrence by denial,5 while imposing costs became deterrence by punishment. Denial was essentially another word for an effective defense, which if recognized in advance would provide a convincing argument against aggression. Thus the main conceptual challenges concerned punishment, especially the most brutal punishment of all: nuclear retaliation.
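  The distinction can be put in elementary arithmetic. The toy calculation below (the figures are invented and appear nowhere in the text) applies the definition above from the aggressor’s point of view: aggression is deterred when its prospective costs outweigh its prospective gains, whether because denial makes success unlikely or because punishment raises the price of trying.

```python
# A toy calculation, with invented figures, of the definition above: aggression
# is deterred when its prospective costs outweigh its prospective gains.
def deterred(p_success, gain_if_success, cost_if_punished):
    """From the aggressor's viewpoint: does expected cost outweigh expected gain?"""
    return p_success * gain_if_success < cost_if_punished

# Deterrence by denial: a credible defense cuts the chance that aggression succeeds.
print(deterred(p_success=0.2, gain_if_success=100, cost_if_punished=30))   # True
# Deterrence by punishment: retaliation raises the cost even of a successful attack.
print(deterred(p_success=0.9, gain_if_success=100, cost_if_punished=150))  # True
# Neither denial nor a credible punishment: aggression looks worthwhile.
print(deterred(p_success=0.9, gain_if_success=100, cost_if_punished=30))   # False
```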

  As deterrence became wedded to a foreign policy of containment, interpreted as preventing any Soviet advances, both major war and minor provocations had to be deterred, not just those directed against the United States but also those directed against allies, and even the enemy’s enemies. Herman Kahn, an early popularizer of some of the more abstruse theories of deterrence, distinguished three types: Type I involved superpower nuclear exchanges; Type II limited conventional or tactical nuclear attacks involving allies; and Type III addressed most other types of challenges.6 At each stage, the requirements in terms of political will became more demanding, especially once both sides had acquired nuclear arsenals. It was one thing to threaten nuclear retaliation to deter nuclear attack, quite another to threaten nuclear use to deter a non-nuclear event. Because it was always unlikely that the United States would be directly attacked by a major power with anything other than nuclear weapons, the most likely non-nuclear event to be deterred would be an attack on an ally. This requirement came to be known as “extended deterrence.” Because of the development of Soviet capabilities, U.S. methods of deterrence became less confident, moving from disproportionate to proportionate retaliation, from setting definite obstacles to aggression to warnings that should aggression occur the consequences could be beyond calculation, from assured and unconstrained threats of overwhelming force to a shared risk of mutual destruction.

 
