
Code Warriors


by Stephen Budiansky


  The 1946 coordination agreement left each service in complete control of its own intercept stations and cryptanalytic staff and preserved the Army’s exclusive authority over military and air force traffic and the Navy’s over naval traffic; it was only in “joint” areas of responsibility, diplomatic and commercial targets, that the new “coordinator of joint operations” and the interagency board to which he reported (most recently rechristened the U.S. Communications Intelligence Board, or USCIB) had any say. But the coordinator, who alternated annually between the chiefs of ASA and Op-20-G, had no actual authority to order anyone in the other agency to do anything. Knowledge was power when it came to bureaucratic turf fights, and the old habits of hoarding information continued unchanged. Rowlett was not above giving his Navy counterparts deliberately misleading information to avoid sharing ASA’s de facto control of the Russian one-time-pad problem; before a meeting with the Navy at Arlington Hall in June 1947 to discuss the project, he instructed Meredith Gardner not to reveal that he had already succeeded in reading some messages from a depth of two, and casually tried to steer the Op-20-G codebreakers into what he knew was a dead-end hunt for patterns in the one-time-pad keys, thus leaving the real work to his group at ASA.45

  The particular absurdity of the arrangement was that the agencies most directly interested in the results of the “joint” activity that came under USCIB’s purview—namely, the State Department, FBI, and now CIA—had no actual authority over how much effort would be allocated to these nonmilitary targets, or what priorities would be set among them. Although State and CIA were represented on USCIB, the board’s decisions had to be unanimous, giving each of the military services an effective veto over any initiatives they disagreed with. A proposal in 1948 to bring USCIB under the civilian-led National Security Council and give it the power to direct the collection and analysis of signals intelligence in areas of “national” importance met with furious resistance, this time from the Army, which protested that this would allow civilians to give orders to its intercept stations, interfering with the military chain of command. In the end, a much watered-down directive from NSC brought the board under its control, but still with the requirement of unanimity in its decisions and with its power limited to “authoritative coordination” rather than “unified direction.” The entire attempt to effect “coordination” through such an ungainly mechanism was “totally useless,” recalled Oliver Kirby, who worked on the Russian problem at ASA at the time. “I don’t know what they ‘coordinated.’ They didn’t bother us at all.”46

  This endless tinkering with feckless bureaucratic mechanisms left completely unaddressed an even more serious weakness in the U.S. signals intelligence system. Almost entirely for reasons that were quirks of history and personalities, both the Army and Navy maintained that the product of decrypting signals was “information,” not “intelligence.” That semantic invention was the outgrowth of a long fight between the services’ communications and intelligence branches for control of signals interception and codebreaking, going back to the 1920s; more by dint of their forceful and politically adept commanders than through persuasive arguments about organizational logic or military efficiency, the Army Signal Corps and Naval Communications always managed to retain the mission, repeatedly fighting off attempts by Army and Navy intelligence to wrest control away from them. Still, some plausible justification had to be invented for such an odd division of responsibilities: thus the euphemistic pretense that signals intelligence was not intelligence.

  But bureaucratic fictions often have actual consequences, and as a matter of policy Arlington Hall and Op-20-G were formally limited to supplying translations of their decrypts to the intelligence analysts of other departments, rather than producing a finished “intelligence product” themselves: they could report what an intercepted signal said, but not what it meant. That distinction continued into the postwar period, with the result that the Office of Naval Intelligence, Army G-2, the State Department, and CIA each produced their own analyses of the decrypted messages that the Army and Navy signals intelligence organizations provided. The duplication of effort was one thing; much worse was the utterly schizophrenic separation of the job of translation from analysis.

  Bletchley Park by contrast had confronted this problem head-on from the very start. The great elation at breaking the first German army Enigma messages in January 1940 had just as quickly turned to deflation when they turned out to be “a pile of dull, disjointed, and enigmatic scraps, all about the weather, or the petty affairs of a Luftwaffe headquarters no one had heard of…the whole sprinkled with terms no dictionary knew,” as a veteran of Hut 3 recalled.47 It was immediately apparent that even to produce a meaningful translation of a message, much less extract useful intelligence from it, was fundamentally a job of intelligence analysis. In time Hut 3 would amass a huge card file of cross-indexed names, terms, places—which time and again led to discoveries of the first importance from just such “dull and enigmatic scraps” in decoded Enigma messages. The first identification of the location of the Nazis’ rocket experiments, notably, came from an Enigma message reporting the transfer to Peenemünde of a junior Luftwaffe NCO who Hut 3 knew, from earlier signals, had worked on radio guidance systems. The top translators at Bletchley were intelligence officers first, who sifted myriad pieces to assemble an insightful whole.48

  The U.S. Army and Navy signals intelligence organizations never attempted to perform a similar role. Although Carter Clarke and Alfred McCormack assembled an impressive array of analytic and legal minds in the office McCormack advised setting up in the wake of the Pearl Harbor attack to make sure no signals intelligence warnings were ever missed again (known as Special Branch, it was part of the War Department General Staff’s G-2), its analysts were one step removed from the translators who were on intimate terms with the traffic on a daily and hourly basis. Now CIA, with its mandate to serve as the central authority for correlating, evaluating, and disseminating national intelligence, not only began to insist on direct access to the “raw” signals but aggressively objected if the codebreakers added the smallest annotation or interpretation. (“We had people all over us with both feet and clubs,” recalled Oliver Kirby, if an ASA linguist dared to add any context to a translation.)49 It was a fundamental disconnect in the American intelligence establishment that would never be fully resolved, leaving an enduring weakness that would cause officials at crucial moments in the decades to come to treat decoded messages with exaggerated reverence or breezy disdain, with equally fatal consequences in both cases.

  *

  *1 The Enigma’s rotors and plugboard (diagram, this page) had the effect of swapping the identities of thirteen letters of the alphabet with thirteen others at any given setting: if A was enciphered as F, then F was enciphered as A at that same position. Even with this constraint of “reciprocal” encipherment, a fantastic number of different cipher alphabets is possible; the total number of permutations is given by the expression 25 × 23 × 21 × 19 × 17 ×…× 5 × 3 × 1 = 7,905,853,580,625.
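  The footnote’s arithmetic can be checked directly. As a minimal sketch (in Python, not anything from the book), the product of the odd numbers from 25 down to 1 is the number of ways to pair off 26 letters into 13 reciprocal swaps, which also equals 26!/(2^13 × 13!):

```python
# Quick check of the footnote's count (illustrative, not from the book): the number
# of ways to pair 26 letters into 13 reciprocal swaps is 25 x 23 x ... x 3 x 1,
# which also equals 26! / (2**13 * 13!).
from math import factorial

product_of_odds = 1
for n in range(25, 0, -2):   # 25, 23, ..., 3, 1
    product_of_odds *= n

pairings = factorial(26) // (2**13 * factorial(13))

assert product_of_odds == pairings == 7_905_853_580_625
print(product_of_odds)  # 7905853580625
```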

  *2 John Cairncross, who worked at Bletchley for one year in 1942–43, was one; like a number of other Soviet spies of his generation who would rise to high levels in the British government, he was recruited at Cambridge in the 1930s by his tutor Anthony Blunt, an art scholar and later Soviet mole inside MI5. After the war, as a senior treasury official, Cairncross passed information to the Soviets about the British atomic bomb program. He came under suspicion in 1951, and finally confessed in 1964. The other spy at Bletchley, code-named BARON, worked in Hut 3 in 1941 and has never been identified.

  4

  Digital Dawn

  In the summer of 1944, Lieutenant Herman Goldstine, a thirty-year-old mathematician, was waiting for the train to Philadelphia at the railroad station in Aberdeen, Maryland, when he spotted one of the giants of his field walking toward him on the platform.

  John von Neumann had joined the faculty at Princeton’s Institute for Advanced Study in 1933, the same year as Albert Einstein. Von Neumann was one of a galaxy of astonishing scientific minds who had fled the rising anti-Semitism of Europe for America and Britain in the years before the war. So many were from Hungary—besides von Neumann, the Hungarian contingent included the renowned physicists Edward Teller, Leo Szilard, and Eugene Wigner and the aerodynamicist Theodore von Kármán—that their colleagues joked they must be a race of super-intelligent beings from another planet who had adopted the cover story of being Hungarian to explain away their accented English.

  Von Neumann had been an intellectual prodigy as a child, able to divide eight-digit numbers in his head at age six. Throughout his life he could effortlessly recite entire books verbatim after a single reading, and equally effortlessly provide a running translation in any number of languages. Years later, after he got to know him well, Goldstine tried to test von Neumann by asking him how Charles Dickens’s A Tale of Two Cities begins. He was still going fifteen minutes later, without pause, when Goldstine finally stopped him.1 As a scientist, von Neumann had made seminal contributions to a bewildering array of fields, including game theory, quantum mechanics, economics, topology, and the theory of shock waves. Besides his eminent position at Princeton, he was now also working as a consultant to the Manhattan Project, already looking ahead to the possibility of the “super,” or hydrogen, bomb.

  At the time of their first meeting Goldstine was the U.S. Army’s liaison to a small group of mathematicians and engineers at the University of Pennsylvania’s Moore School of Electrical Engineering who were designing what would be one of the world’s first digital electronic calculating machines. Called ENIAC—the letters stood for Electronic Numerical Integrator and Computer—the device was being built for the Army’s Ballistics Research Laboratory at the Aberdeen Proving Ground, which was hoping to automate the extremely labor-intensive calculations involved in creating aiming tables that allowed gunners and bombardiers to predict the trajectories of artillery shells, bombs, and missiles. That day on the train platform the younger man, with some temerity, approached his world-famous colleague and introduced himself:

  Fortunately for me von Neumann was a warm, friendly person who did his best to make people feel relaxed in his presence. The conversation soon turned to my work. When it became clear to von Neumann that I was concerned with the development of an electronic computer capable of 333 multiplications per second, the whole atmosphere of our conversation changed from one of relaxed good humor to one more like the oral examination for the doctor’s degree in mathematics. Soon thereafter the two of us went to Philadelphia so that von Neumann could see the ENIAC.2

  ENIAC contained seventeen thousand vacuum tubes, weighed thirty tons, and took days to program for each calculation by means of huge banks of patch cords and switches. But the group was already thinking about the next generation of computers that would follow, and von Neumann enthusiastically joined in the discussions. A few months later he sent Goldstine a 101-page draft distilling the ideas for the architecture of the new computer. The key point was that its program instructions would be stored in the computer’s memory itself, providing a flexibility and general-purpose adaptability so far lacking in all electronic computing machines. Goldstine later called the paper “the most important document ever written on computers and computing.” Von Neumann’s enthusiasm for the possibilities of computers led the Los Alamos scientists to submit the very first problem run on the ENIAC, an extremely complex calculation related to the hydrogen bomb’s design.3

  EDVAC, ENIAC’s successor, was still on the drawing board in the summer of 1946 when the Moore School decided to invite a select handful of government and academic mathematicians to a special summer school lecture series on the project; as a result, the world’s first computer course predated the world’s first stored-program computer. The “Course in Theory and Techniques for Design of Electronic Digital Computers” consisted of eight weeks of lectures and seminars presented by EDVAC’s developers and outside experts, von Neumann among them.

  Howard Campaigne, who was about to take on his new civilian job of chief of mathematical research at Op-20-G, heard about the course from his old boss Howard T. Engstrom. As head of Op-20-G’s research section during the war, Engstrom, a former Yale mathematics professor, had always made nurturing talent his chief priority. A courtly and reserved man, he was, several colleagues later recalled, “a conceptualist” who thought about the process of making advances in the field rather than getting caught up in day-to-day crises. Engstrom knew he would never be a mathematical genius himself but felt he could make his most important contribution by encouraging others and creating the environment that would allow them to get on with their work. He often acted as an effective buffer between his research staff and the more hard-charging naval commanders in the organization who demanded instant results. Engstrom had since left the Navy to start his own research company, but kept in close touch with his former colleagues at Nebraska Avenue.4

  And so when Engstrom called one day and told Campaigne he really ought to find someone, quick, to send to the Moore School course, Campaigne needed no persuading. He chose a young mathematician on his staff, Lieutenant Commander James Pendergrass, who did need some persuading. Pendergrass was on leave, and the whole thing was so sudden there was no time for him to receive written orders, which meant he had to pay his travel expenses out of his own pocket—“always a dangerous thing to do in the Navy,” Pendergrass said, because if there was a subsequent hitch in approval of the orders you were stuck.5

  “I wasn’t thinking in terms of computing at all at the time,” Pendergrass recalled of his introduction to computers. But the thing that struck him most of all as he sat through von Neumann’s and the others’ descriptions of their computer design was that it really wasn’t a “computer” at all. It was fundamentally a logic machine that manipulated discrete pieces of data; in fact, it had to be “perverted” into doing mathematical calculations at all, Pendergrass thought. “You know,” he told his boss on his return to Washington at the end of the summer, “this is just absolutely ideal for our business. It’s better for our business than it is for the mathematicians.”6

  Pendergrass quickly wrote up a paper in which he worked out several examples of computer programs that could perform basic cryptanalytic tasks on the new machine. The speed of the digital computer was of course one advantage, but equally important was its flexibility in manipulating data according to a logical sequence of instructions, and the prospect of soon-to-be-available mass storage devices that could hold large amounts of data for ready access and comparison. “The purpose of this report is to set forth the reasons why the author believes that the general purpose mathematical computer, now in the design stage, is a general purpose cryptanalytic machine,” Pendergrass began his report. Referring to the large array of bombes and other special-purpose cryptanalysis devices the Navy had acquired at its Nebraska Avenue complex during the war, he continued, “It is not meant that a computer would replace all machines in Building 4, nor is it meant that it could perform all the problems as fast as the existing special purpose machines. It is, however, the author’s contention that a computer could do everything that any analytic machine in Building 4 can do, and do a good percentage of these problems more rapidly.”7
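  The flavor of such a task is easy to suggest in modern terms. The sketch below is a hypothetical illustration in Python, not one of the examples from Pendergrass’s report: it counts the coincidences between two cipher streams, the sort of test used to spot messages “in depth,” and involves nothing but symbol-by-symbol comparison and tallying rather than real mathematics, which was exactly Pendergrass’s point about the machine being a logic engine first.

```python
# Hypothetical illustration only -- not drawn from Pendergrass's 1946 report.
# Counting coincidences between two cipher streams is the kind of basic
# cryptanalytic task that is pure comparison and tallying, a natural fit for a
# general-purpose stored-program machine.
def coincidence_rate(stream_a: str, stream_b: str) -> float:
    """Fraction of positions at which two equal-length cipher streams agree."""
    assert len(stream_a) == len(stream_b) and stream_a
    matches = sum(1 for a, b in zip(stream_a, stream_b) if a == b)
    return matches / len(stream_a)

# Two messages enciphered with the same additive key ("in depth") agree at roughly
# the coincidence rate of plain English, about 6 to 7 percent, while unrelated
# streams agree at about 1 in 26, or 3.8 percent.
print(coincidence_rate("XMCKLQPWE", "XQCKLRPWE"))  # 7 of 9 positions match in this toy case
```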

  Princeton, Harvard, and the National Bureau of Standards were all working on computers as well, but there was little prospect that they would have time to spare to build one just for the Navy codebreakers. National Cash Register, which had built the U.S. Navy bombes, had already made clear that it was not interested in continuing its wartime collaboration with Op-20-G and was anxious to get back to its commercial business, as were IBM and Eastman Kodak, which had helped with R&D on special-purpose cryptanalytic equipment.

  The company Howard Engstrom founded upon leaving Op-20-G was hardly on a par with such industrial technology powerhouses. But it had already begun to show it could deliver on R&D tasks that the Navy codebreakers needed done. That had been the whole idea behind the company, and Wenger in fact had pulled every string imaginable to help set it up as a way to preserve some of Op-20-G’s technical expertise amid the postwar downsizing, and to continue the kind of arrangement it had with NCR to build the secret analytic machines it might need in the future. Those machines, the Navy codebreakers now swiftly decided, were going to include a pioneering digital electronic computer.

  —

  The whole arrangement with Engstrom’s company was more than a little irregular. The founding technical partners were Engstrom; his Op-20-G colleague Bill Norris, an outgoing electrical engineer from Nebraska who had sold X-ray machines for Westinghouse before the war; and Captain Ralph I. Meader, who had commanded the Navy contingent (including three hundred women, members of the Navy’s WAVES) that helped build the bombes at NCR’s Dayton factory. The Navy had no legal authority to set up a private company, but behind the scenes Rear Admiral Joseph R. Redman, the director of naval communications, “verbally authorized” the project in February 1945 and asked some top Navy officials to see if they could put Engstrom and his colleagues in touch with likely financial backers to get the company off the ground. Secretary of the Navy James Forrestal did not have to look very far: one of his staff aides was Captain Lewis Strauss, partner of the Wall Street investment bank Kuhn, Loeb. But after months of discussions with Engstrom, Strauss’s Wall Street associates concluded that the idea would never be a financial success and decided they could not put any money into it.8

 
