Quantum Legacies: Dispatches From an Uncertain World

by David Kaiser


  The unlikely setting for the transformation was the Institute for Advanced Study in Princeton, New Jersey. The Institute had been founded in 1930 with generous funding from department-store magnate Louis Bamberger and his sister, Caroline Bamberger Fuld. Advised by education reformer Abraham Flexner, the founders aimed to establish a place for budding young intellectuals to pursue their scholarship beyond their doctorates, before the routines of university life—teaching, committee work, and all that—could dampen their creativity. Flexner sought some quiet place where scholars would have no institutional obligations to write lectures or publish reports; a place where they could sit and think. “Well, I can see how you could tell whether they were sitting,” quipped leading science policymaker Vannevar Bush.5

  Flexner began to staff the Institute with world-famous intellectuals who were fleeing fascism in Europe. Albert Einstein became the first permanent faculty member in 1933; he was soon joined by eccentric, lonely geniuses like logician Kurt Gödel, who eventually starved himself to death out of paranoid fear that people were trying to poison his food. Even after Oppenheimer became director of the Institute in 1947, fresh from his frenzied work as director of Los Alamos, the Institute remained closer in spirit to a monastery than a laboratory—a place far more likely to stack leather-bound copies of Bierens de Haan’s old Nouvelles tables d’intégrales on its shelves than to host the whir of lathes and drills. A New Yorker reporter observed in 1949 that the Institute had a “little-vine-covered-cottage atmosphere.” Around that time Hans Bethe advised a young physicist who was about to relocate to the Institute for a year “not to expect to find too much going on” there.6

  The calm began to be disturbed by members of a new team assembled by legendary mathematician John von Neumann. Born in Budapest in 1903, von Neumann published his first mathematical research papers at the tender age of nineteen and completed his doctorate at twenty-two, before fleeing the Continent as Europe descended into turmoil. Flexner scooped him up and added him to the Institute’s roster soon after Einstein had joined. Von Neumann spent much of the war at Los Alamos, working alongside Bethe and Oppenheimer on nuclear weapons. While immersed in that work, he became gripped by a vision as remarkable as Charles Babbage’s a century before: perhaps one could build a machine to calculate. Von Neumann was motivated not only by curiosity about the working of the human brain and the essence of cognition (though he was fascinated by such topics). He had more pressing concerns as well: he needed to know whether various designs for nukes would go boom or bust.7

  The wartime weapons project gave von Neumann a taste for semiautomated computation. Among the challenges he and colleagues faced was tracking, in some quantitative way, the likely outcomes when neutrons were introduced into a mass of fissionable material. Would they scatter off the heavy nuclei, get absorbed, or split them apart? Equally important: how would a shock wave from exploding charges propagate through the bomb’s core? During the war, calculations like these were largely carried out by chains of human operators armed with hand-operated Marchant calculators, a process recounted in David Alan Grier’s fascinating book When Computers Were Human (2005). Young physicists like Richard Feynman carved up the calculations into discrete steps, and then assistants—often the young wives of the laboratory’s technical staff—would crunch the numbers, each assistant performing the same mathematical operation over and over again. One person would square any number given to her; another would add two numbers and pass the result to the next woman down the line.8
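
  To make that division of labor concrete, here is a purely illustrative sketch in Python (the operations and numbers are invented, not drawn from the Los Alamos work): each “worker” repeats a single fixed operation, and a batch of inputs moves through the same sequence of steps.

```python
# Purely illustrative: a toy "assembly line" in which each worker repeats one
# fixed arithmetic operation, echoing the division of labor described above.
# The operations and input numbers are invented for clarity.

def squarer(x):
    """One worker squares whatever number she is handed."""
    return x * x

def adder(a, b):
    """The next worker adds two numbers and passes the sum down the line."""
    return a + b

# A batch of inputs moves through the same fixed sequence of steps.
inputs = [(1.5, 2.0), (0.3, 4.2), (2.7, 1.1)]
results = [adder(squarer(a), squarer(b)) for a, b in inputs]
print(results)  # sums of squares for each input pair
```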

  That rough-and-ready process had worked well enough for wartime calculations pertaining to fission bombs. But hydrogen bombs were a whole different beast—not just in potential explosive power but computationally as well. Their internal dynamics, driven by a subtle interplay between roiling radiation, hot plasma, and nuclear forces, were far more complicated to decipher. Trying to determine whether a given design would ignite nuclear fusion—forcing lightweight nuclei to fuse together as they do inside stars, unleashing thousands of times more destructive power than the fission bombs that were dropped on Hiroshima and Nagasaki—or whether it would fizzle posed tremendous computational challenges. Such calculations could never be completed by teams of people brandishing Marchant calculators. They required, or so von Neumann concluded, a fully automated means of solving many complicated equations at once. They required an electronic, digital computer that could execute stored programs.9

  Some of the original ideas behind stored-program computation had been invented before the war by British mathematician and cryptologist Alan Turing, and indeed, the world’s first instantiation of Turing’s ideas was completed by a team in Manchester in 1948. Much like the Manhattan Project and the wartime radar program, however, what had begun as a British idea was scaled up to industrial size by the Americans. One group, centered at the University of Pennsylvania and sponsored by the US Army, had been hard at work on a similar device since 1943, code-named ENIAC for “Electronic Numerical Integrator and Computer.” Immediately after the war the Pennsylvania group gained competition, as von Neumann began to amass government contracts to build his own computer at the Institute in Princeton. His team included several young engineers as well as his talented wife, Klári, who dove into the minutiae of coding bomb simulations and coaxed the machine along as it ran for days on end.10

  Von Neumann had rubbed shoulders with Turing at the Institute during the 1930s, while Turing worked on his dissertation at nearby Princeton University. During the war, von Neumann had also consulted on the ENIAC in Pennsylvania. In fact, he helped to redirect the project from its original mandate—calculating artillery tables for the Army’s ballistics laboratory—to undertaking calculations on behalf of Los Alamos for its nuclear weapons designs. At the time, the Pennsylvania machine could execute only fixed programs: one had to set a given program by physically rewiring components ahead of time, before any results could be calculated. Changing the program took weeks of physical manipulation—swapping cables, flipping switches, checking and rechecking the resulting combinations. Like the ENIAC’s inventors, von Neumann sought some means by which a computer could store its program alongside the resulting data, in the same memory. Just as Turing had envisioned, such a machine would store its instructions and its results side by side.
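
  The stored-program idea is easy to illustrate in miniature. The sketch below is a toy, not a model of the ENIAC or of the Institute machine: a few instruction cells and a few data cells occupy the same memory list, so “reprogramming” means rewriting cells rather than rewiring hardware.

```python
# Toy illustration only (not any historical machine's instruction set):
# program and data share one memory, so changing the program means rewriting
# memory cells rather than re-plugging cables and resetting switches.

memory = [
    ("LOAD", 8),     # cell 0: copy the value in cell 8 into the accumulator
    ("ADD", 9),      # cell 1: add the value in cell 9 to the accumulator
    ("STORE", 10),   # cell 2: write the accumulator into cell 10
    ("HALT", None),  # cell 3: stop
    None, None, None, None,
    5,               # cell 8: data
    7,               # cell 9: data
    None,            # cell 10: the result will land here
]

accumulator, program_counter = 0, 0
while True:
    op, addr = memory[program_counter]
    program_counter += 1
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[10])  # 12
```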

  Designed before the invention of the transistor, von Neumann’s computer required more than two thousand vacuum tubes to work in concert. Such tubes were already a decades-old technology. They produced electric current by boiling electrons off a heated chunk of metal—hardly the image of today’s sleek laptops or smartphones. To counter the constant heating from the tubes, the machine required a massive refrigeration unit, capable of producing fifteen tons of ice per day. With astounding dexterity, von Neumann’s small but energetic team brought their room-sized computer to life during the late 1940s. By the summer of 1951, the machine was chugging full-time on H-bomb calculations, running around the clock for two-month stretches at a time. When operating at full capacity, the Institute computer could boast 5 kilobytes of usable memory. As George Dyson notes in his engaging account of von Neumann’s project, that’s about the same amount of memory that today’s computers use in half a second when compressing music files.11

  Figure 6.1. John von Neumann (left) talks with J. Robert Oppenheimer, director of the Institute for Advanced Study, near a portion of the Institute’s electronic computer, ca. 1952. (Source: Photograph by Alan Richards, courtesy of the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton, NJ, USA.)

  The Institute computer project was fueled largely by contracts from the Atomic Energy Commission, the postwar successor to the Manhattan Project. Those contracts stipulated that virtually no information about thermonuclear reactions could be released to the public—a position that President Harry S. Truman reiterated when, on the last day of January 1950, he committed the United States to crash-course development of an H-bomb. As a cover for their main task, therefore, von Neumann’s team also worked on unclassified problems as they put their new machine through its paces. Weather prediction became a popular challenge. Meteorology featured many of the kinds of complicated fluid flows that weaponeers also had to understand inside their H-bomb designs. Other early applications of the computer included simulations of biological evolution, whose processes branched like so many scattered neutrons inside a nuclear device.

  At the end of the 1950s, physicist and novelist C. P. Snow diagnosed a clash between “two cultures”: literary intellectuals versus natural scientists.12 At the Institute, von Neumann’s electronic monster induced a sharp clash of cultures, but not along the fault lines that Snow had in mind. Rather, the gap yawned between notions of an independent scholarly life and the teamwork regimen of engineers. By 1950, the budget for von Neumann’s computer project, drawn almost entirely from government defense contracts, dwarfed the entire budget for the Institute’s School of Mathematics. More than mere money was at stake; so, too, were ways of life. As Institute mathematician Marston Morse wrote to a colleague in the early 1940s, “In spirit we mathematicians at the Institute would cast our lot in with the humanists.” Mathematicians, Morse continued, “are the freest and most fiercely individualistic of artists.”13 Morse’s Institute neighbor, Einstein, agreed. Reviewing an application from a young physicist to the Guggenheim Foundation in 1954, Einstein conceded that the proposed topics of study were worthy, but he considered the overseas trip unnecessary: “After all everybody has to do his thinking alone.”14 At the Institute, the computer project did not pit science against the humanities; the battle was between ideals of the Romantic Genius versus the Organization Man.

  That difference in temperament was expressed in more tangible ways as well. The computer project was originally housed in the basement of the Institute’s main building, Fuld Hall—out of sight, even if the clanging did not keep the engineers quite out of mind. Soon the computer group was moved to facilities further from the Institute’s more solitary scholars. But even the new building required a telling compromise. The drab, functional, concrete building envisioned by the government sponsors would never have satisfied the aesthetes among the Institute’s main residents, so the Institute paid an additional $9,000 (nearly $100,000 in today’s currency) to cloak the new building with a brick veneer.

  In the end, the Institute’s computer project became a victim of its own success. Von Neumann moved more squarely into policymaking for the atomic age when, in 1955, he was tapped to serve as one of the five members of the Atomic Energy Commission—soon after the commission had infamously stripped von Neumann’s boss, Oppenheimer, of his security clearance. (Much of Oppenheimer’s hearing had turned on his counsel against H-bomb development.) Oppenheimer remained in charge of the Institute, even as von Neumann spent less and less time there. With von Neumann away from campus, the computer project had no local lion to protect its turf. Other centers across the country, meanwhile, began to make fast progress on replica machines. One of them, at the US Air Force–funded think tank RAND, was even dubbed the “Johnniac,” in honor of von Neumann. Von Neumann succumbed to cancer in 1957; the Institute computer project limped along for a few more months until it was finally shuttered in 1958. By that time, computation no longer relied upon solitary scholars poring over dusty reference volumes. The computer age had arrived.15

  I reread Bethe’s 1947 memo and the accompanying table of integrals a few summers ago while traveling in rural Montana. A tire on my rental car had picked up a nail somewhere between Bozeman and Kalispell. While I waited for a mechanic to get me roadworthy again, I flipped open my laptop and began to reevaluate some of the integrals that had been so carefully guarded sixty-five years earlier. These days run-of-the-mill software on typical laptops can evaluate such integrals in microseconds—the limiting factor is one’s typing speed rather than the processing power of the machine. Whereas the 1947 table listed numerical answers to four decimal places, my laptop spit back answers to sixteen decimal places quicker than I could blink. A few more keystrokes and I could have any answer I wanted to thirty decimal places. No need for access to a fancy library; no need to slog through a savant’s treatise in Dutch. I could sit in the service station, beside a dusty back road, and calculate.
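
  For readers who want to try something similar, the snippet below shows one way to do it; the integrand is a stand-in rather than one of the integrals from Bethe’s 1947 table, and it assumes the freely available Python library mpmath.

```python
# A stand-in example (the integrand is not taken from Bethe's 1947 table):
# ordinary software on an ordinary laptop returns dozens of digits almost
# instantly. mpmath is one freely available option.
from mpmath import mp, quad, exp, log

mp.dps = 30  # request thirty decimal places of working precision
value = quad(lambda x: exp(-x) * log(1 + x), [0, mp.inf])
print(value)
```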

  7

  Lies, Damn Lies, and Statistics

  “Russia Is Overtaking U.S. in Training of Technicians,” blared a typical front-page headline in the New York Times in 1954. “Red Technical Graduates Are Double Those in U.S.,” echoed the Washington Post.1 Even after news of once-secret wartime efforts like radar and the Manhattan Project had spurred lightning-fast growth in physics enrollments across the country, many feared that the nation’s technical workforce remained insufficient to meet the demands of the Cold War. Soviet advances in training young physicists seemed especially menacing. Widespread fears that the Soviet Union had surpassed the United States in training large cadres of physical scientists drove a massive escalation in American efforts to train more specialists. Graduate enrollments in physics surged even more quickly after the surprise launch of the Soviet Sputnik satellite in October 1957. Yet the runaway growth proved unsustainable. Less than fifteen years after Sputnik, funding, enrollments, and job opportunities for young physicists in the United States collapsed.

  Looking back at those wild swings today, a single pattern emerges. Whether one plots research funding, graduate-level enrollments, or job listings in physics over time, the curves all look the same: up races the curve until all at once the bottom falls out, with a crash as steep as the headlong rise had been. To our eyes today, in fact, such curves seem eerily familiar. Change the labels and the same graph could just as well describe a tech-stocks boom or a housing-market bust. During the Cold War, in other words, physics training in the United States had become a speculative bubble.

  Economist Robert Shiller defines a speculative bubble as “a situation in which temporarily high prices are sustained largely by investors’ enthusiasm rather than by consistent estimation of real value.” He emphasizes three distinct drivers for bubbles: hype, amplification, and feedback loops. Consumers’ enthusiasm for a particular item—be it an initial public offering for the latest Silicon Valley start-up or a hip loft in SoHo—can attract further attention to that item. Increased media attention, in turn, can elicit further consumer investment, and the rise in demand will drive prices up even further. Before long, the price increase can become a self-fulfilling prophecy. “As prices continue to rise, the level of exuberance is enhanced by the price itself,” Shiller explains.2
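
  A toy calculation makes the feedback loop visible. The parameters below are invented, not Shiller’s; the point is only that when each period’s exuberance is proportional to the most recent price rise, the rise feeds on itself.

```python
# Toy feedback loop (invented parameters, not Shiller's model): each period's
# "exuberance" is proportional to the most recent price rise, so rising
# prices beget further rises.
price, previous = 100.0, 100.0
for period in range(1, 9):
    exuberance = 0.8 * (price - previous)              # enthusiasm enhanced by the price itself
    previous, price = price, price + 2.0 + exuberance  # modest underlying growth plus hype
    print(f"period {period}: price = {price:.2f}")
```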

  As with stock prices, so with graduate training. The skyrocketing enrollments in fields like physics during the Cold War were fed by earnest decisions based on incomplete information, intermixed with hope and hype that had little grounding in fact. Feedback loops among scientists, journalists, and policymakers kept the demand for young physical scientists artificially inflated. Faulty assumptions that could easily have been checked instead came to seem natural, even inevitable, when refracted by geopolitical developments. When those conditions changed abruptly, physics had nowhere to go but down.

  : : :

  My favorite example of the hype-amplification-feedback process concerns a series of reports that were undertaken during the 1950s on Soviet advances in training scientists and engineers. As early as 1952, while the Korean War smoldered, several analysts began trying to assess Soviet “stockpiles” of scientific and technical manpower—those cadres who seemed so “essential for survival in the atomic age,” as one New York Times reporter put it. Three lengthy reports were prepared to assess what became known as the “cold war of the classrooms”: Nicholas DeWitt’s Soviet Professional Manpower (1955), Alexander Korol’s Soviet Education for Science and Technology (1957), and DeWitt’s Education and Professional Employment in the USSR (1961).3

  The three studies shared many features. Each was conducted in Cambridge, Massachusetts, by researchers who had themselves undergone some of their training in Russia and the Soviet Union. Nicholas DeWitt completed the first report, Soviet Professional Manpower, while working at Harvard’s Russian Research Center. The center had been established in 1948 with aid from the US Air Force and the Carnegie Corporation; throughout this period it also maintained close ties with the Central Intelligence Agency. DeWitt, a native of Kharkov, in Ukraine, had begun his training at the Kharkov Institute of Aeronautical Engineering in 1939, before the Nazi invasion forced him to flee. He eventually landed in Boston in 1947 and enrolled as an undergraduate at Harvard the following year. In 1952, honors degree in hand, DeWitt began working as a research associate at the Russian Research Center while pursuing graduate study at Harvard in regional studies and economics. The National Science Foundation and the National Research Council jointly sponsored his investigation into Soviet scientific and technical training. Colleagues called him compulsive, an “indefatigable digger,” and it showed: his massive follow-up study, Education and Professional Employment in the USSR, ran to 856 pages, featuring 257 tables and 37 charts in the main text alone, followed by 260 dense pages of appendices.4

  The other major report, Alexander Korol’s Soviet Education for Science and Technology, took shape down the street at MIT’s Center for International Studies. Like the Russian Research Center at Harvard, MIT’s center (founded in 1951) also maintained close ties to the CIA, which secretly bankrolled Korol’s study. Korol, like DeWitt, was a Soviet expatriate who had first trained in engineering. Korol enlisted aid from several MIT faculty in the sciences and engineering to help him gauge the quality of Soviet pedagogical materials. He completed his study in June 1957; the book’s preface, by the center’s director Max Millikan, was dated 18 October 1957, barely two weeks after the surprise launch of Sputnik. The report was immediately heralded as “fastidious,” “perhaps the most conclusive study ever made of the Soviet education and training system.” Others marveled at the “400 pages of solid factual data” crammed between the book’s covers.5

 
