Darwin Among the Machines


by George B. Dyson


  The group worked hard and played hard, and despite (or because of) delays attributed to Bigelow’s meticulous attention to detail, the machine got finished and it worked. “The rate at which Julian could think, and the rate at which Julian could put ideas together was the rate at which the project went,” Ware observed.34 Although received coolly by the Institute (“We were doing things with our hands and building dirty old equipment. That wasn’t the Institute,” said Ware), the engineers were welcomed at von Neumann’s home and treated to the hospitality that was a trademark of von Neumann’s scientific career. My father, a visiting member at the Institute in 1948, remembers how the Institute’s abstract and theoretical atmosphere was enlivened by “von Neumann and his band of freaks.” Although Institute members were known for eccentric driving habits, Ware singled out one occasion when James Pomerene and Nick Metropolis drove home from one of the von Neumann gatherings in reverse.

  In late 1946, the AEC agreed to provide funds for a plain concrete (and officially temporary) building to house the computer along with some modest facilities for its construction and operational support. The Institute agreed to provide a brick veneer, cautiously accepting the structure as an outlying satellite of Fuld Hall. Arthur Burks reminisced about “going with Herman [Goldstine] and Oswald Veblen to pick a site for the new building. And we walked through the woods, but it was clear that Veblen didn’t want any trees to be cut down. . . . In the end, he picked a site which was low down, not too far away from the Institute building so it wasn’t inconveniently far away. He wanted the building to be one story only, so that this would not be a conspicuous building.”35

  The computer, on the other hand, was conspicuous—for the unprecedented power and economy of its design and the resourcefulness with which these principles were engineered. Von Neumann’s mathematical vision was translated into the visible elegance of a machine. In physical appearance it resembled a turbocharged V-40 engine, about six feet high, two feet wide, and eight feet long. The computer weighed one thousand pounds; the air-conditioning unit weighed fifteen tons. Overhead ducts removed 52,000 Btu of waste heat per hour via a network of cooling channels that infiltrated the core of the machine. Some twenty-six hundred vacuum tubes were neatly arranged in a series of shift registers and accumulators that shuffled electrons through gates, toggles, and switches at up to one million cycles per second, executing precisely those binary processes that Leibniz had envisioned performing with marbles in 1679. The geometry was compact (“perhaps too compact for convenient maintenance,” admitted Bigelow), but a minimal connection path between components was achieved by these convolutions in the chassis, like the folding of a cerebral cortex into a skull. The forty cylinders arranged in a bank of twenty along each side of the base of the machine contributed the driving force (and chief obstacle) to its design. They contained the world’s first fully-random-access memory, or RAM. There were only 1,024 bits per cylinder, but with twenty-four-microsecond access time this was more horsepower than the young science of electronic digital processing had ever seen.

  Digital computers, since the time of Babbage, had relied on serial memory (although the need for random access was recognized in the way in which Babbage’s mechanical “store” of variables was to be made available to his arithmetic “mill”). No matter what the medium—paper tape, punched cards, or magnetic media—the processor shuffled through the contents of its memory in sequence, with associated delays. IBM’s Selective Sequence Electronic Calculator (SSEC), completed in 1948 and housed in a windowed showroom on Fifty-seventh Street in New York, represented the punched-tape dinosaur against which the IAS machine would play the mouse that roared. The SSEC stored some twenty thousand 20-digit numbers on eighty-track paper tape, written by three punching units and referred to by a formidable array of sixty-six reading heads. Despite this ability to consult its memory in sixty-six places at once, access to a given location could take up to a second, impressive to onlookers but not to the future of IBM. Acoustic delay-line memory, though a thousand times faster, required ingenious coding and precise synchronization—a challenge similar to trying to play a game of cards while shuffling the deck.

  In association with Vladimir Zworykin and Jan Rajchman at RCA, von Neumann arranged to develop a digital memory tube for the IAS computer, christened the Selectron. Information was written by an electron beam projected through an electromagnetic mask controlled by digital switching and read from an array of 4,096 separate targets (tiny, nickel-plated eyelets arranged like Cheerios on a mica sheet) that shifted state individually to store one bit of data (accessible at random) each. After two years, there were no Selectrons in existence (“They were doing things inside that vacuum that hadn’t been done before,” said Ware), although a 256-bit version was eventually produced in limited quantities and used successfully in the IAS-derived JOHNNIAC built at Rand. The IAS team decided to pursue its own alternative, using commercially available parts.

  The IAS memory was based on the Williams tube—an ordinary cathode-ray tube (CRT) modified to allow data to be read, written, and continuously refreshed as a pattern of charged spots on the phosphor coating inside the tube. The state of an individual spot was distinguished by “interrogating” the spot with a brief pulse of electrons and noting the character of a very faint secondary current induced in a wire screen attached to the tube’s outside face. Von Neumann had discussed the underlying concept—in principle similar to Zworykin’s iconoscope but operating in reverse—while at the Moore School in 1944 and explored its possible use as a high-speed storage medium in the EDVAC report of 1945. Frederick C. Williams, after working on pulse-coded IFF (Identification Friend or Foe) radar systems at England’s Telecommunications Research Establishment during the war, developed a practical version in 1946 and succeeded in building a small computer at Manchester University, under the direction of M. H. A. Newman, that demonstrated CRT-based storage and a rudimentary stored program in June 1948. The prototype operated in serial mode, cycling through the pattern of spots in a series of traces, like an oscilloscope or a television, thereby reading and writing the entire sequence of bits thousands of times per second—a vastly accelerated version of one of the loops of paper tape used by the Colossus at Bletchley Park. You could watch the bits of information dancing on the screen as a computation proceeded, and Turing, who soon joined the Manchester group, was noted for his ability to read numbers directly off the screen, just as he had been able to read binary code directly from teletypewriter tapes as intercepted messages were being sorted out.

  It was evident (as had been recognized by Zworykin in the 1930s) that random access was possible if suitable control circuits for the electron-beam deflection voltages were engineered. Bigelow paid a visit to Manchester in June 1948, and the IAS team soon developed switching circuits that could read or write to any location at any time, appropriating a few microseconds before resuming the normal scanning and refresh cycles where they left off. The resulting memory organ was in effect an electronically switched 32 × 32 array of capacitors but was, as Bigelow noted, “one of mankind’s most sensitive detectors of electromagnetic environmental disturbances.”36 The internal coating had to be flawless, and shielding had to be religiously maintained. RCA and one other manufacturer allowed the IAS to scan their inventory for unblemished specimens and ship the other 80 percent of them back. A forty-first monitor stage could be switched over to any of the forty memory stages, allowing the operator to inspect the contents of the memory to see how a computation was progressing—or why it had unexpectedly stopped. This was later augmented by a separate seven-inch cathode-ray tube serving as a 7,000-points-per-second graphical display.
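
  A rough sketch of the addressing implied by that 32 × 32 array: a 10-bit word address splits into two 5-bit deflection coordinates, one selecting the column and one the row of a charged spot. The 5/5 split and the orientation are assumptions made only for illustration; the actual IAS deflection circuitry is not reproduced here.

```python
# Hypothetical sketch of addressing a 32 x 32 Williams-tube array:
# a 10-bit word address split into two 5-bit deflection coordinates.
# The 5/5 split is an assumption for illustration, not a detail taken
# from the IAS engineering reports.

def address_to_spot(address: int) -> tuple[int, int]:
    """Return the (x, y) position of the charged spot for a word address."""
    if not 0 <= address < 1024:       # 1,024 words of memory
        raise ValueError("address must fit in 10 bits")
    x = address & 0b11111             # low 5 bits: horizontal deflection
    y = (address >> 5) & 0b11111      # high 5 bits: vertical deflection
    return x, y

print(address_to_spot(0))      # (0, 0)
print(address_to_spot(1023))   # (31, 31)
```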

  All forty memory tubes had to work perfectly at the same time. Data were processed in parallel (not parallel processing as the term is used today) by operating on all the digits of a 40-bit word at once. The 40 bits represented either a number or a pair of 20-bit instructions, of which 10 bits designated the order and 10 bits a memory address. Each of the 40 bits making up a word was assigned the same position in a different Williams tube, an addressing scheme analogous to handing out similar room numbers in a forty-floor hotel. The forty Williams tubes were controlled in unison, like a bank of TV sets tuned to the same channel for display. This made the computer forty times as fast as a serial processor, but, in the opinion of numerous skeptics, unlikely to work without one small thing or another always going wrong. “The rig can be viewed as a big tube test rack,” observed Bigelow, and it is remarkable that between the forty Williams tubes and twenty-six hundred other vacuum tube envelopes, the machine eventually worked more than 75 percent of the time.37
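
  The word format and the bit-slicing across the forty tubes can be pictured in a short sketch, assuming the layout just described (two 20-bit instructions per 40-bit word, each with a 10-bit order and a 10-bit address); the exact bit ordering used by the IAS coders is an assumption here.

```python
# Minimal sketch of the 40-bit IAS word, assuming the layout described above:
# two 20-bit instructions per word, each carrying a 10-bit order and a
# 10-bit memory address. Which half comes first, and the bit numbering,
# are assumptions made for illustration only.

def unpack_word(word: int) -> list[tuple[int, int]]:
    """Split a 40-bit word into two (order, address) instruction pairs."""
    assert 0 <= word < 2**40
    left, right = word >> 20, word & 0xFFFFF        # two 20-bit halves
    return [((half >> 10) & 0x3FF, half & 0x3FF)    # 10-bit order, 10-bit address
            for half in (left, right)]

def bit_slices(word: int) -> list[int]:
    """Distribute a word across 40 tubes: tube i stores bit i of every word."""
    return [(word >> i) & 1 for i in range(40)]

word = (0b0000000001 << 30) | (0b0000001010 << 20)  # one sample instruction
print(unpack_word(word))        # [(1, 10), (0, 0)]
print(bit_slices(word)[30])     # tube 30 holds bit 30 of this word: 1
```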

  When Pomerene achieved a thirty-four-hour error-free test of a two-stage memory on July 28–29, 1949, the team knew their greatest obstacle had been overcome. The rest of the computer could be built from standard components whose behavior was, for the most part, known. The arithmetic unit was kept as simple as possible: an accumulator, two shift registers, an adder, and a digit resolver. The core of the computer was essentially a very fast (thirty-one microsecond) adding machine. As Thomas Hobbes had pointed out in 1651, from simple addition (and the addition of a binary complement, which equals subtraction) one can, by careful bookkeeping, construct everything else. All the bits, represented by delicately balanced pulses of electrons, were forced to march cautiously, one step at a time. “Information was first locked in the sending toggle; then gating made it common to both sender and receiver, and then when securely in both, the sender could be cleared,” Bigelow explained. “Information was never ‘volatile’ in transit; it was as secure as an acrophobic inchworm on the crest of a sequoia.”38 There was no floating-point arithmetic. The prospect was considered but rejected as not essential at the time. The programmer had to guess where the most significant digit ended up and test accordingly to “bring it back into focus” as the computation moved along. There were twenty basic instructions, with forty-four order codes. “During the spring of 1951, the machine became increasingly available for use, and programmers were putting their programs on for exploratory runs,” said Bigelow. “The machine error rate had become low enough so that most of the errors found were in their own work.”39
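
  Hobbes’s observation, that subtraction reduces to the addition of a complement, can be made concrete with a brief worked sketch. The 40-bit word length is the machine’s; the complement arithmetic shown is a generic illustration, not a reconstruction of the IAS arithmetic unit.

```python
# Illustration of subtraction by complement addition on a 40-bit word,
# the principle the text attributes to Hobbes. This is a generic
# two's-complement sketch, not a reconstruction of the IAS order code.

WORD = 40
MASK = (1 << WORD) - 1

def complement(x: int) -> int:
    """Two's complement of x within a 40-bit word."""
    return (~x + 1) & MASK

def subtract(a: int, b: int) -> int:
    """Compute a - b using only addition and complementation."""
    return (a + complement(b)) & MASK

print(subtract(1000, 42))        # 958
print(subtract(0, 1) == MASK)    # True: -1 wraps around to all ones
```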

  The original input and output to the computer was via five-hole paper teletypewriter tape, fed through a customized interface dubbed the “inscriber” and the “outscriber.” It took almost thirty minutes to load 1,024 words, one register at a time, into the memory of the machine. After a few months of operation, a standard IBM 516 reproducing punch (designed to read and write 12-bit columns) was rewired to read 40 bits in parallel (every other punch position in an eighty-column row), allowing the memory to be filled in five minutes or less. Output could be punched at one hundred cards per minute, allowing a skilled operator “to interpret the perforations visually and so diagnose what was happening to his computation while away from the machine.”40 IBM’s policy at the time allowed no customer modifications to its equipment. The exception granted to the Institute had consequences that were hardly envisioned at the time. The jury-rigged hybrid demonstrated at the Institute led directly to commercial production of the IBM 701, helping to secure leadership of the electronic data-processing industry for IBM.
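
  The rewired card format described above can be pictured roughly as follows; which alternate punch positions carried the forty bits is an assumption made only for illustration.

```python
# Hypothetical sketch of the rewired IBM 516 card format described above:
# the 40 bits of a word spread across every other punch position in an
# 80-column row. Whether the even or the odd columns were used is an
# assumption; the mapping below is illustrative only.

def word_to_row(word: int) -> list[int]:
    """Return an 80-column row with bit i of the word punched in column 2*i."""
    row = [0] * 80
    for i in range(40):
        row[2 * i] = (word >> i) & 1
    return row

def row_to_word(row: list[int]) -> int:
    """Recover the 40-bit word from the punched row."""
    return sum(row[2 * i] << i for i in range(40))

w = 0b1011
assert row_to_word(word_to_row(w)) == w   # round trip preserves the word
```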

  Von Neumann circulated at the highest levels of the scientific and political establishment. Largely through his influence, the project was duplicated rapidly around the world. In the race to build working computers, the “few more months” that always remained until a particular machine would be up and running became known as the “von Neumann constant.” It was the challenge of beating this famous constant, and the advantage of following rather than breaking the engineering trail, that led several groups—at the University of Illinois, the Bureau of Standards, Argonne National Laboratory, and Los Alamos—to get their machines running ahead of the official dedication ceremony (10 June 1952) at the IAS. “Many of us who are in the course of making copies of the IAS machine have a tendency to emphasize our deviations and forget the tremendous debt that we owe Julian Bigelow and others at the Institute,” admitted William F. Gunning of RAND in 1953. “The fact that so many of us have been able to make an arithmetic unit that works when first plugged in . . . is proof enough.”41

  A constant stream of brilliant individuals—from distinguished scientists to otherwise unknowns—appeared in Princeton to run their problems on the IAS machine. For this the Institute was ideal. The administration was flexible, intimate, and spontaneous. The computer project operated on a shoestring compared to other laboratories but was never short of funds. The facilities were designed to accommodate visitors for a day, a month, or a year, and the resources of Princeton University were close at hand. There were no indigenous computer scientists monopolizing time on the machine, although a permanent IAS meteorological group under Jule Charney ran their simulations regularly and precedence was still granted to the occasional calculation for a bomb. “My experience is that outsiders are more likely to use the machine on important problems than is the intimate, closed circle of friends,” recalled Richard Hamming, looking back on the early years of computing in the United States.42

  The machine was duplicated, but von Neumann remained unique. His insights permeated everything that ran on the computer, from the coding of Navier-Stokes equations for compressible fluids to S. Y. Wong’s simulation of traffic flow (and traffic jams) to the compilation of a historical ephemeris of astronomical positions covering the six hundred years leading up to the birth of Christ. “Quite often the likelihood of getting actual numerical results was very much larger if he was not in the computer room, because everybody got so nervous when he was there,” reported Martin Schwarzschild. “But when you were in real thinking trouble, you would go to von Neumann and nobody else.”43

  Von Neumann’s reputation, after fifty years, has been injured less by his critics than by his own success. The astounding proliferation of the von Neumann architecture has obscured von Neumann’s contributions to massively parallel computing, distributed information processing, evolutionary computation, and neural nets. Because his deathbed notes for his canceled Silliman lectures at Yale were published posthumously (and for a popular audience) as The Computer and the Brain (1958), von Neumann’s work has been associated with the claims of those who were exaggerating the analogies between the digital computer and the brain. Von Neumann, on the contrary, was preoccupied with explaining the differences. How could a mechanism composed of some ten billion unreliable components function reliably while computers with ten thousand components regularly failed?

  Von Neumann believed that entirely different logical foundations would be required to arrive at an understanding of even the simplest nervous system, let alone the human brain. His Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components (1956) explored the possibilities of parallel architecture and fault-tolerant neural nets. This approach would soon be superseded by a development that neither nature nor von Neumann had counted on: the integrated circuit, composed of logically intricate yet structurally monolithic microscopic parts. Serial architecture swept the stage. Probabilistic logics, along with vacuum tubes and acoustic delay-line memory, would scarcely be heard from again. If the development of solid-state electronics had been delayed a decade or two we might have advanced sooner rather than later into neural networks, parallel architectures, asynchronous processing, and other mechanisms by which nature, with sloppy hardware, achieves reliable results.
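
  One technique treated in that paper, taking a majority vote over redundant and individually unreliable lines, can be illustrated with a small numerical sketch; the failure probabilities used here are illustrative assumptions, not figures from von Neumann’s analysis.

```python
# Simplified numerical sketch of majority voting over redundant components,
# one idea explored in von Neumann's 1956 paper. The per-component error
# rate of 1 percent is an assumed figure for illustration only.

from math import comb

def majority_error(p: float, n: int) -> float:
    """Probability that a majority of n independent copies are wrong,
    when each copy is wrong with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

for n in (1, 3, 9, 27):
    print(n, majority_error(0.01, n))
# the error per decision falls rapidly as the number of copies grows
```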

  Von Neumann was as reticent as Turing was outspoken on the question of whether machines could think. Edmund C. Berkeley, in his otherwise factual and informative 1949 survey, Giant Brains, captured the mood of the time with his declaration that “a machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”44 Von Neumann never subscribed to this mistake. He saw digital computers as mathematical tools. That they were members of a more general class of automata that included nervous systems and brains did not imply that they could think. He rarely discussed artificial intelligence. Having built one computer, he became less interested in the question of whether such machines could learn to think and more interested in the question of whether such machines could learn to reproduce.

  “‘Complication’ on its lower levels is probably degenerative, that is, that every automaton that can produce other automata will only be able to produce less complicated ones,” he noted in 1948. “There is, however, a certain minimal level where this degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible.”45 Millions of very large scale integrated circuits, following in the footsteps of the IAS design but traced in silicon at micron scale, are now replicated daily from computer-generated patterns by computer-operated tools. The newborn circuits, hidden in clean rooms and twenty-four-hour-a-day “fabs,” where the few humans present wear protective suits for the protection of the machines, are the offspring of von Neumann’s Theory of Self-Reproducing Automata. Just as predicted, these machines are growing more complicated from one generation to the next. None of these devices, although executing increasingly intelligent code, will ever become a brain. But collectively they might.

 
