Darwin Among the Machines


by George B. Dyson


  Von Neumann’s Silliman lecture notes gave “merely the barest sketches of what he planned to think about,” noted Stan Ulam in 1976. “He died so prematurely, seeing the promised land but hardly entering it.”46 Von Neumann may have envisaged a more direct path toward artificial intelligence than the restrictions of the historic von Neumann architecture suggest. High-speed electronic switching allows computers to explore alternatives thousands or even millions of times faster than biological neurons, but this power pales in comparison with the combinatorial abilities of the billions of neurons and uncounted synapses that constitute a brain. Von Neumann knew that a structure vastly more complicated, flexible, and unpredictable than a computer was required before any electrons might leap the wide and fuzzy distinction between arithmetic and mind. Fifty years later, digital computers remain rats running two-dimensional mazes at basement level below the foundations of mind.

  As a practicing mathematician and an armchair engineer, von Neumann knew that something as complicated as a brain could never be designed; it would have to be evolved. To build an artificial brain, you have to grow a matrix of artificial neurons first. In 1948, at the Hixon Symposium on Cerebral Mechanisms in Behavior, von Neumann pointed out in response to Warren S. McCulloch that “parts of the organism can act antagonistically to each other, and in evolution it sometimes has more the character of a hostile invasion than of evolution proper. I believe that these things have something to do with each other.” He then described how a primary machine could be used to exploit certain tendencies toward self-organization among a large number of intercommunicating secondary machines. He believed that selective evolution (via mechanisms similar to economic competition) of incomprehensibly complex processes among the secondary machines could lead to the appearance of comprehensible behavior at the level of the primary machine.

  “If you come to such a principle of construction,” continued von Neumann, “all that you need to plan and understand in detail is the primary automaton, and what you must furnish to it is a rather vaguely defined matrix of units; for instance, 10¹⁰ neurons which swim around in the cortex. . . . If you do not separate them . . . then, I think that it is achievable that the thing can be watched by the primary automaton and be continuously reorganized when the need arises. I think that if the primary automaton functions in parallel, if it has various parts which may have to act simultaneously and independently on separate features, you may even get symptoms of conflict . . . and, if you concentrate on marginal effects, you may observe the ensuing ambiguities. . . . Especially when you go to much higher levels of complexity, it is not unreasonable to expect symptoms of this kind.”47 The “symptoms of this kind” with which von Neumann and his audience of neurologists were concerned were the higher-order “ensuing ambiguities” that somehow bind the ingredients of logic and arithmetic into the cathedral perceived as mind.

  Von Neumann observed, in 1948, that information theory and thermodynamics exhibited parallels that would grow deeper as the two subjects were mathematically explored. In the last years of his foreshortened life, von Neumann began to theorize about the behavior of populations of communicating automata, a region in which the parallels with thermodynamics—and hydrodynamics—begin to flow both ways. “Many problems which do not prima facie appear to be hydrodynamical necessitate the solution of hydrodynamical questions or lead to calculations of the hydrodynamical type,” von Neumann had written in 1945. “It is only natural that this should be so.”48

  Lewis Richardson’s sphere of 64,000 mathematicians would not only model the large-scale turbulence of the atmosphere, they might, if they calculated and communicated fast enough, acquire an atmosphere of turbulence of their own. As self-sustaining vortices arise spontaneously in a moving fluid when velocity outweighs viscosity by a ratio to which Osborne Reynolds gave his name, so self-sustaining currents may arise in a computational medium when the flow of information among its individual components exceeds the computational viscosity by a ratio that John von Neumann, unfortunately, did not live long enough to define.
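  For the record, the dimensionless ratio that bears Reynolds’s name is conventionally written

\[
\mathrm{Re} \;=\; \frac{\rho v L}{\mu} \;=\; \frac{vL}{\nu},
\]

where v is a characteristic flow velocity, L a characteristic length, ρ the fluid’s density, μ its dynamic viscosity, and ν = μ/ρ the kinematic viscosity; flow typically turns turbulent once Re exceeds a geometry-dependent critical value (on the order of a few thousand for flow in a pipe). What the corresponding computational quantities would be remains, as noted above, undefined.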

  7

  SYMBIOGENESIS

  Instead of sending a picture of a cat, there is one area in which we can send the cat itself.

  —MARVIN MINSKY1

  “During the summer of 1951,” according to Julian Bigelow, “a team of scientists from Los Alamos came and put a large thermonuclear calculation on the IAS machine; it ran for 24 hours without interruption for a period of about 60 days, many of the intermediate results being checked by duplicate runs, and throughout this period only about half a dozen errors were disclosed. The engineering group split up into teams and was in full-time attendance and ran diagnostic and test routines a few times per day, but had little else to do. So it had come alive.”2 The age of digital computers dawned over the New Jersey countryside while a series of thermonuclear explosions, led by the MIKE test at Eniwetok Atoll on 1 November 1952, corroborated the numerical results.

  The new computer was used to explore ways of spawning as well as destroying life. Nils Aall Barricelli (1912–1993)—a mathematical biologist who believed that “genes were originally independent, virus-like organisms which by symbiotic association formed more complex units”—arrived at the institute in 1953 to investigate the role of symbiogenesis in the origin of life. “A series of numerical experiments are being made with the aim of verifying the possibility of an evolution similar to that of living organisms taking place in an artificially created universe,” he announced in the Electronic Computer Project’s Monthly Progress Report for March.

  The theory of symbiogenesis was introduced in 1909 by Russian botanist Konstantin S. Merezhkovsky (1855–1921) and expanded by Boris M. Kozo-Polyansky (1890–1957) in 1924.3 “So many new facts arose from cytology, biochemistry, and physiology, especially of lower organisms,” wrote Merezhkovsky in 1909, “that [in] an attempt once again to raise the curtain on the mysterious origin of organisms . . . I have decided to undertake . . . a new theory on the origin of organisms, which, in view of the fact that the phenomenon of symbiosis plays a leading role in it, I propose to name the theory of symbiogenesis.”4 Symbiogenesis offered a controversial adjunct to Darwinism, ascribing the complexity of living organisms to a succession of symbiotic associations between simpler living forms. Lichens, a symbiosis between algae and fungi, sustained life in the otherwise barren Russian north; it was only natural that Russian botanists and cytologists took the lead in symbiosis research. Taking root in Russian scientific literature, Merezhkovsky’s ideas were elsewhere either ignored or declared unsound, most prominently by Edmund B. Wilson’s dismissal of symbiogenesis as “an entertaining fantasy . . . that the dualism of the cell in respect to nuclear and cytoplasmic substance resulted from the symbiotic association of two types of primordial microorganisms, that were originally distinct.”5

  Merezhkovsky viewed both plant and animal life as the result of a combination of two plasms: mycoplasm, represented by bacteria, fungi, blue-green algae, and cellular organelles; and amoeboplasm, represented by certain “monera without nuclei” that formed the nonnucleated material at the basis of what we now term eukaryotic cells. Merezhkovsky believed that mycoids came first. When they were eaten by later-developing amoeboids they learned to become nuclei rather than lunch. It is equally plausible that amoeboids came first, with mycoids developing as parasites later incorporated symbiotically into their hosts. The theory of two plasms undoubtedly contains a germ of truth, whether the details are correct or not. Merezhkovsky’s two plasms of biology were mirrored in the IAS experiments by embryonic traces of the two plasms of computer technology—hardware and software—that were just beginning to coalesce.

  The theory of symbiogenesis assumes that the most probable explanation for improbably complex structures (living or otherwise) lies in the association of less complicated parts. Sentences are easier to construct by combining words than by combining letters. Sentences then combine into paragraphs, paragraphs combine into chapters, and, eventually, chapters combine to form a book—highly improbable, but vastly more probable than the chance of arriving at a book by searching the space of possible combinations at the level of letters or words. It was apparent to Merezhkovsky and Kozo-Polyansky that life represents the culmination of a succession of coalitions between simpler organisms, ultimately descended from not-quite-living component parts. Eukaryotic cells are riddled with evidence of symbiotic origins, a view that has been restored to respectability by Lynn Margulis in recent years. But microbiologists arrived too late to witness the symbiotic formation of living cells.
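  To attach assumed, purely illustrative numbers to this argument: a ten-word sentence drawn from a vocabulary of 10,000 words lives in a search space of

\[
10{,}000^{10} = 10^{40}
\]

possible strings, while the same sentence spelled out as roughly sixty letters lives in a space of \(26^{60} \approx 10^{85}\) strings. Searching at the level of words rather than letters prunes the space by some forty-five orders of magnitude, and each further level of combination (words into sentences, sentences into paragraphs) prunes it again.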

  Barricelli enlarged on the theory of cellular symbiogenesis, formulating a more general theory of “symbioorganisms,” defined as any “self-reproducing structure constructed by symbiotic association of several self-reproducing entities of any kind.”6 Extending the concept beyond familiar (terrestrial) and unfamiliar (extraterrestrial) chemistries in which populations of self-reproducing molecules might develop by autocatalytic means, Barricelli applied the same logic to self-reproducing patterns of any nature in space or time—such as might be represented by a subset of the 40,960 bits of information, shifting from microsecond to microsecond within the memory of the new machine at the IAS. “The distinction between an evolution experiment performed by numbers in a computer or by nucleotides in a chemical laboratory is a rather subtle one,” he observed.7

  Barricelli saw the IAS computer as a means of introducing self-reproducing structures into an empty universe and observing the results. “The Darwinian idea that evolution takes place by random hereditary changes and selection has from the beginning been handicapped by the fact that no proper test had been found to decide whether such evolution was possible and how it would develop under controlled conditions,” he reported in a review of the experiments performed at the IAS. “A test using living organisms in rapid evolution (viruses or bacteria) would have the serious drawback that the causes of adaptation or evolution would be difficult to state unequivocally, and Lamarckian or other kinds of interpretation would be difficult to exclude.” Reproduction plus evolution, however, does not necessarily equal life. In his earliest account of the first round of IAS experiments, submitted to the journal Methodos in 1953 and published (in Italian) in 1954, he cautioned his readers that “a question that might embarrass the optimists is the following: ‘If it’s that easy to create living organisms, why don’t you create a few yourself?’”8

  After forty-three years, Barricelli’s experiments appear as archaic as Galileo’s first attempt at a telescope—less powerful than half a pair of cheap binoculars—although Galileo’s salary was doubled by the Venetian Senate in 1609 as a reward. The two Italians compensated for their primitive instruments with vision that was clear. Barricelli tailored his universe to fit within the limited storage capacity of the IAS computer’s forty Williams tubes: a total of one two-hundredth of a megabyte, in the units we use today. Operating systems and programming languages did not yet exist. “People had to essentially program their problems in ‘absolute,’” James Pomerene explained, recalling early programming at the IAS, when every single instruction had to be hand-coded to refer to an absolute memory address. “In other words, you had to come to terms with the machine and the machine had to come to terms with you.”9

  Working directly in binary machine instruction code, Barricelli constructed a cyclical universe of 512 cells, each cell occupied by a number (or the absence of a number) encoded by 8 bits. Simple rules that Barricelli referred to as “norms” governed the propagation of numbers (or “genes”), a new generation appearing as if by metamorphosis after the execution of a certain number of cycles by the central arithmetic unit of the machine. These reproduction laws were configured “to make possible the reproduction of a gene only when other different genes are present, thus necessitating symbiosis between different genes.”10 The laws were concise, ordaining only that each number shift to a new location (in the next generation) determined by the location and value of certain genes in the current generation. Genes depended on each other for survival, and cooperation (or parasitism) was rewarded with success. A secondary level of norms (the “mutation rules”) governed what to do when two or more different genes collided in one location, the character of these rules proving to have a marked effect on the evolution of the gene universe as a whole. Barricelli played God, on a very small scale.
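  Barricelli’s exact norms survive only in his papers and the IAS progress reports, but their flavor can be conveyed in a few lines of modern code. The Python sketch below is a loose reconstruction under assumed rules: the shift norm, the symbiosis condition, and the collision handling are simplifications chosen for brevity, not Barricelli’s 1953 norms.

    import random

    SIZE = 512  # cells in the cyclical universe, as in the IAS runs

    def step(world):
        """One generation under assumed Barricelli-style norms."""
        claims = {}  # target cell -> genes claiming it next generation

        def claim(pos, gene):
            claims.setdefault(pos % SIZE, []).append(gene)

        for i, gene in enumerate(world):
            if gene is None:
                continue
            # Shift norm (assumed): a gene moves by its own value.
            target = (i + gene) % SIZE
            claim(target, gene)
            # Symbiosis norm (assumed): only when a *different* gene
            # occupies the target cell does this gene reproduce a
            # second copy, shifted again by that neighbor's value.
            other = world[target]
            if other is not None and other != gene:
                claim(target + other, gene)

        new = [None] * SIZE
        for pos, genes in claims.items():
            if len(set(genes)) == 1:
                new[pos] = genes[0]
            # Mutation norm (assumed): different genes colliding leave
            # the cell empty; Barricelli tried several such rules.
        return new

Note that the symbiosis condition does what the text describes: a lone gene can copy itself only once per generation, while a gene embedded among different neighbors reproduces faster, so survival depends on cooperation.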

  The empty universe was inoculated with random numbers generated by drawing playing cards from a shuffled deck. Robust and self-reproducing numerical coalitions (patterns loosely interpreted as “organisms”) managed to evolve. “We have created a class of numbers which are able to reproduce and to undergo hereditary changes,” Barricelli announced. “The conditions for an evolution process according to the principle of Darwin’s theory would appear to be present. The numbers which have the greatest survival in the environment . . . will survive. The other numbers will be eliminated little by little. A process of adaptation to the environmental conditions, that is, a process of Darwinian evolution, will take place.”11 Over thousands of generations, Barricelli observed a succession of “biophenomena,” such as successful crossing between parent symbioorganisms and cooperative self-repair of damage when digits were removed at random from an individual organism’s genes.
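  Continuing the sketch above: the seed, seeding density, and run length below are arbitrary stand-ins for Barricelli’s shuffled deck, not his actual protocol.

    random.seed(1953)
    # Inoculate roughly an eighth of the cells with nonzero 8-bit genes.
    world = [(random.randrange(-128, 128) or 1) if random.random() < 0.125
             else None for _ in range(SIZE)]

    for gen in range(101):
        if gen % 20 == 0:
            occupied = [g for g in world if g is not None]
            print(f"gen {gen:3d}: {len(occupied)} occupied cells, "
                  f"{len(set(occupied))} distinct genes")
        world = step(world)

Depending on the assumed norms, a run of this kind either collapses to emptiness or settles into a few persistent, mutually dependent gene patterns.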

  The experiments were plagued by problems associated with more familiar forms of life: parasites, natural disasters, and stagnation when there were no environmental challenges or surviving competitors against which organisms could exercise their ability to evolve. To control the parasites that infested the initial series of experiments in 1953, Barricelli instituted modified shift norms that prevented parasitic organisms (especially single-gened parasites) from reproducing more than once per generation, thereby closing a loophole through which they had managed to overwhelm more complex organisms and bring evolution to a halt. “Deprived of the advantage of a more rapid reproduction, the most primitive parasites can hardly compete with the more evolved and better organized species . . . and what in other conditions could be a dangerous one-gene parasite may in this region develop into a harmless or useful symbiotic gene.”12

  Barricelli discovered that evolutionary progress was achieved not so much through chance mutation as through sex. Gene transfers and crossing between numerical organisms were strongly associated with both adaptive and competitive success. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed.”13 Echoing the question that Samuel Butler had asked seventy years earlier in Luck, or Cunning? Barricelli concluded that “mutation and selection alone, however, proved insufficient to explain evolutionary phenomena.”14 He credited symbiogenesis with accelerating the evolutionary process and saw “sexual reproduction [as] the result of an adaptive improvement of the original ability of the genes to change host organisms and recombine.”15 Symbiogenesis leads to parallel processing of genetic code, both within an individual multicellular organism and across the species as a whole. Given that nature allows a plenitude of processors but a limited amount of time, parallel processing allows a more efficient search for those sequences that move the individual, and the species, ahead.

  Efficient search is what intelligence is all about. “Even though biologic evolution is based on random mutations, crossing and selection, it is not a blind trial-and-error process,” explained Barricelli in a later retrospective of his numerical evolution work. “The hereditary material of all individuals composing a species is organized by a rigorous pattern of hereditary rules into a collective intelligence mechanism whose function is to assure maximum speed and efficiency in the solution of all sorts of new problems . . . and the ability to solve problems is the primary element of intelligence which is used in all intelligence tests. . . . Judging by the achievements in the biological world, that is quite intelligent indeed.”16

  A century after On the Origin of Species pitted Charles Darwin and Thomas Huxley against Bishop Wilberforce, there was still no room for compromise between the trial and error of Darwin’s natural selection and the supernatural intelligence of a theological argument from design. Samuel Butler’s discredited claims of species-level intelligences—neither the chance success of a blind watchmaker nor the predetermined plan of an all-knowing God—were reintroduced by Barricelli, who claimed to detect faint traces of this intelligence in the behavior of pure, self-reproducing numbers, just as viruses were first detected by biologists examining fluids from which they had filtered out all previously identified living forms.

  The evolution of digital symbioorganisms took less time to happen than to describe. “Even in the very limited memory of a high speed computer a large number of symbioorganisms can arise by chance in a few seconds,” Barricelli reported. “It is only a matter of minutes before all the biophenomena described can be observed.”17 The digital universe had to be delicately adjusted so that evolutionary processes were not immobilized by dead ends. Scattered among the foothills of the evolutionary fitness landscape were local maxima from which “it is impossible to change only one gene without getting weaker organisms.” In a closed universe inhabited by simple organisms, the only escape to higher ground was by exchanging genes with different organisms or by local shifting of the rules. “Only replacements of at least two genes can lead from a relative maximum of fitness to another organism with greater vitality,”18 noted Barricelli, who found that the best solution to these problems (besides the invention of sex) was to build a degree of diversity into the universe itself.
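  A toy illustration of such a trap, with invented numbers: suppose vitality depends on a pair of genes, and the fitness table is

    # Invented two-gene fitness table illustrating a local maximum.
    fitness = {
        ('a', 'b'): 3,                 # local maximum
        ('a', 'd'): 1, ('c', 'b'): 1,  # every one-gene change is downhill
        ('c', 'd'): 5,                 # higher peak, reachable only by
    }                                  # replacing both genes at once

An organism at ('a', 'b') can never reach ('c', 'd') by single-gene replacements, since each intermediate is weaker; only a two-gene exchange, of the kind crossing provides, makes the jump.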

 
