Seek!: Selected Nonfiction


by Rudy Rucker


  Meaningless proliferation and utter stagnancy are the only alternatives to death. Although death is individually terrible, it is wonderful for the evolution of new kinds of life.

  40. Actually the phrase "piled many hundreds of meters deep" in this paragraph is an extreme understatement. Lying awake last night, I calculated that the immortal dinosaurs would fill all known space! To make it easier, I worked my example out in terms of people instead of dinosaurs. I claim that if one immortal couple had emerged around 5800 BC, their immortal descendants would now fill all space. For those who like playing with numbers, here's my calculation. Suppose that each person, on the average, produces a new person every thirty years. So if nobody dies, but everyone keeps on breeding, then the number of people will double every thirty years. If you start with exactly two immortals, there will be roughly 2 to the Nth power immortals after 30*N years. One estimate is that the universe has the same size as a cube that is ten billion (or ten to the 10th) light-years per edge. A light-year is about ten trillion kilometers, or ten to the 16th meters, so the universe is a cube ten to the 26th meters per edge. Cubing ten to the 26th gives ten to the 3×26th, or ten to the 78th. Suppose that a person takes up a cubic meter of space. How many years would be needed to fill the universe with ten to the 78th immortal people? Well, for what value of N is 2 to the Nth power bigger than ten to the 78th? A commonly used computer science fact is that two to the 10th, known as a K, is almost equal to a thousand, which is ten cubed. Now ten to the 78th is ten to the 3×26th, which is one thousand to the 26th, which is about one K to the 26th, which is two to the 10×26th, which is two to the 260th. That means it would take about 260 generations for the immortal humans to fill up the universe. At 30 years per generation, that makes 7,800 years. 5800 BC was a time when people were giving up being hunter-gatherers and were learning to farm; by comparison, Sumeria flourished around 4000 BC and the Early Period of Ancient Egypt was around 3000 BC. So if two of those way early farmers had mastered immortality, the whole universe would be stuffed with their descendants!
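  For readers who would rather let a machine check the arithmetic, here is a small Python sketch of the same calculation. It assumes nothing beyond the footnote's own rough figures - a thirty-year doubling time, a one-cubic-meter person, and a cube-shaped universe ten to the 26th meters on a side.

    # Sanity check for footnote 40: how long until an immortal, ever-doubling
    # population fills the observable universe?  All figures are the rough
    # estimates used in the footnote, not precise measurements.

    import math

    DOUBLING_TIME_YEARS = 30            # one new person per person every 30 years
    EDGE_METERS = 10**26                # a cube 10^10 light-years (about 10^26 m) on a side
    VOLUME_CUBIC_METERS = EDGE_METERS**3    # 10^78 cubic meters
    PERSON_VOLUME = 1                   # assume each person occupies one cubic meter

    people_needed = VOLUME_CUBIC_METERS // PERSON_VOLUME

    # Smallest N with 2^N at least 10^78.
    doublings = math.ceil(math.log2(people_needed))
    years = doublings * DOUBLING_TIME_YEARS

    print("doublings needed:", doublings)   # about 260
    print("years needed:    ", years)       # about 7,800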


  A Belousov-Zhabotinsky pattern in a cellular automaton.

  (Generated by CAPOW)

  Evolution is possible whenever one has (1) reproduction, (2) genome variation, and (3) natural selection. We've already talked about reproduction and the way in which mating and mutation cause genome variation - so that children are not necessarily just like their parents. Natural selection is where death comes in: not every creature is in fact able to reproduce itself before it dies. The creatures which do reproduce have genomes which are selected by the natural process of competing to stay alive and to bear children which survive.

  What this means in terms of computer A-life is that one ordinarily has some maximum number of memory slots for creatures' genomes. One lets the phenomes of the creatures compete for a while and then uses some kind of fitness function to decide which creatures are the most successful. The most successful creatures are reproduced onto the existing memory slots, and the genomes of the least successful creatures are erased.

  Nature has a very simple way of determining a creature's fitness: either it manages to reproduce before death or it doesn't. Assigning a fitness level to competing A-life phenomes is a more artificial process. Various kinds of fitness functions can be chosen on the basis of what kinds of creatures one wants to see evolve. In most of the experiments I've worked on, the fitness is based on the creatures' ability to find and eat food cells, as well as to avoid "predators" and to get near "prey."
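  To make the slot-based scheme concrete, here is a minimal Python sketch of one such selection loop. The genome format and the fitness function are invented for illustration - a real experiment would score the phenomes on finding food and dodging predators rather than on a simple sum of genes.

    # Minimal sketch of the A-life selection scheme described above: a fixed
    # number of genome slots, a fitness score for each creature, and a
    # generation step that copies winners over losers, with mating and mutation.

    import random

    NUM_SLOTS = 20          # fixed number of memory slots for genomes
    GENOME_LENGTH = 8       # each genome is just a list of numeric "genes"
    MUTATION_RATE = 0.1

    def random_genome():
        return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

    def fitness(genome):
        # Stand-in for "how well this phenome found food and dodged predators."
        return sum(genome)

    def crossover(mom, dad):
        # Mating: the child's genome is spliced together from both parents.
        cut = random.randrange(1, GENOME_LENGTH)
        return mom[:cut] + dad[cut:]

    def mutate(genome):
        # Small random copying errors keep new variation flowing in.
        return [g + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(NUM_SLOTS)]

    for generation in range(100):
        ranked = sorted(population, key=fitness, reverse=True)
        winners = ranked[: NUM_SLOTS // 2]
        # The least successful creatures' slots are overwritten by children
        # bred from the winners.
        children = [mutate(crossover(random.choice(winners), random.choice(winners)))
                    for _ in range(NUM_SLOTS - len(winners))]
        population = winners + children

    best = max(population, key=fitness)
    print("best fitness after 100 generations:", round(fitness(best), 2))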

  So far in this essay we've talked about life in terms of three general concepts: gnarl, sex, and death. Computer A-life research involves trying to find computer programs which are gnarly, which breed, and which compete to stay alive. Now let's look at some non-computer approaches to artificial life.

  Biological A-Life

  In this section, we first talk about Frankenstein, and then we talk about modern biochemistry.

  Frankenstein. The most popular fictional character who tries to create life is Victor Frankenstein, the protagonist of Mary Shelley's 1818 novel Frankenstein; or, The Modern Prometheus.

  Most of us know about Frankenstein from the movie versions of the story. In the movies, Dr. Frankenstein creates a living man by sewing together parts of dead bodies and galvanizing the result with electricity from a thunderstorm. The original version is quite different.

  In Mary Shelley's novel, Victor Frankenstein is a student with a deep interest in chemistry. He becomes curious about what causes life, and he pursues this question by closely examining how things die and decay - the idea being that if you can understand how life leaves matter, you can understand how to put it back in. Victor spends days and nights in "vaults and charnel-houses," until finally he believes he has learned how to bring dead flesh back to life. He sets to work building the Frankenstein monster:

  In a solitary chamber . . . I kept my workshop of filthy creation: my eyeballs were starting from their sockets in attending to the details of my employment. The dissecting room and the slaughter-house furnished many of my materials; and often did my human nature turn with loathing from my occupation . . . Who shall conceive the horrors of my secret toil, as I dabbled among the unhallowed damps of the grave, or tortured the living animal to animate the lifeless clay?

  Finally Dr. Frankenstein reaches his goal:


  It was on a dreary night of November that I beheld the accomplishment of my toils. With an anxiety that almost amounted to agony, I collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet. It was already one in the morning; the rain pattered dismally against the panes, and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs . . . The beauty of the dream vanished, and breathless horror and disgust filled my heart.

  The creepy, slithery aspect of Frankenstein stems from the fact that Mary Shelley situated Victor Frankenstein's A-life researches at the tail-end of life, at the part where a living creature dissolves back into a random mush of chemicals. In point of fact, this is not a good way to understand life - the processes of decay are not readily reversible.

  Biochemistry. Contemporary A-life biochemists focus on the way in which life keeps itself going. Organic life is a process, a skein of biochemical reactions that is in some ways like a parallel three-dimensional computation. The computation being carried out by a living body stops when the body dies, and the component parts of the body immediately begin decomposing. Unless you're Victor Frankenstein, there is no way to kick-start the reaction back into viability. It's as if turning off a computer would make its chips fall apart.

  The amazing part about real life is that it keeps itself going on its own. If anyone could build a tiny, self-guiding, flying robot, he or she would be a hero of science. But a fly can build flies just by eating garbage. Biological life is a self-organizing process, an endless round that's been chorusing along for hundreds of millions of years.

  Is there any hope of scientists being able to assemble and start up a living biological system?

  Chemists have studied complicated systems of reactions that tend to perpetuate themselves. These kinds of reactions are called autocatalytic or self-exciting. Once an autocatalytic reaction gets started up, it produces by-products which pull more and more molecules into the reaction. Often such a reaction will have a cyclical nature, in that it goes through the same sequence of steps over and over.

  The cycle of photosynthesis is a very complicated example of an autocatalytic reaction. One of the simpler examples of an autocatalytic chemical reaction is known as the Belousov-Zhabotinsky reaction in honor of the two Soviet scientists who discovered it. In the Belousov-Zhabotinsky reaction a certain acidic solution is placed into a flat glass dish with a sprinkling of palladium crystals. The active ingredient of litmus paper is added so that it is possible to see which regions of the solution are more or less acidic. In a few minutes, the dish fills with scroll-shaped waves of color which spiral around and around in a regular, but not quite predictable, manner.41

  There seems to be something universal about the Belousov-Zhabotinsky reaction, in that there are many other systems which behave in a similar way: generating endlessly spiraling scrolls. It is in fact fairly easy to set up a cellular automaton-based computer simulation that shows something like the Belousov-Zhabotinsky reaction - Zhabotinsky scrolls are something that CAs like to "do."
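  Here is one way such a simulation might look: a toy cyclic cellular automaton, a standard textbook rule (not the CAPOW program shown in the figure) in which each cell cycles through a fixed set of states and advances whenever a neighbor is one step ahead of it. Run long enough from a random start, the grid organizes itself into traveling wave fronts and spiral cores reminiscent of the Belousov-Zhabotinsky scrolls.

    # A toy cyclic cellular automaton that spontaneously organizes into
    # Zhabotinsky-style rotating waves.  Grid size, state count, and step
    # count are arbitrary choices for a quick text-mode demonstration.

    import random

    SIZE = 40        # grid is SIZE x SIZE with wrap-around edges
    STATES = 8       # each cell holds a state 0..STATES-1, like a color wheel
    STEPS = 60

    grid = [[random.randrange(STATES) for _ in range(SIZE)] for _ in range(SIZE)]

    def step(g):
        new = [row[:] for row in g]
        for y in range(SIZE):
            for x in range(SIZE):
                nxt = (g[y][x] + 1) % STATES
                # A cell advances to the next state in the cycle if any of its
                # four neighbors is already in that state; otherwise it waits.
                neighbors = [g[(y - 1) % SIZE][x], g[(y + 1) % SIZE][x],
                             g[y][(x - 1) % SIZE], g[y][(x + 1) % SIZE]]
                if nxt in neighbors:
                    new[y][x] = nxt
        return new

    for _ in range(STEPS):
        grid = step(grid)

    # Crude text rendering: given enough steps, the random soup turns into
    # traveling wave fronts and, eventually, spiral cores.
    for row in grid:
        print("".join(" .:-=+*#%"[c] for c in row))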

  As well as trying to understand the chemical reactions that take place in living things, biochemists have investigated ways of creating the chemicals used by life. In the famous 1952 Miller-Urey experiment, two scientists sealed a glass retort filled with such simple chemicals as water, methane, ammonia and hydrogen.42 The sealed vessel was equipped with electrodes that repeatedly fired off sparks - the vessel was intended to be a kind of simulation of primeval Earth with its lightning storms. After a week, it was found that a variety of amino acids had spontaneously formed inside the vessel. Amino acids are the building blocks of protein - the stuff of our phenomes - so the Miller-Urey experiment represented an impressive first step towards understanding how life on Earth emerged.

  41. One of the first accounts of the Belousov-Zhabotinsky reaction can be found in: Arthur Winfree, "Rotating Chemical Reactions," Scientific American, June 1974, pp. 82-95.

  42. The Miller-Urey experiment was first announced in: S. L. Miller and H. C. Urey, "Organic Compound Synthesis on the Primitive Earth," Science 130 (1959), p. 245.


  Biochemists have pushed this kind of thing much further in the last decades. It is now possible to design artificial strands of RNA which are capable of replicating themselves when placed into a solution of nucleotide building blocks; and one can even set a kind of RNA evolution into motion. In one recent experiment, a solution was filled with a random assortment of self-replicating RNA along with a food supply of nucleotides for the RNA to build with. Some of the molecules tended to stick to the sides of the beaker. The solution was then poured out, with the molecules that stuck to the sides of the vessel being retained. A fresh food supply was added and the cycle was repeated numerous times. The evolutionary result? RNA that adheres very firmly to the sides of the beaker.43
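  The logic of that experiment is easy to mimic in software. The sketch below is not the actual protocol - the stickiness numbers and population sizes are invented - but it shows how the pour-out-and-refeed cycle alone, with no explicit fitness function, ratchets a heritable trait upward.

    # Toy rerun of the beaker experiment: molecules with a heritable
    # "stickiness" trait, a wash step that keeps only the ones stuck to the
    # glass, and replication with small copying errors.

    import random

    POP_SIZE = 200
    ROUNDS = 20
    MUTATION = 0.05

    # Start with a random assortment of molecules; stickiness is between 0 and 1.
    molecules = [random.random() * 0.2 for _ in range(POP_SIZE)]

    for _ in range(ROUNDS):
        # Wash step: a molecule survives being poured out only if it sticks.
        stuck = [s for s in molecules if random.random() < s]
        if not stuck:
            stuck = [max(molecules)]    # keep at least one so the lineage continues
        # Feeding step: the survivors replicate (with copying errors) back up
        # to the original population size.
        molecules = [min(1.0, max(0.0, random.choice(stuck) + random.gauss(0, MUTATION)))
                     for _ in range(POP_SIZE)]

    print("average stickiness after selection:", round(sum(molecules) / POP_SIZE, 2))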

  Genetic engineers keep improving their methods for tinkering with the DNA of living cells to make organisms which are in some part artificial. Most commercially sold insulin is in fact created by gene-tailored cells. The word wetware is sometimes used to stand for the information in the genome of a biological cell. Wetware is like software, but it's in a watery living environment. The era of wetware programming has only just begun.

  Robots

  In this section we compare science fiction dreams of robots to robots as they actually exist today. We also talk a bit about how computer science techniques may help us get from today's realities to tomorrow's dreams.

  Science Fiction Robots.

  Science fiction is filled with robots that act as if they were alive. Existing robots already possess such life-like characteristics as sensitivity to the environment, movement, complexity, and integration of parts. But what about reproduction? Could you have robots which build other robots?

  43. The RNA evolution experiment is described in Gerald Joyce, "Directed Molecular Evolution," Scientific American, December, 1992. A good quote about wetware appears in Mondo 2000: A User's Guide to the New Edge, edited by R. U. Sirius, Queen Mu and me for HarperCollins, 1992. The quote is from the bioengineer Max Yukawa: "Suppose you think of an organism as being like a computer graphic that is generated from some program. Or think of an oak tree as being the output of a program that was contained inside the acorn. The genetic program is in the DNA molecule. Your software is the abstract information pattern behind your genetic code, but your actual wetware is the physical DNA in a cell."


  A robot that reproduces by (a) using a blueprint to (b) build a copy of itself, and then (c) giving the new robot a copy of the blueprint.

  (Drawing by David Povilaitis.)


  The idea is perhaps surprising at first, but there's nothing logically wrong with it. As long as a robot has an exact blueprint of how it is constructed, it can assemble the parts for child robots, and it can use a copying machine to give each child its own blueprint so that the process can continue. For a robot, the blueprint is its genome, and its body and behavior are its phenome. In practice, the robots would not use paper blueprints, but might instead use CAD/CAM (computer aided design and manufacturing) files.
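  A toy version of this blueprint-copying loop can be written in a few lines of Python. The class and the parts list below are invented for illustration; a real robot would carry CAD/CAM files and machine tools rather than a dictionary, but the genome/phenome split is the same.

    # Toy sketch of blueprint-based self-reproduction: the "genome" is an
    # explicit parts list (the blueprint), and the "phenome" is the assembled
    # robot object.

    import copy

    class Robot:
        def __init__(self, blueprint):
            self.blueprint = blueprint                # genome: how to build me
            self.parts = self.assemble(blueprint)     # phenome: the built body

        @staticmethod
        def assemble(blueprint):
            # Stand-in for actually machining parts from the specification.
            return [f"{name} x{count}" for name, count in blueprint.items()]

        def reproduce(self):
            # Steps (a)-(c) from the figure: read the blueprint, build a copy
            # of the body, and hand the child its own copy of the blueprint.
            child_blueprint = copy.deepcopy(self.blueprint)
            return Robot(child_blueprint)

    parent = Robot({"wheel": 4, "pincer": 2, "camera": 1, "cpu": 1})
    child = parent.reproduce()
    grandchild = child.reproduce()
    print(grandchild.parts)   # same parts list, three generations on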

  The notion of robot A-life interests me so much that I've written several science fiction novels about it. As will be discussed in a section below, The Hacker and the Ants talks about how one might use a virtual reality world in which to evolve robots.

  In Software, some robots are sent to the moon where they build factories to make robot parts. They compete with each other for the right to use the parts (natural selection), and then they get together in pairs (sex) to build new robots onto which parts of the parents' programs are placed (self-reproduction). Soon they rebel against human rule, and begin calling themselves boppers. Some of them travel to Earth to eat some human brains - just to get the information out of the tissues, you understand.

  In Wetware, the boppers take up genetic engineering and learn how to code bopper genomes into fertilized human eggs, which can then be force-grown to adult size in less than a month. The humans built the boppers, but now the boppers are building people - or something like people.

  At the end of Wetware, the irate humans kill off the boppers by infecting their silicon chips with a biological mold, but in Freeware, the boppers are back, with flexible plastic bodies that don't use chips anymore. The "freeware" of the title has to do with encrypted personality patterns that some aliens are sending across space in search of bodies to live in.

  In my most recent book of this series, Realware, the humans and boppers obtain a tool for creating new "realware" bodies solely from software descriptions of them.

  Real Robots.

  After such heady science fiction dreams, it's discouraging to look at today's actual robots. These machines are still sorely lacking in adaptability, which is the ability to function well in unfamiliar environments. They can't walk and chew gum at the same time.

  The architecture for most experimental robots is something like this: you put a bunch of devices in a wheeled can, wire the devices together, and hope that the whole system converges on a stable and interesting kind of behavior.

  What kind of devices go in the can? Wheels and pincers with exquisitely controllable motors, TV cameras, sonar pingers, microphones, a sound-synthesizer, and some computer microprocessors.

  The phenome is the computation and behavior of the whole system - it's what the robot does. The robot's genome is its blueprint, with all the interconnections and the switch-settings on the devices in the wheeled garbage can, and if any of those devices happens to be a computer memory chip, then the information on that chip is part of the genome as well.

  Traditionally, we have imagined robots as having one central processing unit, just as we have one central brain. But in fact a lot of our information processing is done in our nerve ganglia, and some contemporary roboticists are interested in giving a separate processor to each of a robot's devices.

  This robot design technique is known as subsumption architecture. Each of an artificial ant's legs, for instance, might know how to make walking motions on its own, and the legs might communicate with each other in an effort to get into synch. Just such an ant (named Attila) has been designed by Rodney Brooks of MIT. Brooks wants his robots to be cheap and widely available.
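  The flavor of this decentralized control is easy to capture in a toy model - not Brooks's actual code, just a sketch in which each leg runs its own little oscillator, listens to the other legs, and nudges its rhythm toward theirs until the whole set falls into step.

    # Toy model of the decentralized idea behind subsumption-style robots:
    # each leg is a simple phase oscillator with its own "processor," and the
    # only global behavior comes from the legs nudging one another into synch.

    import math
    import random

    NUM_LEGS = 6
    COUPLING = 0.3        # how strongly a leg nudges toward the others
    STEP_SIZE = 0.2       # each leg's natural forward drift per tick

    # Each leg starts at a random point in its stepping cycle (phase in radians).
    phases = [random.uniform(0, 2 * math.pi) for _ in range(NUM_LEGS)]

    def tick(phases):
        new = []
        for p in phases:
            # Advance this leg's own cycle, plus a nudge toward what the
            # other legs are doing.
            nudge = sum(math.sin(q - p) for q in phases) / NUM_LEGS
            new.append((p + STEP_SIZE + COUPLING * nudge) % (2 * math.pi))
        return new

    for _ in range(200):
        phases = tick(phases)

    # Order parameter: a value near 1.0 means the legs have fallen into synch.
    order = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / NUM_LEGS
    print("synchronization (0 = scattered, 1 = in synch):", round(order, 2))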

  Another interesting robot was designed by Mark Pauline of the art group known as Survival Research Laboratories. Pauline and his group stage large, Dadaist spectacles in which hand-built robots interact with each other. Pauline is working on some new robots which he calls "swarmers." His idea is to make the swarmers radio-aware of each other's positions so that they can chase each other around, and to look for the settings that give the swarmers maximally chaotic behavior.
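  A bare-bones version of the swarmer chase might look like the following sketch; the arena size, speeds, and who-chases-whom rule are invented, not Survival Research Laboratories' settings. With these particular numbers the machines simply spiral in toward one another - finding settings where they don't settle down is exactly the knob-twiddling described above.

    # Toy "swarmer" chase: each machine knows the others' positions (as if by
    # radio) and steers toward the one it is chasing.

    import random

    NUM_SWARMERS = 5
    SPEED = 1.0          # distance moved per tick
    TICKS = 500

    # Random starting positions in a 100 x 100 arena; swarmer i chases swarmer i+1.
    pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(NUM_SWARMERS)]

    for _ in range(TICKS):
        new_pos = []
        for i, (x, y) in enumerate(pos):
            tx, ty = pos[(i + 1) % NUM_SWARMERS]     # position of the chased target
            dx, dy = tx - x, ty - y
            dist = (dx * dx + dy * dy) ** 0.5 or 1.0
            # Move at fixed speed toward the target's last known position.
            new_pos.append([x + SPEED * dx / dist, y + SPEED * dy / dist])
        pos = new_pos

    for i, (x, y) in enumerate(pos):
        print(f"swarmer {i}: ({x:.1f}, {y:.1f})")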

 
