
Biomimicry


by Janine M Benyus


  To complicate matters, there is not just one doorman receiving the message, but several different doormen, all getting different messages, which they may or may not pass on to helpers. Inside, the helpers have their own conundrums. They may receive messages from more than one doorman, and must then decide which message to respond to. In certain cases, they may decide to combine the messages and respond to the net action of the two.

  It’s no wonder that Gerald D. Fischbach, chairman of the Department of Neurobiology at Harvard Medical School, agrees that the neuron is “a sophisticated computer.” In a September 1992 article in Scientific American he writes: “To set the intensity (action potential frequency) of its output, each neuron must continually integrate up to 1,000 synaptic inputs, which do not add up in a simple linear manner…. The enzymes make a decision about whether the cells are going to fire and how they will fire…. [B]y fine-tuning their activity, [enzymes] may have an active role in learning. It may be their ability to change that gives us a malleable machine—the neuron.”
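Fischbach's point, that synaptic inputs do not add up in a simple linear manner, is the crux of the argument, and it can be caricatured in a few lines of Python. This is a toy sketch of my own, not Fischbach's model of a real neuron; the weights, the threshold, and the tanh nonlinearity are all invented for illustration:

```python
# Toy sketch (not a real neuron model): integrate many synaptic inputs
# through a nonlinearity before setting the intensity of the output,
# rather than acting as a bare on/off switch.
import math

def firing_rate(inputs, weights, threshold=3.0):
    """Sum the weighted synaptic inputs, then pass the net drive through
    a saturating nonlinearity; below threshold the neuron stays silent."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    if drive < threshold:
        return 0.0                       # sub-threshold: no firing
    return math.tanh(drive - threshold)  # graded, nonlinear output

# 1,000 synapses: 600 excitatory, 400 inhibitory, all currently active.
weights = [0.01] * 600 + [-0.005] * 400
inputs = [1.0] * 1000
rate = firing_rate(inputs, weights)   # net drive = 6.0 - 2.0 = 4.0
print(round(rate, 3))                 # 0.762, i.e. tanh(1.0): a graded rate
```

The point of the sketch is the shape of the output: silence below threshold, then a graded rate, which is already richer than the on-or-off behavior of a digital switch.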

  Thinking is certainly not the yes-or-no, fire-or-not-fire proposition it was once believed to be. Each week, biological journals are filled with descriptions of newly discovered messenger molecules, helpers, and doormen. There’s a cast of thousands in there, weighing and considering inputs, using quantum physics to scan other molecules, transducing signals and amplifying messages, and after all that computation, sending signals of their own. In silicon computing, we completely ignore this complexity, replacing neurons with simple on-or-off switches.

  “When you want to find the real computer behind the curtain,” says Conrad, “you have to put your cursor on the neuron and double click. That’s where you’ll find the computer of the future. What I want to do is replace a whole network of digital switches with one neuronlike processor that will do everything the network does and more. Then I’d like to connect lots of these neuronlike processors together and see what happens.” By this point, I knew better than to ask him what that might be. When adaptable systems are involved, prediction is futile.

  8. Brains are equipped to evolve by using side effects. Computers must freeze out all side effects.

  “How is a brain like a box-spring mattress?” riddles Conrad. Answer: Take one spring out of a box spring and you’re not likely to notice, because there are plenty of others. In the same way, nature builds in redundancy so that change, good or bad, can be accommodated. When we look at the nerve circuitry in a fish, for instance, we are appalled—it seems to be loops circling back on loops, as if nature’s engineer were lazy, adding new circuitry without removing the old. Nevertheless, this seemingly messy system works beautifully. When part of it fails, other regions take up the slack.

  Nature’s redundancy is built into the shapely origamis called proteins too. Conrad draws me a schematic of a typical protein, a string of amino acids folded spontaneously into a lyrical but functional shape. He draws the amino acids as geometric shapes and connects them with either springs (representing weak bonds) or solid lines (representing stronger bonds). Having enough “springs” to accept change is the protein’s secret to success. If a mutation adds an amino acid, for instance (Conrad draws in an exaggerated beach ball of a newcomer), the springy connections give to absorb the new player. This allows the active site—where chemical reactions occur—to remain undisturbed so it can continue to do its lock-and-key rendezvous. The fact that proteins can graciously accept incremental, mutational change without falling apart is important. It means they can improve over time.

  Life experiments like a child at play, says German biophysicist Helmut Tributsch. It dabbles in all the possible computing domains and learns to solve its problems creatively, harnessing every single force in the library of physical forces—electrical, thermal, chemical, photochemical, and quantum—to physically tune up neurons and their ways of communicating with one another. When small changes are permitted without a fuss, helpful effects gradually accumulate, and evolution pounces to a new level.

  What would be a nightmare to computer engineers—quantumly small computing elements, connected catawampus in dizzying parallelism, randomly interacting and coloring outside the lines—is what gives life its unswerving advantage. If it needs to recognize a pattern, learn something new, or stretch to assimilate new information, it molds its substrate to the task, adding new elements, shaking up the works until it works. This is the world that biological organisms revel in. The ability to ride that riot of foreseeable and unforeseeable forces has allowed nature to exploit myriad effects, becoming more efficient and better equipped all the time. The power to be unpredictable and to try new approaches is what gives life the right stuff.

  Our computers, by comparison, are in shackles.

  Computers can’t brook too much change. If you add a random line of code to a program, for instance, it’s not called a new possibility—it’s called a bug. Unlike biology, which built its empire on faults that turned to gold, computers can’t tolerate so much as a comma out of place in their code. Add a new piece of hardware to the inside of your computer, and no springs will adjust to accommodate it. The other components, which must remain true to their user-manual definitions, can’t interact with the newcomer or take advantage of the new interactions to bootstrap themselves to anything more efficient. No fraternizing among the transistors; no conspiring or self-organizing allowed.

  Unlike biology, which was able to transform the swim bladder in primitive fish into a lung, structurally programmable computers can’t transform their function, hitch up additional horses, or get any better at computing. In essence, they can’t evolve or adapt. When the really large problems crop up, they choke, and the bomb appears on the screen.

  In the age of Siliconus rex, says Michael Conrad, “We feel powerful, but what we’ve really done is trade away our power for control. To make sure only one thing happens at a time, we’ve frozen out all interactions and side effects, even those that could be beneficial or brilliant. As a result we have a machine that is thoroughly dead—inefficient, inflexible, and doomed by the limits of Newtonian physics.”

  And I had thought he was going to throw his arm around that old Mac Plus and gush.

  The nice thing about articulating the differences between brains and computers is that it gives you a clear mandate: If you want better computers, better stay to the brain side of the chart. First, design processors that are powerful in their own right. Fashion them in nature’s image by using a material that’s amenable to evolution, embedded in a system with a lot of springs. Then, when you challenge your computer with a difficult problem, it’ll hitch all its horses to the problem. Efficiency will soar. And when conditions change, and it needs to switch horses, it can adapt.

  So when Michael Conrad, way back in the seventies, went looking for a new computing platform, he had one big item on his wish list. He didn’t care if it was fast, he didn’t care if it could compute pi to the infinite decimal place. He didn’t even care if it could sing and dance. “I just wanted it to be a good evolver.”

  JIGSAW COMPUTING

  Back in those days, Conrad was thinking quite a bit about evolution at the molecular level. “I was in an origin of life lab and my professor wanted me to model the conditions necessary for evolution to evolve. I was to create a world in silicon, using linear string processing to represent proto-organisms that would have genotypes, phenotypes, material cycles, and environments—they would eat, compete, die, mutate, and have offspring. I was to find out what conditions would foment evolution and encourage the players to bootstrap themselves to higher states of complexity.”

  Conrad eventually created a program called EVOLVE—the first attempt at what is now called artificial life. “If I had claimed it was artificial life,” he says, “those programs would be more famous than they are today. But I didn’t see it as life; I saw it as a map in action.” Nevertheless, the exercise bore fruit and seeded his dream of nature-based computing. He says it happened one night when a dog was barking and he couldn’t sleep.

  “I lay awake in active thought for hours. I was resisting the idea of using string processing language because it wouldn’t allow me to capture the essence of biological processes. Biological systems don’t work with strings, I realized; they work with three-dimensional shapes.”

  In nature, shape is synonymous with function. Proteins start out as strings of amino acids, but they don’t stay that way for long. They fold up in very specific ways. To put it in computing terms, it would be as if a program written in the Pascal programming language were stored on magnetized beads. The program would run by folding up into a fork or spoon, thus determining its function—whether it could be used to stab a steak or slurp up bisque.

  Because molecules have a specific shape that can feel for other shapes, they are the ultimate pattern recognizers. And pattern recognition is what computing is all about! Patterns are not just physical arrangements in space, they can also be symbols—the Morse code is a pattern language, for instance, as is binary mathematics. Computing works because each switch in the tiny railyard recognizes a pattern of zeros and ones.

  Conrad began to fantasize. What if we built processors full of molecules that recognized patterns through shape-fitting—lining up like corresponding pieces of a puzzle and then falling together, crystallizing an answer? In this way, he thought, a lovely irony could occur. The pattern recognition that tiny molecules are so good at could be hitched together by the millions and used to solve larger problems of pattern recognition—like recognizing a face in real time in a complex environment. Acting as the Seeing Eye dog for digital computers would be a natural job for the efficient, parallel, and adaptable shape processor. And that would be only the beginning.

  “As I lay there I realized that the world’s best pattern processor, a protein, is also amenable to evolution. If we used proteinlike molecules to compute, we could vary them, or rather, allow them to mutate, tweaking their own amino acid structures until they were fit for a new task. Here was my evolver! In a rush, in a vision, the ‘tactilizing processor’ came to me.”

  Science writer David Freedman calls the tactilizing processor a computer in a jar, although there’s no saying what physical form it might take—it could float in a vial of water, or be trapped inside a hydrogel-liquid wafer as thin as a contact lens. Whatever form it took, the surface would no doubt bristle with receptor molecules—sensors—that are sensitive to light. Each receptor, when excited by a different frequency of light, would release a shape (a molecule) into a liquid. One receptor might release a triangle, the other a square, the third a shape that would join a triangle and a square. These released molecules would then free-fall through the solution until they met the shapes that complemented them. These three shapes would dock together jigsaw style into a larger piece—a “mosaic”—that would geometrically represent the incoming frequencies, the light signals. Different mosaics would be a way of categorizing the light inputs, or naming them.

  Let’s take an example. An image of a snowshoe hare is flashed onto the membrane surface (actually the image would be projected at a whole array of processors, but we’ll keep it simple). The excited receptors release their shapes, and each shape represents a part of the image—long white ears, big feet, whiskers. The self-assembled mosaic of those shapes says “snowshoe hare.” This naming, or generalizing from specific inputs into a category, is what our vision system does all the time.
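Conrad's processor exists only as a thought experiment, but the naming step he describes can be sketched in Python. Every name below (the receptor labels, the shape tokens, the mosaic table) is invented for illustration; the real device would do this with molecular docking, not dictionary lookups:

```python
# Hypothetical sketch of the "jigsaw" naming step: excited receptors
# release shape tokens, the tokens assemble into a mosaic, and the
# completed mosaic names the input pattern.

# Which shape each light-sensitive receptor releases (assumed mapping).
RECEPTOR_SHAPES = {
    "freq_ears": "long_white_ears",
    "freq_feet": "big_feet",
    "freq_whiskers": "whiskers",
}

# Completed mosaics and the category each one "crystallizes" into.
MOSAIC_NAMES = {
    frozenset({"long_white_ears", "big_feet", "whiskers"}): "snowshoe hare",
}

def tactilize(light_frequencies):
    # Step 1: excited receptors release their shapes into the "solution".
    released = {RECEPTOR_SHAPES[f] for f in light_frequencies
                if f in RECEPTOR_SHAPES}
    # Step 2: the free-floating shapes dock into a mosaic, and the
    # mosaic's overall shape is the answer, the name of the input.
    return MOSAIC_NAMES.get(frozenset(released), "unrecognized")

print(tactilize(["freq_ears", "freq_feet", "freq_whiskers"]))  # snowshoe hare
```

The crucial feature the sketch preserves is that no single receptor knows the answer; the name emerges only when all the released pieces come together.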

  Say you walk into a strange room and you see a chair you’ve never seen. It could be a kitchen chair, or an office chair, or an art sculpture of a chair covered with hair, yet your brain pegs it as a chair. It sees a place to sit down, a back, and four legs and shouts “I know, I know! It’s a chair!” Coding is also how the immune system works. When an immune cell recognizes a certain concentration of foreign objects on its membrane, it integrates those signals into a category—“We have a particular disease problem”—and it begins manufacturing the antibodies needed to fight the disease.

  For proof of his coding theory, Conrad points to the relatively small number of second messengers inside the cell compared to the vast number of messages impinging on the cell. “The fact that the cell employs so few second messengers to transduce [translate] this deluge of information is telling,” says Conrad. “It shows that there must be some kind of coding, or signal representation, going on in the cell.”

  In the tactilizing processor, the mosaic will play the role of the secondary messenger, transducing the signal and posting the answer in the form of a unique shape. Just as a cloud of cAMP in the neuron says “serotonin has arrived,” the mosaic’s shape will say “snowshoe hare.” But since the snowshoe hare mosaic is molecular (too small to be seen with the naked eye), we humans will need a way to amplify and read out the result of the computation. In the neuron, an enzyme called protein kinase “reads” the concentration of cAMP and responds to a threshold amount by opening or closing channel proteins. The enzyme in Conrad’s tactilizing processor will read the “snowshoe hare” mosaic by touch, and instead of opening a channel, it will get busy churning out a product that we can measure.

  The activated enzyme may grab two substrates in the solution, say chemical A and chemical B. Like a little machine, it will join these into product AB, then grab some more. After a time, the concentration of AB increases to the point that its characteristics can be measured by something like an ion-sensitive electrode or a dye that changes color when the pH or voltage changes. In this way, the enzyme amplifies the invisible to the visible.
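As a rough sketch of that amplification loop (all rates and thresholds here are invented numbers, not measured chemistry):

```python
# Illustrative sketch (parameters invented): a single activated enzyme
# amplifies a molecular event by joining substrates A and B into product
# AB until the concentration is large enough to measure.

def amplify(a, b, detect_threshold=1000, rate=50):
    """Join A and B into AB each time step until the product pool is
    big enough for an electrode or color-changing dye to register."""
    ab = 0
    steps = 0
    while ab < detect_threshold and a > 0 and b > 0:
        made = min(rate, a, b)   # enzyme turnover per time step
        a -= made
        b -= made
        ab += made
        steps += 1
    return ab, steps

ab, steps = amplify(a=10_000, b=10_000)
print(ab, steps)  # 1000 20: one molecular "yes" becomes a readable signal
```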

  Amplification schemes like these are used in biosensors all the time. In at-home pregnancy or cholesterol tests, for instance, receptors are immobilized on the surface of the tester, and when their open arms “catch” telltale molecules in your blood or urine, the receptors change shape. This shape change cues an enzyme to do its thing, usually a chemical reaction. Suddenly, as you stare at it, the stick turns blue.

  In the tactilizing processor, the inputs would be light signals, and the “stick” would actually be a whole array of light-receptive processors. Each processor would recognize a bit of ear, a bit of tail, and so on, and when they were combined, the entire image would be recognized. Without a single electric wire or silicon circuit, a large number of disparate signals would be sorted, coded, and translated, simultaneously, into a coherent answer.

  Given the time it takes for objects to float through liquid, however, is jigsaw computing fast? “No, actually. It’s not,” says Conrad. “Compared to a digital switch, the action of a readout enzyme would be up to five orders of magnitude slower.” This doesn’t seem to worry him, however. “Remember that we are not trying to do what silicon computers do well—we’re not hoping to beat them at their own game.” Digital computers, with their ability to perform repetitive operations at great speed, are perfect at recognizing bar codes and typewritten characters because the domain—all possible typewritten characters and stripes—can be whittled down to something finite that you can place in the computer’s memory banks. But when you open up the domain to anything and everything that might hop past the sensors, you need a lot more than speed.

  The advantage of scanning shape to arrive at a conclusion is that you are able to consider all the inputs—they all contribute to the shape-matching process, so each is fully represented in the final conglomerate, the mosaic. By contrast, silicon terminals simply average the inputs of zeros and ones to decide whether to let electrons through or not. This averaging actually blurs the inputs. If you were to force a conventional computer to be more precise—to replicate, in fact, the thorough scanning that floating molecules do for free—it would take our most powerful computers thousands of years. Conrad politely calls it “computationally expensive” and doubts whether it’s possible at all.

  Besides, he says, tactilizing is not as slow as it seems, thanks to quantum mechanics. Conrad’s latest articles are all about the “speedup effect,” which may explain why molecules snap together faster than predicted at normal Brownian mingling rates. He thinks that electrons are constantly “trying out” all possible orbitals or energy states, searching for the minimum, the spot where they can relax. Because of a quantum phenomenon known as quantum parallelism, they can actually explore more than one spot at once in the energy landscape. This parallel scanning allows two molecules to quickly line themselves into register and snap together for a secure fit. Our computers, with their strictly controlled regimes, couldn’t possibly be in two places at once. They might be able to digitally find a minimum energy level, but they would have to go through each and every possible conformation, one at a time. A glacially slow proposition.

  Another plus for the computer in a jar is its inborn talent for fuzzy computing. Patterns may dribble into the receptors, distorted in space or time, but the shapes floating in the medium will still find one another and compute the right answer. Given the flexible nature of shapes, mosaics, and enzymes, a good guess is likely to crystallize even if the inputs are faint or garbled.
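That tolerance for garbled input can be mimicked crudely with a nearest-fit rule. In this hypothetical sketch, each "shape" is just a list of numbers, and a distorted signal still docks with the template it most nearly complements; the templates and tolerance are invented, and real molecular recognition is far subtler:

```python
# Hedged sketch of the "fuzzy" shape fit: a distorted input still matches
# the template whose shape it most nearly complements, and no answer
# crystallizes at all if even the best fit is too loose.

TEMPLATES = {
    "snowshoe hare": [0.9, 0.8, 0.7],   # idealized feature "shape"
    "chair":         [0.1, 0.9, 0.2],
}

def best_fit(signal, tolerance=0.5):
    """Pick the template with the smallest shape mismatch; give up
    if even the closest fit exceeds the docking tolerance."""
    def mismatch(name):
        return sum(abs(s - v) for s, v in zip(signal, TEMPLATES[name]))
    name = min(TEMPLATES, key=mismatch)
    return name if mismatch(name) <= tolerance else "no answer"

print(best_fit([0.85, 0.7, 0.75]))  # a slightly garbled hare still reads as a hare
```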

  To everyone’s amazement, computing in this most natural of ways, going with the flow of physics and away from absolute control, turns out to be the most powerful form of computing. It’s both precise and fuzzy, depending on what is needed, and it handles vast oceans of data with ease.

  The question remains: When will tactilizing processors be sloshing inside my PowerBook? Conrad, beret and all, is a pragmatist. He has a good feel for the biocomputing field, having been the elected president of the International Society for Molecular Electronics and Biocomputing for a number of years and having served as an editor and board member on several international computing journals. “In one of our very first conferences,” he remembers, “I was thrown into a piranha tank of news reporters who heard we were building organic computers. They wanted to know when. Being very generous, I said fifty years, and their faces fell.”

  What Conrad means is that we’d need at least fifty years (I wanted to say a thousand, he admits) to have a computer built on shape-based principles only—which for him is the best of all possible worlds. Between now and then, however, you are likely to see more and more hybrids cropping up—conventional computers with organic prostheses attached. For example, his tactilizing processor may be the eyes and ears—the input device that predigests ambiguous information and feeds it to the digital computer. Tactilizing processors might also show up at the output end of things, as actuators—the devices that move the arms and legs of robots. While each tactilizing processor would be a computer in its own right, they would be small enough to be hooked up in parallel, perhaps connected in neural network designs. This team of complex processors would be more powerful, and more task-specialized, than anything we work with today.

 
