Biomimicry

by Janine M Benyus


  We have miles to go before realizing even this halfway dream, however. As Felix Hong, a coworker of Michael Conrad’s, emphasizes, “There is no infrastructure in molecular electronics as yet. You can’t go to a catalog and order parts to make a computer like this. Biosensors are the closest thing we have, and we would no doubt build off that technology for receptor and readout parts of the processor. But everything else—the macromolecules, the system design, the software—it all has to be made from scratch.”

  And that’s where breeding comes in.

  Computer, Assemble Thyself

  There may not be a catalog of molecular computer parts, but in Conrad’s head there exists a factory, which he describes in papers as the molecular computer factory. It’s unlike any factory we’ve ever seen, he assures me. It’s more like a giant breeding facility, mimicking nature’s tricks of evolution. Each element, both hardware and software, will be bred, through artificial selection, to do the best possible job and to interact well with other parts of the system. In this coevolutionary way, the molecular computer factory will resemble an ecosystem made up of different “members” challenging each other to work seamlessly together and up the ante of performance.

  Conrad describes it this way: “Instead of being controlled from the outside, by us, each processor will mold itself to the task at hand, while together, several processors will sharpen their ability to work as a team. They will actually evolve through a process of variation and selection toward an optimal peak, the best possible system for the conditions at hand.

  “We as engineers will coach the process. We’ll be the invisible hand of natural selection, winnowing out the losers and putting the winners through increasingly tougher trials. Our biggest challenge won’t be to create solutions (those will be generated randomly, the way species’ adaptations are), but rather to describe the task we want done and then set up the evolutionary criteria—the environment that challenges the evolving forms to do their best. This is a whole new way for engineers to think.”

  It may be new to computer engineers, but stepping into nature’s shoes and “defining the evolutionary criteria” is something with which we humans are very familiar. Ten thousand years ago our ancestors started to get choosy about the plants they ate and began saving the seeds of the tastiest, best-germinating, most uniform plants, tossing the rest over the garden gate. We were showing gene favoritism way back then.

  Today, we have the awesome (and somewhat frightening) power to isolate our favorite genes and make millions of copies of them. We can insert a gene that produces insulin, for instance, into bacteria and essentially borrow their protein synthesis machinery to make insulin for us. Conrad would use a similar scheme, but instead of insulin, he would want the E. coli to produce jigsawing macromolecules, light-sensitive receptors, and readout enzymes. The DNA blueprints for these molecules would probably be synthesized from scratch on oligo machines (which string together DNA bases into strands).

  “Finding the best structure for these molecules will be an evolutionary process,” says Conrad. “We’ll let the molecules, receptors, and enzymes strut their stuff in tactilizing processors, seeing how well they can recognize a test image. Each time they make an error, we’ll break apart the mosaic and let them try a new configuration. Just as biological systems are adept at finding a steady state, so too will the computer in a jar settle into a workable scheme for computing.

  “Swarms of variation trials would be running simultaneously with various teams of processors being played off against each other to see which one solves a problem most effectively. Each trial will yield star performers, who, like prize pigs, will be bred again for the next trial. We’ll encourage a mutation here and there, and then let them compete against their peers. Eventually, after a surprisingly small number of trials (thanks to the cumulative improvement power of variation and selection), we’ll have our custom-designed team.”

  Though it sounds outrageous at first, this idea of “directed evolution” has already proven its worth in the medical field. Gerald Joyce of the Scripps Research Institute in La Jolla, California, got everyone’s attention in 1990 when he announced that he was letting drugs design themselves.

  The technique is deceptively simple. Drug manufacturers often know that they need a molecule with a certain shape that will interfere with a disease mechanism—by clogging a receptor, for instance. Instead of designing it by hand, they mutate a starting molecule to produce billions of variants. They test those molecules by floating them past billions of receptors. The molecules that dock even partially are kept for the next trial. These are copied, mutated again, tested, and culled again. Since the fit keeps getting better and better, Joyce found that he was able to manufacture his first product (an RNA molecule called a ribozyme that cuts DNA in a specific place) in only ten generations. Now directed evolution, the biomimicking of natural selection, is being pursued by dozens of companies.
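  To make the loop concrete, here is a minimal sketch in Python: not Joyce’s chemistry, just the copy-mutate-cull logic, with a bit string standing in for a candidate molecule and its overlap with a fixed pattern standing in for docking. Every name and number below is invented for illustration.

```python
import random

RECEPTOR = "1011001110100101"   # stand-in for the target binding site
MUTATION_RATE = 0.05            # chance that any one position mutates when copied
POOL_SIZE = 1000                # how many mutant copies to screen per generation

def mutate(molecule):
    """Copy a 'molecule' with occasional point mutations."""
    return "".join(
        bit if random.random() > MUTATION_RATE else random.choice("01")
        for bit in molecule
    )

def affinity(molecule):
    """Score by how many positions 'dock' against the receptor pattern."""
    return sum(a == b for a, b in zip(molecule, RECEPTOR))

# Start from a random molecule and run ten rounds of copy, mutate, test, cull.
best = "".join(random.choice("01") for _ in RECEPTOR)
for generation in range(10):
    pool = [mutate(best) for _ in range(POOL_SIZE)]
    best = max(pool, key=affinity)
    print(generation, best, affinity(best))
```

Because only the best-docking copy seeds the next pool, the fit ratchets upward within a handful of generations, which is the cumulative improvement the technique depends on.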

  Survival of the Fittest Code

  OK, I tell Conrad, test-tube evolution is a long way from the pea gardens of Gregor Mendel (the monk who first fathomed the rules of heredity), but at least the molecules inside are biological. I can imagine natural selection working its magic on them, because they are organic and three-dimensional. But how do you plan to breed system designs, neural-net architectures, and software programs, all of which live exclusively in silico? How does one go about breeding strings of information, or programming code?

  Computers, as it turns out, are dandy breeding devices. Say you are an artist, and you want to evolve art on the computer. You write a line of programming code that will instruct the computer to draw a pyramid and then you tell the computer to slightly mutate this pyramid. You run the program twenty times and get twenty different pyramid variations. You then use your aesthetic sense to pick an attractive variant that you will allow to survive. You have this survivor’s DNA (the programming code) copy itself with further mutations, draw twenty new variations, and pick another winner. Choose again, and again, and again. Each choice nudges the drawing toward the artist’s ideal form—as if the artist is climbing the landscape of all possible forms to find the final, fully evolved form. This is already happening in a worldwide experiment called evolvable art on the World Wide Web. People vote for their favorites, and the group’s choice of code is then used to redraw the pictures, with slight mutations, every thirty minutes.
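  As a minimal sketch of that breeding loop, suppose the pyramid’s “DNA” is nothing more than a handful of drawing parameters (the names here are invented). The artist’s eye is the real fitness test, so the picking step below is only a stand-in.

```python
import random

def mutate(genome, sigma=0.1):
    """Copy the drawing's 'DNA' with small random changes to each parameter."""
    return {key: value * (1 + random.gauss(0, sigma)) for key, value in genome.items()}

def pick_favorite(variants):
    """Stand-in for the artist's eye; a real run would display all twenty
    drawings and let a person choose. Here we simply pick one at random."""
    return random.choice(variants)

# The 'DNA' of a pyramid drawing: a few numbers a draw routine would read.
survivor = {"base_width": 100.0, "height": 80.0, "tilt": 5.0}

for round_number in range(10):
    variants = [mutate(survivor) for _ in range(20)]   # twenty mutated offspring
    survivor = pick_favorite(variants)                 # the chooser keeps one
    print(round_number, survivor)
```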

  In 1985, Richard Dawkins, zoologist and author of The Blind Watchmaker, took a similar journey of exploration inside a computer. Instead of art pieces, he was investigating biological forms, looking for their common denominators, and so he wrote a program that gave the computer instructions for drawing a form. The instructions were simple rules, such as “draw a 1-inch line, fork it into two 1-inch lines, and repeat.” He then gave the program parameters such as “maintain left-right symmetry.”
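  A rule like that takes only a few lines of recursive code. The sketch below is in the spirit of Dawkins’s program rather than a reconstruction of it: a tiny “genome” of invented numbers sets the segment length, how quickly branches shrink, the branching angle, and the depth, and the fork-left, fork-right recursion keeps the form bilaterally symmetric. Mutating any one of those numbers yields a new form.

```python
import math

def grow(x, y, angle, genes, depth, segments):
    """Recursively 'draw a line, fork it into two, and repeat', bending one
    branch left and one right so the whole form stays left-right symmetric."""
    if depth == 0:
        return
    length = genes["length"] * genes["decay"] ** (genes["depth"] - depth)
    x2 = x + length * math.sin(angle)
    y2 = y + length * math.cos(angle)
    segments.append(((x, y), (x2, y2)))          # one drawn line segment
    grow(x2, y2, angle - genes["spread"], genes, depth - 1, segments)
    grow(x2, y2, angle + genes["spread"], genes, depth - 1, segments)

# A 'genome' of a few numbers; a plotting routine would turn segments into a picture.
genes = {"length": 1.0, "decay": 0.8, "spread": math.radians(25), "depth": 7}
segments = []
grow(0.0, 0.0, 0.0, genes, genes["depth"], segments)
print(len(segments), "line segments describe this form")
```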

  In all his years of crawling around jungles as a zoologist, Dawkins says he never experienced anything quite like the rapid blossoming of forms in his computer. Starting from complete randomness, his program managed to make something that looked vaguely biological within a few generations. When it did, Dawkins chose the most biological-looking forebear and had the program begin here, modifying this form. At each stage, he chose forms that looked more and more biological, until he began to recognize forms that actually exist in nature. That night, as the computer drew tulips and daisies and irises, he couldn’t pull himself away from the machine to eat or sleep.

  Early the next morning he decided to step back and start in a new direction with his selection. Amazingly, the program yielded beetles and water spiders and fleas—he’d run into the domain of insect forms! Instantly, Dawkins saw parallels between the instructions in his program code and genes. It was as if his programs were genes that, once “run,” came out with a phenotype—a drawing. Changing the instructions in the program was like changing genes to produce a slightly different individual. It was variation, which, when combined with selection of a winning offspring, was the formula for evolution.

  What a powerful method this artificial evolution is for finding an optimum solution! What if instead of an insect or a tulip drawing, you used artificial evolution to design a jet aircraft? You could give the computer some criteria—weight, cost, materials, say—and let it begin to spin out a program code for the design of a jet aircraft. That code could be copied faithfully and it could be copied with mutations. As John Holland, the father of genetic algorithms, found, you can even have your program codes undergo mating. To “mate” two programs, you join half of one program’s string of code to half of another program’s string. The offspring is thereby a mix of the two “parents.” With this digital sex, the generations of programs literally fly by, pausing only for testing against criteria that you select. Design programs that meet these criteria are mated to produce even better designs, which are once again tested. The selection process heads in one direction—successful designs survive and suboptimal ones “die” out of the population. This “hill climbing” in a landscape of possibilities toward an optimal design is what engineers do, but computers can generate random ideas much faster than most engineers. And computers, not yet able to feel embarrassment or peer pressure, are not afraid to try off-the-wall ideas. Ideas are just ideas; the more the merrier.
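  In skeleton form, Holland’s recipe fits on a single page. The sketch below is generic rather than any particular design system: the “genome” is a bit string, the fitness function is a stand-in for real criteria such as weight and cost (here it simply counts ones), mating splices half of one string to half of another, and the weaker half of the population dies out each generation.

```python
import random

GENOME_LENGTH = 32
POPULATION_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    """Stand-in for the design criteria (weight, cost, materials...).
    Here the 'best design' is simply the string of all ones."""
    return sum(genome)

def mate(parent_a, parent_b):
    """Join half of one program's string of code to half of another's,
    then sprinkle in a few mutations."""
    cut = GENOME_LENGTH // 2
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(50):
    # Successful designs survive; suboptimal ones 'die' out of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    offspring = [mate(random.choice(survivors), random.choice(survivors))
                 for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness after 50 generations:", fitness(max(population, key=fitness)))
```

Run it and the population climbs toward the all-ones design within a few dozen generations, a toy version of the hill climbing described above.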

  Giving Up Control

  As computing tasks become more complex—running a telephone system, flying a space shuttle, delivering electricity to more homes—our systems become harder to centrally control and repair. If we are to break out of our control-hungry straitjacket and achieve true power, says Conrad, we may have to loosen up the reins a bit. We may have to give computers their head, so to speak, give them the substrate (carbon) and the computing environment (artificial evolution) they need to creatively problem-solve so they can avoid troubles and perhaps even repair themselves. In the ultimate molecular computer factory of Conrad’s imagination, self-improvement regimes will be built into the computers, so that when they run into snags, they’ll be prompted to “create a new program using artificial evolution” until operations are smooth once again. Instead of crashing, they’ll adapt to changing conditions without having to go off-line for repairs.

  What’s hard for some to accept is that we’re not the ones coming up with the solutions, and we may not wholly understand why they work as well as they do. Michael Conrad isn’t a bit bothered. “I knew that I would have to give up control if I hoped to get real power, which is the power to adapt. I may not know where every single electron is, and I may not know why my molecular, shape-based device is doing such a good job. I’ll just have to evolve it, test it, and marvel at how well it works without knowing exactly why.”

  This is the essence of the “letting go” that Conrad talks about. It is counterintuitive to the engineer who was schooled in the old way—being graded not only on the solution but also on how he or she derived the solution. This new paradigm asks us to admit that other approaches may work as well as or even better than our own, even if we don’t recognize them as something that would have sprung from our imaginations.

  Life is like a rodeo—you can fight the bull’s every buck and be worn to a frazzle (if you aren’t gored first), or you can match your movements to your mount and see where it takes you. Deep inside our cells, where all the computing is going on, it’s still the Wild West. Proteins tumble in a maelstrom of Brownian motion, riding a riot of electrical attractions, quantum forces, and thermodynamic imperatives. The computer networks that can match their movements to these forces, says Conrad, are going to astonish and, sometimes, humble us, as only carbon-based creations can.

  SILICON COMPUTING IN A CARBON KEY

  Computing is not likely to convert to carbon overnight, however. Conrad acknowledges that we have an enormous investment in the silicon-based computers sitting on our desktops. Most of our data is now encoded in zeros and ones. One way to begin the transition to the biocomputer is to practice a hybrid of silicon and carbon computing—keeping the on-off switches from the silicon past, but replacing the silicon with molecules from nature.

  Conrad calls it “silicon computing in a carbon key.” It doesn’t change the fundamental approach to computing—that remains digital and linear—but it does bring organic molecules into play. Conrad doesn’t say as much, but I get the feeling he thinks using biomolecules to crunch zeros and ones is like using a Lamborghini to deliver newspapers. He’d rather put natural molecules through their real paces by utilizing their shape-matching talents, but, he concedes, it would be kind of fun to capitalize on their light-reacting capabilities right now.

  These days, one of the most promising avenues for speeding up computers is to think about abandoning electrons and using light pulses to represent zeros and ones. Many biological molecules are highly reactive to light. Some proteins actually move in predictable ways (they kink and unkink) when hit by certain frequencies of light. These proteins can be embedded in a solid material at densities orders of magnitude higher than conventional switches, and can be turned on and off via light waves—no tunneling electrons to worry about, and no buildup of heat.

  It sounded like a peak in the computing landscape worth visiting. At Michael Conrad’s suggestion, I contacted one of molecular computing’s gurus, a man who, according to Conrad, knows everything you’ve ever wanted to know about kinking proteins but were afraid to ask.

  When Light Flips the Switch

  Felix Hong is an irrepressible host. At 9:30 P.M., the lab is empty, and he’s unwrapping a new set of mugs. “Green tea?” Time slides by on stockinged feet when you are talking about someone’s favorite molecule, and bacteriorhodopsin (or, as its friends say, BR) is Hong’s very favorite. In the wild, BR is found spanning the membrane of a tiny, rod-shaped, flagellum-wielding bacterium called Halobacterium halobium. Halobacterium and its clan have survived for billions of years, in no small measure because of this strange protein in its cellular “skin.” In a poetic turnabout, this most ancient of proteins is now one of the hottest stars of molecular electronics, poised to fill a new niche in sixth-generation computers.

  Next time you fly into San Francisco, Hong tells me, look for the purplish smudge at the southeastern end of the Bay (toward Silicon Valley). That’s Halobacterium by the billions, living, reproducing, and fighting for survival in some of the harshest conditions life can handle. The daytime temperatures soar, the nights are cold, and the water is ten times saltier than the Pacific—enough to pickle most creatures. “Salty is a relative term,” he reminds me. “Halobacterium’s other favorite haunt is the Dead Sea.”

  These days, many laboratories around the world are trying to make Halobacterium feel at home. Engineers are growing the super-tolerant microbe in bulk, hoping it will be a willing ally for enzyme and bioplastics manufacture, desalination, enhanced oil recovery, and even cancer-drug screening. Besides being tough to kill (even at 100 degrees Celsius), it’s also full of strange engineering firsts, a brilliance born of adversity.

  For one thing, Halobacterium can toggle from being a food consumer to being a food producer. When conditions are good, explains Hong, it gathers food that other creatures produce, and metabolizes it, just as we do. But sometimes, when oxygen levels in its shallow sea home dip and there is no way to oxidize, or burn up, food, Halobacterium goes to Plan B. It assembles in its membrane a protein called BR that allows it to harness sunlight to make its own sugars.

  “Let me tell you how we think this works,” Hong says, launching into a summary that is the distilled liquor of thousands of studies (two hundred papers a year have been published on this one molecule since it was first discovered in the seventies). Basically, sunlight causes BR to change its shape in the membrane. As it moves, it hands a proton—a positively charged hydrogen ion—from the inside of the membrane to the outside. Photon after photon pumps proton after proton, until eventually there’s a buildup of positive charges outside the membrane relative to inside—a membrane potential that is poised to do work.

  The protons on the outside of the membrane are like water in a high lake that wants to get back to the valley, to restore the balance of energies. Their only way back into the cell is through the “turbines” of ATP synthase, another molecular machine that spans the membrane. As the protons move through this tiny turbine and back into the cell, ATP synthase extracts a toll; it uses the energy to attach a third phosphate to adenosine diphosphate, making adenosine triphosphate, or ATP. ATP is then a molecular cache of energy—when the bacterium needs a boost, it can sever the high-energy phosphate bond, breaking ATP down to ADP, and releasing the energy that came originally from the sun.

  “So you see,” says Hong, with admiration lighting his face, “BR is both a photon harvester and a proton pump. It is also a smart material—whereas most pumps would slow down due to the ‘back-pressure’ of protons on the outside of the membrane, it adjusts to keep pumping protons. We admirers of this intelligent molecule are like corporate spies trying to reverse-engineer a machine that is only fifty angstroms by fifty angstroms, or one five millionth of an inch long.”

  After rummaging through his desk, Hong presents me with a postcard of the Renaissance Center in downtown Detroit—a futuristic-looking skyscraper with seven glassy, cylindrical towers in a ring. “A souvenir to help you remember bacteriorhodopsin!” When I tell him I don’t understand, he smiles and shows me a computer-generated picture of BR. Seven helical columns that look like baloney curls stand in a ring around a light-sensitive pigment called retinaldehyde, or retinal A. “Retinal A is a close relative of the compound in our eye that helps us see in dim light. Nature is fond of reusing her winning designs in new ways,” he says, as he pours me more green tea. “In BR, she uses an eye pigment to pull down the sun.”

 
