Biomimicry

by Janine M Benyus


  3. Brains are not structurally programmable the way computers are.

  In the silicon railyard of wires and switches, the modern-day switchmen are programmers. They write instructions in the special language of programming code, which we call software. When we double click on a screen icon, our software whirs to life, barking orders to the switches deep inside the computer, telling the gates when to open or close, connecting the tracks in new ways, and thereby changing the structure of the network, enabling it to perform a new function. Making the computer “structurally programmable” was the dream child of a man named John von Neumann. He wanted the computer to be the player piano of information—a universal device that could, with software to morph the network, become a word processor, a spreadsheet, or a game of Tetris.

  Our brains, of course, are not structurally programmable. When we want to learn something, we don’t read a book that tells us how to change our brain chemistry to remember a blues riff or the date of Delaware’s statehood. We take on information, and our neuronal net is free to structurally store the data on its own, using whatever mechanical and quantum forces it can muster. Neuron connections are strengthened, axons grow dendrites, chemicals move in mysterious ways.

  It’s this physical processing, then, that makes our cells so different from our computers. While our PCs process information symbolically, with long strings of zeros and ones, our cells compute physically, working at the level of the molecule. We brain-owners take our lessons on an interpretive level—and the body automatically takes care of the rest. Michael Conrad’s vision for computing is perched on this same peak.

  4. Brains compute physically, not logically or symbolically.

  Suddenly, Conrad holds his pencil high above his desk and lets go. “This,” he says triumphantly as the pencil bounces, skitters, and rolls to a stop among his papers, “is how nature computes.” Instead of switches, contends Conrad, nature computes with submicroscopic molecules that jigsaw together, literally falling to a solution.

  Molecules are groups of atoms assembled according to the laws of physics into three-dimensional sculptures (think of the colorful ball-and-rod sculptures that scientists on Nova are always displaying). Large biomolecules can be made up of tens of thousands of atoms, and yet the finished object is still ten thousand times smaller than the cells in our bodies, a thousand times smaller than our silicon transistors. A molecule can’t chip or erode, and though it can be bent or flattened, it’ll always spring back to shape. The driving force at this scale is not gravity, but the push and pull of thermodynamic forces.

  A molecule’s goal in life is, like the pencil’s, to fall to the minimum energy level—to relax. When two molecules free-floating in a liquid bump into one another so that their shapes correspond like jigsaw pieces and their electrical charges line up in register, there is an immediate attraction—an adding together of their weak forces—that is stronger than the urge to stay separated. In fact, it would take more energy at this point to keep them apart than to let them self-assemble. Like people falling asleep and finally rolling toward the sag in the bed, complementary molecules “snap” together as they relax. It’s called “minimizing their free energy.”
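
  The phrase “minimizing their free energy” has a standard textbook form. The relation below is not in the book; it is simply the conventional way chemists write down why self-assembly happens on its own rather than needing a push:

```latex
% Conventional thermodynamic shorthand (illustrative, not from the text):
% complementary molecules self-assemble spontaneously when the Gibbs free
% energy of the bound pair is lower than that of the separated molecules.
\[
  \Delta G_{\text{binding}} \;=\; \Delta H \;-\; T\,\Delta S \;<\; 0
\]
% \Delta H   : enthalpy drop as the molecules' weak attractions add together
% T \Delta S : entropy cost of locking two free-floating molecules in place
```

  When the attraction term outweighs the entropy cost, holding the molecules apart costs energy and letting them snap together releases it, which is exactly the pencil-drop picture above.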

  Right now, mix-and-match molecules are snapping together in every cell in every life-form on the planet. Conrad believes their fraternizing is a form of information processing, and that each cell in our brain, each neuron, is a tiny, bona fide computer. The brain manages to wire together one hundred billion of these computers in one massive network. (To get a feel for that number, come stand under the velvet Montana sky and check out the Milky Way. It’s one hundred billion stars strong—one star for every person on Earth, times seventeen.) But there’s more. Inside each neuron are tens of thousands of molecules engaged in a fantastic game of chemical tag set in motion each time, for instance, the phone rings.

  It’s 2:00 A.M., and you are in a hotel room fast asleep. The phone rings, setting off an amazing feat of computation, biology style. The first set of sound waves pounds like a hurricane against the hairlike cilia in your inner ear. These movements are turned into electrical impulses that wake you. Your body’s mission is to integrate incoming signals, come to a conclusion, and do something, now.

  Adrenaline molecules, the Green Berets of fear and anger, bail out of a gland and into your bloodstream, heading for nerve endings. At the shoreline of the nerve endings, molecules called receptors hold out their “arms” to catch the adrenaline molecules. Once the receptors are full, they change shape and “switch on” special enzymes inside the cell, which in turn activate a whole cascade of chemical reactions. The effects differ depending on the cell.

  In your liver, the cascade may signal cells to start breaking down their stored sugar and swamping your bloodstream with glucose for fast energy. Your skin is told to tighten, your heart to speed, and your entire thirty-five feet of intestine to shut down (you have better things to do in a crisis than digest dinner). In your brain, the chemical cascade causes an electrical “action potential” to snake like a spark along a lipid (fat) fuse. At the end of its journey, it’s not the spark that jumps from one neuron to another, but another boatload of chemicals. And it’s this journey that most interests Michael Conrad.

  The chemicals that are released from one neuron to another are called neurotransmitters (serotonin, the mood regulator affected by Prozac, is one example). These burst through the cell membrane at the end of one neuron and float by the hundreds across the liquid strait—the synaptic gap—to the shore of another neuron. Here they dock in the waving arms of receptors, which, in turn, change shape and trip off a series of their own chemical cascades deep inside the new neuron.

  These chemical cascades cause gating proteins in the neuron’s membrane to open, letting in a milling crowd of salt ions. This influx of charged particles causes the electrical environment of the membrane to reverse itself right at the point of entry. The outside membrane, which was once positively charged relative to the inside, becomes negatively charged relative to the inside in that spot. This flip-flop travels like an electrical shiver down the neuron, and at the end, it prompts the release of yet another barrage of neurotransmitters that float across the synapse to the next neuron. The result of all this is you remembering who you are and where you are and what a phone is, and picking it up just in time to become simultaneously furious (it’s a prank) and relieved it wasn’t something worse.

  In crisis or in sleep, your body is busy at computational chores like this one. Carbon compounds in a million different forms are joining, separating, and rejoining to pass messages along. This process doesn’t happen just in neurons, either—it occurs in less flashy cells as well. Shape-based computing is at the heart of hormone-receptor hookups, antigen-antibody matchups, genetic information transfer, and cell differentiation, just to name a few. Life uses the shape of chemicals to identify, to categorize, to deduce, and to decide what to do: how many endorphins to make for the blissful runner’s high, which muscles to cause to contract, how many bacteria to kill, whether to become a tongue cell or an eye cell. Without shape-based computing, embryos—which begin life at the size of a period on this page and then divide only fifty times to become human babies—wouldn’t be able to follow their recipe for development. We literally wouldn’t be here without the chemical messenger system that is choreographed by shape-based, lock-and-key interactions.

  When Conrad explains these “chemical cascades,” he speaks as if he has floated across the straits of a synapse himself, ridden the fountain from the chemical signal up to the macroscopic electrical signal and back down to the chemical signal. “The most important conceptual journey for me was to go inside the neuron and slosh around at the chemical level,” he says. “There, three-dimensional molecules are computing by touch. Pattern recognition is a physical process, a scanning process, not the logical process it is when our computers recognize a pattern of zeros and ones. Life doesn’t number-crunch; life computes by feeling its way to a solution.”
  5. Brains are made of carbon, not silicon.

  If you are going to rely on shape to feel your way to a solution, you have to use molecules that can assume millions of different shapes. Life knew what it was doing when it chose carbon as its substrate for computing. For one thing, carbon is free to participate in a great variety of strong bonds with other atoms and is quite stable once bonded, neither donating nor accepting electrons. Silicon, on the other hand, tends to be more fickle in its bonding, and is not able to form as many shapes as carbon can. As a result, Conrad believes life could not have evolved its shape-based computing using silicon. “And that’s why, if we want to try physical computing as opposed to logical or symbolic computing, we have to eventually say goodbye to silicon and hello to carbon.”

  The clamor for carbon is not exactly heard across the land, however. Many artificial intelligence researchers are still putting all their faith in silicon. The sci-fi idea of “porting” our brains, or at least our thought patterns, to a computer host would supposedly allow us to live forever in silico. According to Conrad, it’s the ultimate mind-body split. “It’s absurd to think you can remove the logic of conscious thought from its material base and think you haven’t lost anything. Even if you were able to put your thought patterns in a numerical code (the premise of ‘strong’ artificial intelligence theory), it would be only the map, not the territory. The territory, the seat of intelligence, is proteins and sugars and fats and nucleic acids—all carbon-based molecules.”

  Matter matters. And so, it seems, does the connectedness of this matter.

  6. Brains compute in massive parallelism; computers use linear processing.

  Although neuroscientists have tried for decades to find the physical headquarters of consciousness, the grand central sage that organizes our thoughts, they have had to conclude that there is no central command. Instead, says author Kevin Kelly, the “wisdom of the net” presides. Thoughts arise from a meshwork of nodes (neurons) connected in democratic parallelism—thousands attached to thousands attached to thousands of neurons—all of which can be harnessed to solve a problem in parallel.

  Computers, on the other hand, are linear processors; computing tasks are broken down into easily executed pieces, which queue up in an orderly fashion to be processed one at a time. All calculations have to funnel through this so-called “von Neumann bottleneck.” Seers in the computing field bemoan the inefficiency of this setup; no matter how many fancy components you have under the hood, most of them are dormant at any given time. As Conrad says, “It’s like having your toe be alive one minute, and then your forehead, and then your thumb. That’s no way to run a body or a computer.”

  Linear processing also makes our computers vulnerable. If something blocks the bottleneck, that dreaded smoking bomb appears on the screen. The redundancy of net-hood, on the other hand, makes the brain unflappable—a few brain cells dying here and there won’t sink the whole system (good news to those who survived the sixties). A net is also able to accommodate newcomers—when a new neuron or connection comes on line, its interaction with other neurons makes the whole stronger. Thanks to this flexibility, a brain can learn.

  In an effort to imitate this brain-net in software form, a programming movement called “connectionism” has blossomed. In the last decade, “neural net” programs have been showing up on Wall Street, in manufacturing plants, and in political spin factories—wherever predictions need to be made. Neural nets are programs, like your word-processing program, that run on top of old-fashioned linear hardware. Inside your computer they create a virtual meshwork composed of input neurons, output neurons, and a level of hidden neurons in between, all copiously connected the way a brain might be.
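
  A minimal sketch of that virtual meshwork, in Python (my illustration, not the book’s; the layer sizes, random starting weights, and sigmoid squashing function are arbitrary choices), might look like this:

```python
# A toy "virtual meshwork": input neurons, one hidden layer, output neurons,
# all copiously connected by weighted links. Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 3, 5, 1

# Every input neuron connects to every hidden neuron, and every hidden
# neuron to every output neuron; the weights are the strengths of those links.
W_in_hidden = rng.normal(size=(n_inputs, n_hidden))
W_hidden_out = rng.normal(size=(n_hidden, n_outputs))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs):
    """Pass a signal from the input layer through the hidden layer to the output."""
    hidden = sigmoid(inputs @ W_in_hidden)
    return sigmoid(hidden @ W_hidden_out)

print(forward(np.array([0.2, 0.7, 0.1])))   # an untrained net's wild guess
```

  The untrained net’s answer is meaningless; the point is only the shape of the thing—a layer of inputs, a hidden layer, a layer of outputs, every neuron in one layer connected to every neuron in the next.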

  Neural nets digest vast amounts of historical data, then seek relationships between that data and actual outcomes. At a campaign headquarters, for instance, a net might crunch all the polling and demographic data for 1992 and then try to find a relation between that and who won the New Hampshire primary. Eventually, you want your net to concoct a rule about it all, something like “If X and Y occur, then chances are Z will happen.” Usually it takes some practice to come up with this rule, in the same way that a dog has to catch a few Frisbees before it makes up a rule about where a Frisbee will land. The neural net isn’t a great predictor right out of the box; you have to train it by tossing it statistics from the past and having it guess the outcome.

  Say a soda manufacturer wants a neural net to predict its sales figures in a particular town. It feeds the net reams of historical information: monthly temperatures, demographics, and advertising budget spent there in previous years. Given this constellation of conditions, the net connects its neurons in a certain way and tries to guess sales in previous years. At first, it ventures a wild guess. The trainer then feeds back the correct answer—the actual sales figures—and the net adjusts its connections and guesses again. It keeps readjusting its connections, revising its rule until it can correctly predict where the data will lead.
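
  Here is a toy version of that training loop, with invented numbers standing in for the soda maker’s historical data (temperature and ad budget in, sales out). The learning rate, hidden-layer size, and gradient-descent update are ordinary textbook choices, not anything specified in the book:

```python
# A toy training loop in the spirit of the soda example: the net guesses past
# sales, is told the real figures, and nudges its connection weights each pass.
# All data and settings below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical history: [average temperature (F), ad budget ($k)] -> sales (k cases)
X = np.array([[60.0, 10.0], [75.0, 20.0], [85.0, 15.0], [95.0, 30.0]])
y = np.array([[40.0], [70.0], [65.0], [100.0]])

# Scale inputs and outputs to keep the arithmetic well behaved.
X_s = X / X.max(axis=0)
y_s = y / y.max()

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden connections
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output connections
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: the net's current guess at the sales figures.
    hidden = sigmoid(X_s @ W1)
    guess = hidden @ W2

    # The trainer feeds back the actual sales; the error drives the adjustment.
    error = guess - y_s
    grad_W2 = hidden.T @ error
    grad_W1 = X_s.T @ ((error @ W2.T) * hidden * (1.0 - hidden))

    W2 -= lr * grad_W2 / len(X_s)
    W1 -= lr * grad_W1 / len(X_s)

print("trained guesses (k cases):", (sigmoid(X_s @ W1) @ W2 * y.max()).ravel())
print("actual sales    (k cases):", y.ravel())
```

  Each pass through the loop is one round of guess, hear the actual figure, adjust the connections; after enough rounds the adjusted weights embody the net’s rule.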

  The reason nets learn so quickly is that connections between inputs can be weighted, as in, this input is more important than that input, so this connection should be strengthened. To the student of brain science, this theory of learning seems more than faintly familiar. In 1949, Canadian psychologist Donald O. Hebb postulated that memories (associative learning) were processed physically—the connections between neurons actually changed—and they grew stronger or weaker depending on whether neuron A had caused neuron B to fire. The idea was that the next time neuron A fired, neuron B would be more likely to fire because of some sort of “growth process or metabolic change” that strengthened the connection between the two. Hebb’s guess was that dendritic, or branching, “spines” would grow between nerve cells to establish stronger connections. “It’s the neurons-that-play-together-stay-together idea,” says Conrad.
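
  The usual textbook caricature of Hebb’s postulate (a sketch, not Hebb’s own formula or anything quoted in the book) is a weight change proportional to the joint activity of the two neurons:

```python
# Hebbian caricature: if neuron A helps make neuron B fire, strengthen A -> B.
import numpy as np

eta = 0.1                         # learning rate (illustrative)
pre = np.array([1.0, 0.0, 1.0])   # activity of three presynaptic neurons (A)
post = 1.0                        # activity of the postsynaptic neuron (B)
w = np.zeros(3)                   # connection strengths A -> B

w += eta * pre * post             # "neurons that fire together wire together"
print(w)                          # only the links whose A-neurons were active grow
```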

  While our in silico neurons can’t exactly grow spines, the network is able to adjust its connections again and again during a training process, all the while nudging toward a correct answer and, in the process, embodying a predictive model (a rule) in its network architecture. Once the winning network configuration is in place, these virtual neurons, run in virtual parallel, can quickly and uncannily reach the right solutions. In no time, they’re catching the Frisbee on the run.

  The next step, of course, is to build net-hood right into the hardware. Some computer designers are already etching neural nets onto silicon chips, while Thinking Machines, Inc., is hooking sixty-four thousand processors together into one giant Connection Machine. Assuming I could afford the $35 million model, I ask Conrad, would my new Connection Machine running a neural net be more like a brain?

  “Connectionist hardware and software bring us closer,” he says, “but they still miss an essential truth. Connections are important, but connecting simple switches or simple processors together is not how the brain got to where it is today.” The brain astounds because every single neuron in the net is a wizard in its own right. And neurons are far from simple.

  7. Neurons are sophisticated computers, not simple switches.

  In the late sixties and early seventies, Conrad thought extensively about neurons and their interplay. “I began to realize that the neuron was a full-fledged chemical computer, processing information at a molecular level.” His first papers about “enzymatic neurons” appeared in 1972 to somewhat skeptical reviews. “It’s still controversial to call a neuron a chemical computer,” he says, “but today, more and more neurophysiologists seem sympathetic to the idea. Finding someone who believed as I did twenty years ago—now that was a red-letter day.

  “It was 1978 or ’79, I think. A student came into my office and showed me an abstract of a paper on molecular computing by E. A. Liberman, and I thought, so there is someone else in the world using this term. I immediately arranged to visit his lab.” Conrad spent the following year as a U.S. National Academy of Sciences Exchange Scientist to what was then the Soviet Union.

  He and Liberman spent a lot of time talking about what makes neurons tick. Up to this point, neurons had been studied only for their response to electrical probings, the theory being that electrical impulses alone were responsible for thought. But as Liberman showed Conrad, neurons could fire without electric help. All a neuron needed was an injection of cyclic AMP, the chemical messenger that is instrumental in the cascade of signals leading to a neuron’s firing. The shot of cAMP not only caused the neuron to fire, but “different concentrations of cAMP had the neuron talking differently and fairly rapidly to other neurons.” It was a stunning sight, remembers Conrad.

  Other labs were doing similar experiments. It soon became clear to other scientists that neuron communication was an electrochemical phenomenon, a dance far more complex than the simple “yes or no” of neuronal firing. When a neuron makes a decision, it has to consider some one thousand opinions coming from the axons attached to it. Instead of just averaging votes, it considers these opinions in detail. The receptors bobbing in the cell membrane are like doormen that receive messages from at least fifty different brands of neurotransmitter. The doormen in turn relay the message to “helpers” inside the cell who create secondary messages in the form of clouds of chemicals such as cAMP. Above a certain threshold concentration, cAMP turns on an enzyme called protein kinase, which in turn opens a gating protein. The gating protein causes a channel in the membrane to open or close, letting in or keeping out charged particles, thereby controlling the electrical shiver, and controlling whether and just how rapidly the neuron will fire.
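
  To make that chain of events concrete, here is a deliberately cartoonish sketch (mine, not Conrad’s model, with invented numbers) of receptor occupancy building a cAMP cloud that, past a threshold, switches on the kinase, opens the gate, and sets the firing rate:

```python
# A cartoon of the cascade described above: receptor occupancy builds a cloud
# of second messenger (cAMP); above a threshold concentration the kinase
# switches on, the gate opens, and the neuron fires at a rate that tracks how
# far past threshold the cloud is. All numbers are invented for illustration.

CAMP_PER_OCCUPIED_RECEPTOR = 0.02   # arbitrary concentration units
KINASE_THRESHOLD = 1.0              # cAMP level at which protein kinase switches on

def firing_rate(occupied_receptors: int) -> float:
    """Map how many receptors caught a transmitter onto a firing rate (spikes/s)."""
    camp = occupied_receptors * CAMP_PER_OCCUPIED_RECEPTOR
    if camp < KINASE_THRESHOLD:
        return 0.0                  # kinase stays off, gate stays shut, no spike
    # Past threshold, the gate opens wider as the cAMP cloud grows (capped).
    return min(100.0, 40.0 * (camp - KINASE_THRESHOLD))

for n in (20, 50, 60, 120):
    print(n, "occupied receptors ->", firing_rate(n), "spikes/s")
```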

 
