Connectome

by Sebastian Seung


  Amazingly, neurosurgeons sometimes chill the body and brain intentionally. In a dramatic medical procedure called Profound Hypothermia and Circulatory Arrest (PHCA), the heart is stopped and the entire body is cooled below 18 degrees Celsius, slowing life’s processes to a glacial pace. PHCA is so risky that it’s used only when surgery is required to correct a life-threatening condition. But the success rate is quite high, and patients usually survive with memories intact, even though their brains were effectively shut down during the procedure.

  The success of PHCA supports a doctrine known as the “dual-trace” theory of memory. Persistent spiking is the trace of short-term memory, while persistent connections are the trace of long-term memory. To store information for long periods, the brain transfers it from activity to connections. To recall the information, the brain transfers it back from connections to activity.

  The dual-trace theory explains why long-term memories can be retained without neural activity. Once activity induces Hebbian synaptic plasticity, the information is retained by the connections between the neurons in a cell assembly or synaptic chain. During recollection later on, the neurons are activated. But during the period between storage and recall, the activity pattern can be latent in the connections without actually being expressed.
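
  To make the dual-trace idea concrete, here is a minimal toy sketch in Python (an illustration added here, not a model from the book): one activity pattern is written into a matrix of connection strengths by a Hebbian rule, lies latent there while no activity is present, and is then reinstated as activity from a partial cue. All sizes, values, and variable names are arbitrary.

```python
# A minimal sketch (not from the book) of storing a pattern of activity in
# connections via a Hebbian rule and later recalling it from a partial cue.
# This is a toy Hopfield-style network; all numbers are illustrative.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the "memory" as +/-1 activity

# Storage: activity shapes connections (Hebbian outer-product rule).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                            # no self-connections

# Between storage and recall, the pattern is latent in W; no activity is needed.

# Recall: a corrupted cue is driven back toward the stored pattern by the connections.
cue = pattern.copy()
cue[:3] = -cue[:3]                                  # corrupt part of the cue
state = cue.astype(float)
for _ in range(5):                                  # repeated updates let activity settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))               # True: the memory is reinstated
```

  Running the sketch prints True: the stored pattern is recovered as activity even though part of the cue was corrupted, because the full pattern was latent in the connections.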

  It may seem inelegant to have two information stores. Wouldn’t it be more effective for the brain to use just one? Computers, which are also used to store information, provide a helpful analogy. A computer contains two storage systems: the random access memory (RAM) and the hard drive. A document remains stored on your hard drive for long time periods. When you open the document in your word-processing program, your computer transfers the information from the hard drive to RAM. As you edit the document, the information in RAM is modified. When you save the document, your computer transfers the information from RAM back to the hard drive.

  Since a computer was designed by human engineers, we know why it has two memory storage systems. The hard drive and the RAM both have their advantages. The hard drive has the virtue of stability; it can store information indefinitely, even if the power is turned off. In contrast, information in the RAM is volatile, easily lost. Imagine a power outage in the midst of editing, which causes all electrical signals inside the computer to cease. When you turn the computer on again (“reboot”) and open the document, it will be intact—it was stored stably on the hard drive. But if you look closely, you will see that the document is the old version. Your edits, which were stored in the RAM, have disappeared.

  If the hard drive is so stable, why use RAM at all? The answer is that RAM is speedy. Information in RAM can be modified more quickly than information on the hard drive. That’s why it pays to transfer the document into RAM while editing and then transfer it back to the hard drive for safekeeping. It’s often the case that the more stable something is, the more difficult it is to modify.
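
  The analogy can be put in a few lines of code. The sketch below (again an added illustration; the filename is arbitrary) treats a Python variable as the volatile copy in RAM and a file on disk as the stable copy on the hard drive.

```python
# A minimal sketch of the two-store analogy: a variable in memory plays the
# role of RAM (fast but volatile), a file on disk plays the hard drive
# (slower but persistent). The filename is illustrative.
from pathlib import Path

doc_path = Path("document.txt")
doc_path.write_text("old version")          # the stable, long-term copy on disk

draft = doc_path.read_text()                 # load into memory: fast to modify
draft += " plus new edits"                   # the edits exist only in RAM so far

# If the process crashed here (a "power outage"), the edits would vanish,
# but document.txt would still hold the old version.

doc_path.write_text(draft)                   # "save": transfer RAM back to disk
```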

  This tradeoff has been named the “stability–plasticity dilemma” by the theoretical neuroscientist Stephen Grossberg. Plato already recognized it in his dialogue Theaetetus. He explained memory failures as being caused by wax that is too hard or too soft. Some people have trouble storing memories, because their wax is too hard to be imprinted. Others have trouble retaining memories, because impressions are easily effaced from their too-soft wax. Only if wax is neither too hard nor too soft can it both take an impression and retain it.

  The tradeoff between stability and plasticity may also explain why the brain uses two information stores. Like information in RAM, patterns of spiking change quickly and are suited to active manipulation of information during perception and thought. Because they are easily disturbed by new perceptions and thoughts, patterns of spiking are useful only for retaining information over short periods of time. Connections, in contrast, are analogous to the hard drive. Because connections change more slowly than spiking patterns, they are less suited to active manipulation of information. They are still plastic enough to store information, however, and stable enough to retain it for long durations. Hypothermia quenches neural activity, similar to the way that a power outage erases the RAM of a computer. Connections are left intact, so long-term memories survive. But recent information is lost, having not yet been transferred from activity to connections.

  Can the stability–plasticity tradeoff also help us understand why the brain might use reconnection in addition to reweighting as a means for storing memories? Through Hebbian plasticity, neural spiking is continually altering synaptic strengths. Therefore the strength of a synapse is not so stable, and the memories stored by reweighting might not be either. This could explain why the memory of what you had for dinner yesterday will most probably fade. On the other hand, the existence of a synapse may be more stable than its strength. A memory stored by reweighting might be further stabilized by reconnection. This is likely the case for memories that endure for a lifetime, such as your name. Indelible memories may depend less on maintaining synaptic strengths at constant values and more on maintaining the existence of synapses. As a more stable but less plastic means of storing memories, reconnection may serve a complementary role to reweighting.

  This chapter has been a mixture of empirical fact and theoretical speculation, biased uncomfortably toward the latter. We know for sure that reweighting and reconnection happen in the brain. Whether these phenomena create cell assemblies and synaptic chains is unclear, however. More generally, it has been difficult to prove that these phenomena are involved in any way in the storage of memories.

  One promising method is to disable Hebbian synaptic plasticity in animals using drugs or genetic manipulations that interfere with the appropriate molecules at synapses, and then do behavioral experiments on the animals to see whether and how memory is impaired. Such experiments have already yielded fascinating and tantalizing evidence in support of connectionism. Unfortunately, the evidence is only indirect and suggestive. And its interpretation is complicated, because there is no perfect way of getting rid of Hebbian synaptic plasticity without creating other side effects.

  The following parable is my attempt to illustrate the difficulties that neuroscientists face in testing theories of memory. Suppose that you are an alien from another planet. You find humans ugly and pathetic but are nevertheless curious about them. As part of your research you are spying on a particular man. He carries a notebook in his pocket. Every now and then he opens it and leaves marks on the pages with a pen. Sometimes he opens the notebook, looks at it briefly, and puts it back in his pocket.

  You find this behavior puzzling, since you’ve never seen or heard of writing. Tens of millions of years ago your ancestors used writing, but that stage of evolution has been completely forgotten. After a great deal of thinking, you formulate the hypothesis that the man is using the notebook as a memory device.

  One night, in order to test your hypothesis, you hide the book. In the morning the man spends a long time wandering about his house, looking under his bed, opening cabinets, and so on. For the rest of the day his behavior sometimes looks different, but only marginally so. You are feeling a bit discouraged, so you imagine other experiments to test your hypothesis: Cut just a few pages out of the book. Dunk it in water to erase the marks. Swap his notebook with someone else’s.

  The most direct test would be to read the writing in the notebook. By decoding the ink marks on the paper, you might be able to predict the events of the man’s coming day. If your predictions turned out to be correct, that would be strong evidence that the notebook stores information. Unfortunately, you are now over twenty thousand years old and farsightedness has set in. Although your surveillance device allows you to look at the notebook, you can’t see the writing clearly. (It’s a bit far-fetched, but let’s suppose that your alien civilization hasn’t invented reading glasses or bifocals.)

  Like you, the farsighted alien, neuroscientists want to test a hypothesis about memory. They believe that information is stored by modifying the connections between neurons. To test the hypothesis, they destroy the brain areas that contain the connections, just as you hide the notebook that contains the writing. They measure whether the brain area is activated when memory tasks are being done, just as you check whether the man pulls the notebook from his pocket when he needs to remember something.

  Another strategy would be more direct and conclusive: attempt to read memories from connectomes. Look for the cell assembly and synaptic chain to see if they actually exist. Unfortunately, in the same way that your farsighted eyes can’t even see the writing in the man’s book clearly (much less decode it), neuroscientists can’t see connectomes. That’s why we need better technologies to understand the mysteries of memory.

  Before I describe these emerging technologies and their potential applications, I need to talk about one more important factor that shapes connectomes. Experience may reweight and reconnect neurons, but genes shape connectomes as well. In fact, one of the most exciting prospects for connectomics is the promise of finally uncovering the interplay between the two. The connectome is where nature meets nurture.

  Part III: Nature and Nurture

  6. The Forestry of the Genes

  The ancient Greeks compared human life to a slender thread—spun, measured, and cut by three goddesses called the Fates. Today biologists search for the secrets of human destiny in a different thread. The molecule known as DNA consists of two strands wound into a double helix. Each strand is a chain of smaller molecules called nucleotides, which come in four types designated by the letters A, C, G, and T. Your DNA spells out billions of these letters, in a sequence known as your genome. This sequence contains tens of thousands of shorter segments called genes.

  It has been obvious throughout human history that children look a lot like their parents. When a baby is born, the comments start almost immediately—“She’s got your eyes!” “He has your curly hair!” DNA provides an explanation. A child inherits half its genes from one parent and half from the other, and therefore inherits traits from both. Everyone accepts this idea for the body, but it’s more controversial for the mind.

  Perhaps the human mind is so malleable that it is shaped more by experiences than by genes, as Locke believed when he compared the mind to white paper, ready to be inscribed. Then again, there’s no question that children often resemble their parents in more than just looks. You can try to deny it when someone tells you, “The apple doesn’t fall far from the tree” or “You’re a chip off the old block,” but there will come a day when you realize you just responded to a situation in exactly the way your father did three decades earlier. But of course this anecdotal observation, while suggestive, won’t prove anything. The similarity might be the result of upbringing rather than genes.

  These two explanations—genes and upbringing—were called “nature” and “nurture” by Francis Galton. Only in the twentieth century did the nature–nurture debate finally move beyond philosophical assertion and personal anecdote. Convincing evidence came from monozygotic (MZ) twins, who originated from a single zygote (fertilized egg cell) and therefore share the same genome. Researchers identified and studied MZ, or “identical,” twins who were separated at an early age and raised in different adoptive families. Their IQ scores turned out to be as similar as their physical traits, such as height and weight. They were much more similar than the IQ scores of two persons chosen at random. The extra similarity can’t be explained by shared environment, because these twins were raised in different adoptive families. It can plausibly be explained by their shared genome. From this data, it appears that genes influence IQ as strongly as they influence physical traits.

  This kind of comparison has been repeated for many other mental traits beyond IQ. Personality tests are filled with questions like “I see myself as someone who tends to find fault with others,” to which the test taker responds with an answer between 1 (“strongly disagree”) and 5 (“strongly agree”). Twins score less similarly on personality tests than on IQ tests, but their scores are still more similar than those of two persons chosen at random, even if the twins were raised apart. This means that personality is more malleable than IQ, but genetic factors are still important.

  For a long time, twin studies aroused intense opposition from believers in the power of nurture. By now, though, the studies have been replicated so many times that there remains little room for argument. The psychologist Eric Turkheimer has promulgated the First Law of Behavior Genetics: “All human behavioral traits are heritable.”

  This law holds not only for mental differences between normal people but also for mental disorders. Early on, those trained in the psychoanalytic tradition believed that autistic children were the product of “refrigerator mothers.” In a 1960 profile of Leo Kanner, the psychiatrist who first defined autism, Time magazine wrote: “All too often this child is the offspring of highly organized, professional parents, cold and rational—the type that Dr. Kanner describes as ‘just happening to defrost enough to produce a child.’” But Kanner was actually ambivalent in his beliefs about the cause of autism. In the conclusion of the 1943 paper in which he originally defined autism, he noted that many of his patients had emotionally cold parents, but he went on to say that their condition was innate.

  This leads us to another possible cause of autism: faulty genes. Researchers have explored this idea, too, by studying twins. If autism were completely determined by genetic factors, we’d expect MZ twins to both be autistic or both be normal. In fact, the agreement is not perfect. If one twin has autism, so does the other, with 60 to 90 percent probability. Since this concordance rate, as it is called, is less than 100 percent, autism is not completely determined by genes. Nevertheless, the rate is still high, and suggests that genetic factors are important for autism.

  Of course, this statistic is not conclusive by itself. Because twins generally grow up in the same household, they tend to have similar experiences. If Kanner’s “refrigerator mothers” were the cause of autism, that too would lead to high concordance rates. In the IQ studies, the effects of genes and environment were teased apart by studying MZ twins adopted and raised in separate households. It’s difficult to locate such twins, and even more difficult to find such twins with autism, so geneticists have taken a different approach. They study twins raised together, and assess the importance of genes by comparing MZ twins with dizygotic (DZ), or “fraternal,” twins. It turns out that the concordance rate for autism is relatively low in DZ twins, just 10 to 40 percent. This lower concordance rate is easily explained if autism is influenced by genetic factors, since DZ twins are genetically less similar than MZ twins. (DZ twins share 50 percent of their genes, while MZ twins share 100 percent.)
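
  For readers who want to see how such a figure is computed, here is a small sketch of a pairwise concordance calculation. The twin-pair counts are invented for illustration and do not come from any actual study.

```python
# A small sketch of how a (pairwise) concordance rate is computed from twin
# data. The counts below are made up for illustration, not real study numbers.
def pairwise_concordance(both_affected: int, one_affected: int) -> float:
    """Fraction of affected pairs in which both twins are affected."""
    return both_affected / (both_affected + one_affected)

mz = pairwise_concordance(both_affected=18, one_affected=7)   # hypothetical MZ pairs
dz = pairwise_concordance(both_affected=5, one_affected=20)   # hypothetical DZ pairs
print(f"MZ concordance: {mz:.0%}, DZ concordance: {dz:.0%}")
# A much higher MZ rate than DZ rate is the pattern that points to genetic factors.
```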

  What about schizophrenia? The concordance rate is again lower for DZ twins (0 to 30 percent) than for MZ twins (40 to 65 percent). These numbers suggest that genetic factors are important for schizophrenia as well.

  The studies of twins show that genes matter, but they do not explain why. Before I tackle the answer (or many answers) to this question, let me explain some things about genes.

  You can think of a cell as an intricate machine built from molecular parts of many types. One of the main types is a class of molecules known as proteins. Some protein molecules can be structural elements, supporting the cell like the studs and joists of a wooden house frame. Other protein molecules perform functions on other molecules, much as workers in a factory handle parts. Many proteins combine both structural and functional roles. And the cell is more dynamic than most man-made machines, as many of its proteins move around from place to place.

  It’s commonly said that DNA is the blueprint of life, because it contains the instructions that cells follow to synthesize proteins. Just as DNA is a chain of nucleotides, a protein molecule is a chain of smaller molecules called amino acids, which come in twenty types. Each kind of protein is specified by a sequence of letters, but the alphabet contains twenty letters rather than the four used in DNA. This amino acid sequence is specified by a (mostly) contiguous string of letters—a gene—in your genome. To produce a protein molecule, the cell reads the nucleotide sequence of a gene and “translates” it into the corresponding amino acid sequence. (The dictionary for translation is known as the genetic code.) When a cell reads a gene and constructs a protein, it is said to “express” the gene.
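
  As an added illustration of the translation step, the sketch below walks a made-up DNA sequence three letters at a time and looks each codon up in a partial table of the genetic code (only five of the sixty-four codons are shown).

```python
# A minimal sketch of "translation": reading a gene's nucleotide sequence three
# letters (one codon) at a time and looking each codon up in the genetic code.
# Only a handful of the 64 codons are listed; the sequence is made up.
CODON_TABLE = {          # partial standard genetic code (DNA codons)
    "ATG": "Met", "TTT": "Phe", "GGA": "Gly", "TGG": "Trp", "TAA": "STOP",
}

def translate(gene: str) -> list[str]:
    protein = []
    for i in range(0, len(gene) - 2, 3):          # step through the codons
        amino_acid = CODON_TABLE[gene[i:i + 3]]
        if amino_acid == "STOP":                  # a stop codon ends the protein
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGATGGTAA"))   # ['Met', 'Phe', 'Gly', 'Trp']
```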

  You started your life as a single cell, an egg fertilized by a sperm. This cell divided in two, and its progeny divided, and so on for many generations to produce the huge number of cells in your body. Every dividing cell replicated its DNA and passed on identical copies to its progeny. That’s why every cell in your body contains the same genome. Why then do a liver cell and a heart cell look different and perform different functions? The answer is that cells of different types express different genes. Your genome contains tens of thousands of genes, each corresponding to a different kind of protein. Each type of cell expresses some of these genes but not others. Neurons are arguably the most complex type of cell in the body, so it’s no surprise that many genes encode proteins that are exclusively or partially devoted to supporting functions in neurons. This is a preliminary answer to the question of why genes matter for the brain.
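
  The point that every cell carries the same genome but expresses only part of it can be put schematically. In the sketch below, the gene names and expression profiles are invented purely for illustration.

```python
# A toy sketch of "same genome, different expression": every cell type carries
# the same gene list, but each expresses only a subset. Gene names are made up.
GENOME = {"geneA", "geneB", "geneC", "geneD", "geneE"}

EXPRESSED = {                      # hypothetical expression profiles
    "liver cell": {"geneA", "geneB"},
    "heart cell": {"geneB", "geneC"},
    "neuron":     {"geneC", "geneD", "geneE"},
}

for cell_type, genes in EXPRESSED.items():
    assert genes <= GENOME         # every profile is a subset of the same genome
    print(cell_type, "expresses", sorted(genes))
```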

  Your genome and mine are almost identical, conforming almost exactly to the sequence that was found by the Human Genome Project. But there are also slight differences, and the field of genomics is developing faster and cheaper technologies for detecting them. Sometimes the differences reside in single letters, while other times a longer stretch of letters is deleted or duplicated. If a genomic difference alters a gene, we can make a guess about the consequences if we know the function of the protein encoded by the gene.

 
