
Connectome


by Sebastian Seung


  Recall that the memory of your first kiss included your sweetheart’s mother calling for you to have a glass of lemonade. Let’s say you have another memory involving lemonade, from the hot summer day when you sat in front of your house and sold ice-cold lemonade in paper cups to passersby. This memory is different from that of your first kiss, but they have lemonade in common, so their cell assemblies overlap in the “lemonade neuron,” as shown in Figure 21. (The double-headed arrows represent synapses going in both directions.) The danger of overlap is obvious: Activating one of these cell assemblies might also ignite the other. The magnolia smell might activate a mishmash of two memories, a confused combination of your first kiss and the lemonade stand. This scenario could be a cause of inaccurate memory recall more generally.

  Figure 21. Overlapping cell assemblies

  To prevent the indiscriminate spread of activity, the brain could give each neuron a high threshold for activation. Let’s suppose that a neuron is not activated unless it receives at least two “yes” votes from its advisors. Since the cell assemblies of Figure 21 overlap only in a single neuron, activity will not spread from one to the other.

  But the protection mechanism of a high threshold has its own pitfall. It also makes the criterion for recalling a memory more stringent. Because of it, activation of at least two neurons in a cell assembly is necessary to cause recollection of the entire memory. The magnolia smell alone would not be enough to trigger recall of your first kiss. It would have to be accompanied by the sound of a plane overhead, or some other stimulus that was part of your first kiss.

  Whether the brain should be that selective about recollection depends on the details of the situation. But it’s clear that activity might sometimes fail to spread even when it should. This could be the cause of another common complaint about memory, the failure to recall anything at all. (It doesn’t explain the tantalizing “tip-of-your-tongue” feeling, but it could explain the failure that causes the feeling.) So I imagine the brain’s memory systems as balanced on a knife edge. Too much spread of activity leads to confused recall, while too little causes no recall. This could be one reason why memory can never function perfectly, no matter how much we wish it did.
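  The knife-edge behavior described above can be made concrete in a toy simulation. Everything here is illustrative: the neuron names, the contents of the two assemblies, and the threshold values are assumptions for the sketch, not details taken from the book's figures.

```python
# Toy model of spreading activation in two overlapping cell assemblies.
# Neuron names and assembly contents are illustrative assumptions.

KISS = {"magnolia", "plane", "lemonade"}      # first-kiss assembly
STAND = {"lemonade", "paper_cup", "hot_sun"}  # lemonade-stand assembly

# Bidirectional synapses: within an assembly, every neuron connects to every other.
partners = {}
for assembly in (KISS, STAND):
    for a in assembly:
        for b in assembly:
            if a != b:
                partners.setdefault(a, set()).add(b)

def recall(cues, threshold):
    """Spread activity until stable: a silent neuron fires once at least
    `threshold` of its synaptic partners are active ("yes" votes)."""
    active = set(cues)
    changed = True
    while changed:
        changed = False
        for neuron, pre in partners.items():
            if neuron not in active and len(pre & active) >= threshold:
                active.add(neuron)
                changed = True
    return active

# Threshold 1: the magnolia cue leaks through the shared "lemonade neuron"
# and ignites both assemblies -- confused recall.
print(sorted(recall({"magnolia"}, threshold=1)))

# Threshold 2: one cue alone fails to trigger recall, but two cues
# ignite exactly the first-kiss assembly and nothing more.
print(sorted(recall({"magnolia"}, threshold=2)))
print(sorted(recall({"magnolia", "plane"}, threshold=2)))
```

  With threshold 1, the single cue activates all five neurons; with threshold 2, it activates nothing beyond itself, and the pair of cues recalls only the first-kiss assembly — the trade-off described above.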

  The amount of overlap between cell assemblies depends on how many we try to jam into the network. Clearly, the overlap will become large if we try to store too many memories. At some point there will no longer be any value of the threshold that both allows recollection and prevents confusion. This catastrophe of information overload sets the network’s maximum capacity for storing memories.

  In the cell assembly, all neurons make synapses on all other neurons, so any part of the memory can trigger recall of the rest. A photo of your sweetheart might trigger recollection of his house, and a visit to his house might trigger recollection of him. Recollection is bidirectional in this case, but there are also cases in which it has a unique direction, as in a memory that is essentially a story, a sequence of events unfolding in a particular chronological order. How do we account for that? The obvious answer is to arrange the synapses so that activity can flow in one direction. In the synaptic chain shown in Figure 22, activity spreads from left to right.

  Figure 22. A synaptic chain
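  A synaptic chain like the one in Figure 22 can be sketched the same way. The event names below are invented for illustration; the point is only that one-way connections make recall replay forward but never backward.

```python
# One-way synapses: each neuron drives only the next neuron in the chain,
# so a cue replays later events in order but cannot reach earlier ones.
# Event names are illustrative assumptions.
next_neuron = {
    "doorbell": "mother_calls",
    "mother_calls": "lemonade",
    "lemonade": "kiss",
}

def replay(cue):
    """Follow the directed synapses from the cued event to the chain's end."""
    sequence = [cue]
    while sequence[-1] in next_neuron:
        sequence.append(next_neuron[sequence[-1]])
    return sequence

print(replay("mother_calls"))  # events after the cue replay; earlier ones do not
```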

  Let me summarize this theory of recollection. Ideas are represented by neurons, associations of ideas by connections between neurons, and a memory by a cell assembly or synaptic chain. Memory recall happens when activity spreads after ignition by a fragmentary stimulus. The connections of a cell assembly or synaptic chain are stable over time, which is how a childhood memory can persist into adulthood.

  The psychological component of this theory is known as associationism, a school of thought that began with Aristotle and was later revived by English philosophers such as John Locke and David Hume. By the late nineteenth century, neuroscientists had recognized the existence of fibers in the brain and were speculating about pathways and connections. It was only logical to suppose that physical connections are the material basis of psychological associations.

  The theory of connectionism was developed by several generations of researchers in the second half of the twentieth century. Over the decades, it was dogged by a persistent set of critiques. As early as 1951, Karl Lashley, the originator of cortical equipotentiality, had published a withering attack in his famous paper “The Problem of Serial Order in Behavior.” His first critique was rather obvious: The brain can generate a seemingly infinite variety of sequences. A synaptic chain might be ideal for reciting a poem, for generating the same sequence of words every time, but doesn’t seem appropriate for normal language, in which the same sentence is rarely ever repeated exactly.

  This first concern of Lashley’s is fairly easy to address. Imagine a synaptic chain that diverges into two chains, like a fork in the road. These two chains could diverge into four, and so on. If there are many branch points in a network, it could potentially generate a huge variety of activity sequences. The trick here is to make sure that activity always “chooses” one branch or the other, but not both. Theorists have shown this can be done through inhibitory neurons that are wired up to make the branches “compete” with each other.
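  One way to picture such a branching chain is the sketch below. The inhibitory competition is modeled abstractly — a random choice stands in for whichever branch happens to fire first and suppress the other — and the node names and two-branch layout are assumptions for illustration.

```python
import random

# A chain that forks into two sub-chains. At the fork, mutual inhibition
# is abstracted as a winner-take-all choice: exactly one branch continues
# and the other stays silent. Names and layout are illustrative.

fork = {"start": ["left_1", "right_1"]}
chain = {"left_1": "left_2", "right_1": "right_2"}

def generate(node, rng):
    """Run activity through the network, choosing one branch at each fork."""
    sequence = [node]
    while True:
        if node in fork:
            node = rng.choice(fork[node])  # one branch wins, never both
        elif node in chain:
            node = chain[node]
        else:
            return sequence
        sequence.append(node)

rng = random.Random(0)
for _ in range(3):
    print(generate("start", rng))  # each run follows a single coherent branch
```

  With many such forks, a network of fixed connections can produce a combinatorially large variety of sequences, which is the gist of the reply to Lashley's first critique.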

  Lashley’s second, more fundamental critique focused on the problem of syntax. A synaptic chain uses connections to represent the association of one idea with the next in the sequence. Lashley pointed out that generating a grammatical sentence is not so simple, because “each syllable in the series has associations not only with adjacent words in the series, but also with more remote words.” Whether the end of a sentence is correct may depend on the exact arrangement of words at the beginning of the sentence. Lashley’s ideas prefigured the later emphasis of the linguist Noam Chomsky and his many followers on the problem of syntax.

  Connectionists have also addressed Lashley’s second critique, though a discussion of this research is outside the scope of this book. In any case, researchers have shown that connectionism is not as limited as its critics initially believed. I don’t think it’s possible to reject the doctrine on purely theoretical grounds; it needs to be tested empirically, and connectomics can be used to do that, as I’ll explain later.

  But first let me complete the theory. The hypothesis that synapses are the material basis of associations and that recollections arise from cell assemblies and synaptic chains is only half the story. It’s time to confront a question I’ve postponed until now: How is a memory stored in the first place?

  5. The Assembly of Memories

  The Great Pyramid of Giza has stood for forty-five hundred years, an island of eternity in the shifting desert sands near Cairo. Its massive form invites awe, but just one of its large blocks is imposing enough. No one knows for sure how the two-and-a-half-ton stones were cut at the quarry, transported to the site, and lifted up to 140 meters off the ground. If construction took twenty years, as the ancient Greek historian Herodotus estimated, the 2.3 million blocks were placed at the staggering rate of one every minute.

  The Egyptian pharaoh Khufu built the Great Pyramid to serve as his tomb. If we were not separated from the suffering of one hundred thousand workers by the cool distance of history, we might condemn the pyramid as a cruel display of power by an egotistical despot. But perhaps it is better to forgive Khufu and simply marvel at the fantastic accomplishment of these nameless workers. We can regard the pyramid not as a monument to the pharaoh but as a testament to human ingenuity.

  Khufu’s strategy was straightforward: If you want to be remembered forever, build a massive structure out of material durable enough to survive the ravages of time. By the same token, perhaps the brain’s ability to remember depends on the persistence of its material structure. What else could account for the indelibility of memories that last an entire lifetime? Then again, we sometimes forget or misremember, and we add new memories every day. That’s why Plato compared memory to another kind of material, one more flexible than the pyramid’s stone blocks:

  There exists in the mind of man a block of wax. . . . Let us say that this tablet is a gift of Memory, the mother of the Muses; and that when we wish to remember anything . . . we hold the wax to the perceptions and thoughts, and in that material receive the impression of them as from the seal of a ring.

  In the ancient world, wooden boards coated with wax were a common sight, functioning much like our modern-day notepads. A sharp stylus was used to write text or draw diagrams in the wax. Afterward, a straight-edged instrument smoothed the wax, erasing the tablet for its next use. As an artificial memory device the wax tablet served as a natural metaphor for human memory.

  Plato did not mean, of course, that your skull is literally filled with wax. He imagined some analogue—a material that could hold its shape and could also be reshaped. Artisans and engineers mold “plastic” materials and hammer or press “malleable” ones. Likewise, we say that parents and teachers mold young minds. Could that be more than metaphor? What if education and other experiences literally reshape the material structure of the brain? People often say that the brain is plastic or malleable, but what exactly does this mean?

  Neuroscientists have long hypothesized that the connectome is the analogue of Plato’s wax tablet. Neural connections are material structures, as we’ve seen from electron microscope images. Like wax, they are stable enough to remain the same for long periods of time, but they are also plastic enough to change.

  One important property of a synapse is its strength, its weight in the vote conducted by a neuron when “deciding” whether to spike. It’s known that synapses can strengthen and weaken; you can think of such changes as reweighting. What exactly happens at a synapse when it strengthens? The discoveries of the many neuroscientists who are investigating this question could fill an entire book. Here I’ll only give a simplistic answer, one that the phrenologists would have liked: Synapses strengthen by getting bigger. Recall that there are neurotransmitter vesicles on one side of the synaptic cleft, and neurotransmitter receptors on the other. A synapse strengthens by creating more of both. To release more neurotransmitter in each secretion, it amasses more vesicles. To be more sensitive to a given amount of neurotransmitter, it deploys more receptors.

  Synapses can also be created and eliminated, a phenomenon I’ll call reconnection. It has long been known that young brains create synapses in droves as neurons connect themselves into a network. The creation of a synapse happens at a point of contact between two neurons. For reasons that are not well understood, vesicles, receptors, and other types of synaptic machinery aggregate at this point. Young brains eliminate synapses as well, by removing such molecular machinery from contact points.

  In the 1960s most neuroscientists believed that synapse creation and elimination ceased by adulthood. Their belief was based on theoretical preconceptions rather than empirical evidence. Maybe they thought of brain development as resembling the construction of an electronic device: We have to connect a lot of wires to build the device, but we never reconnect them differently after it becomes operational. Or maybe they thought of synapse strength as being easy to modify, like computer software, but considered the synapses themselves to be fixed like hardware.

  In the last ten years neuroscientists have done an about-face. It is now widely accepted that synapses are created and eliminated even in adult brains. Convincing evidence was finally obtained directly, by watching synapses in living brains using a new imaging method known as two-photon microscopy. The images in Figure 23 show a dendrite in the cortex of a mouse changing over the course of two weeks. (The day is indicated by the number in the lower left of each image.)

  Figure 23. Evidence for reconnection: spines appearing and disappearing on a dendrite in the cortex of a mouse

  The dendrite bears thornlike protuberances known as spines. Most synapses between excitatory neurons are made onto spines rather than onto the shaft of the dendrite. In the figure, some spines are stable for the whole two weeks, but others appear (for example, look at the spine indicated by the arrowhead) and disappear (see the starred spine). This is good evidence that synapses are being created and eliminated. Researchers still debate how frequently such reconnection happens, but all agree that it is possible.

  Why are reweighting and reconnection so important? These two types of connectome change continue to happen for our entire lives. We must study them if we want to understand personal change as a lifelong phenomenon. No matter how old we get, we never stop storing new memories, barring some kind of brain disorder. As we age, we may complain that it’s more difficult to learn, but even the elderly can acquire new skills. It seems likely that reweighting and reconnection are involved in such changes.

  But do we have any proof? Evidence implicating reweighting in memory storage has come from Eric Kandel and his collaborators, who studied the nervous system of Aplysia californica, a squishy creature found in tide pools of California beaches. This animal retracts its gill and siphon when disturbed, and can become more or less sensitive to disturbances—a simple kind of memory. We previously learned that such behaviors depend on neural pathways from sense organs to muscles. Kandel identified a single connection in the relevant pathway and showed that changes in its strength were related to the simple memory mentioned above.

  Is reconnection involved in memory storage? Earlier I mentioned the phrenological idea of learning as thickening of the cortex. In the 1970s and 1980s, William Greenough and other researchers found evidence that such thickening was caused by an increase in the number of synapses. Their findings—which were made by counting synapses in the thickened cortex of rats who had been raised in enriched cages—led some to propose a neo-phrenological theory: Memories are stored by creating synapses.

  Neither of these approaches truly succeeded in elucidating memory storage, however. Kandel’s approach has faltered for brains more like our own, in which memories do not appear localized to single synapses. It seems more probable that memories are stored as patterns of many connections. Greenough’s approach is also incomplete, because counting synapses does not tell us how they are organized into patterns. Furthermore, increases in synapse number, like cortical thickening, are correlated with learning, but it’s not clear whether they are causally related.

  To really crack the problem of memory, we need to figure out whether reweighting and reconnection are involved, and if so, exactly how. Earlier I explained the theory that the patterns of connection relevant for memory are cell assemblies and synaptic chains. Here I’ll take a further step and propose that these patterns are created by reweighting and reconnection, and I’ll explore the many questions that arise. Are these two processes independent, or do they work together? Why would the brain use both rather than just one? Can we explain some limitations of memory as malfunctions of these storage processes?

  Beyond satisfying our basic curiosity about memory, research on reweighting and reconnection could have practical consequences. Suppose that your goal is to develop a drug that improves memory storage. If you believe neo-phrenology, you might try to develop a drug that enhances the molecular processes involved in synapse creation. But if neo-phrenology is wrong—as it most likely is—your creation of more synapses might have effects very different from what you intended. More generally, whether we want to improve our memory abilities or prevent them from malfunctioning, knowledge about the basic mechanisms will be essential.

  We’ve seen how a cell assembly might retain associations between ideas as connections between neurons. But how does the brain create a cell assembly in the first place? This is the connectionist version of a much older question posed by philosophers: Where do ideas and their associations come from? While some might be innate, it’s clear that others must be learned from experience.

  Over the ages, philosophers came up with a list of principles by which associations can be learned. At the top of the list is coincidence, sometimes called contiguity in time or place. If you see photos of a pop singer with her baseball-player boyfriend, you will learn an association between them. A second factor is repetition. Seeing these celebrities together just once might not be enough to create the association in your mind, but if you see them ad nauseam day after day in every magazine and newspaper, you will not be able to avoid learning the association. Ordering in time also seems important for some associations. As a child you recited the letters of the alphabet repeatedly until you knew them by heart. You learned the association from each letter to the next, since the letters always followed one another in the same sequence. In contrast, the association between the pop singer and her boyfriend will be bidirectional, since they always appear simultaneously.

  So philosophers proposed that we learn to associate ideas when one repeatedly accompanies or succeeds another. This inspired connectionists to conjecture:

  If two neurons are repeatedly activated simultaneously, then the connections between them are strengthened in both directions.

  This rule of plasticity is appropriate for learning two ideas that repeatedly occur together, like the pop singer and her boyfriend. For learning associations between sequential ideas, connectionists proposed a similar rule:

  If two neurons are repeatedly activated sequentially, the connection from the first to the second is strengthened.
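  The two rules can be sketched as updates to a table of synaptic weights. The weight increment and the neuron names below are illustrative assumptions, not quantities from the text.

```python
# Sketch of the two plasticity rules above as updates to a weight table.
# The increment size and the neuron names are illustrative assumptions.

weights = {}  # (presynaptic, postsynaptic) -> connection strength

def coactivate(a, b, increment=1):
    """Simultaneous activation: strengthen the connection in both directions."""
    weights[(a, b)] = weights.get((a, b), 0) + increment
    weights[(b, a)] = weights.get((b, a), 0) + increment

def activate_in_sequence(first, second, increment=1):
    """Sequential activation: strengthen only the first-to-second connection."""
    weights[(first, second)] = weights.get((first, second), 0) + increment

# Repetition builds a strong bidirectional association, as with the
# pop singer and her boyfriend...
for _ in range(5):
    coactivate("singer", "boyfriend")

# ...while a repeated sequence builds one-way associations, as with
# the letters of the alphabet.
for letter_pair in [("A", "B"), ("B", "C")] * 5:
    activate_in_sequence(*letter_pair)

print(weights[("singer", "boyfriend")], weights[("boyfriend", "singer")])
print(weights[("A", "B")], ("B", "A") in weights)
```

  After training, the singer–boyfriend connection is equally strong in both directions, while the letter connections exist only in the forward direction — which is why you can recite the alphabet forward far more easily than backward.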

 
