Connectome


by Sebastian Seung


  In both rules, by the way, it’s assumed that the strengthening is permanent or at least long-lasting, so that the association can be retained in memory.

  The sequential version of the rule was hypothesized by Donald Hebb, who also proposed the cell assembly in his 1949 book, The Organization of Behavior. Both simultaneous and sequential versions have come to be known as Hebbian rules of synaptic plasticity. Both are said to be “activity-dependent,” because plasticity is triggered by the activity of the neurons involved in the synapse. (There are other ways of inducing synaptic plasticity that do not involve activity, such as the application of certain drugs.) Typically, Hebbian plasticity refers only to synapses between excitatory neurons.
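  To make the two rules concrete, here is a minimal sketch in Python. The weight increment and the binary spiking variables are illustrative assumptions, not quantities from the experiments discussed below:

```python
def hebbian_simultaneous(w, pre_spiked, post_spiked, lr=0.1):
    """Simultaneous rule: if the two neurons spike together,
    strengthen the synapse between them."""
    if pre_spiked and post_spiked:
        w += lr
    return w

def hebbian_sequential(w, pre_spiked_first, post_spiked_next, lr=0.1):
    """Sequential rule: if the presynaptic neuron's spike is followed
    by a postsynaptic spike, strengthen the synapse."""
    if pre_spiked_first and post_spiked_next:
        w += lr
    return w

w = 0.2
w = hebbian_simultaneous(w, pre_spiked=True, post_spiked=True)
print(round(w, 2))   # 0.3: repeated pairing would strengthen the association further
```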

  Hebb was way ahead of his time. Neuroscientists had no means of detecting synaptic plasticity. In fact, they could not even measure synaptic strengths at all. Measurements of spiking had been conducted for decades using metal wires inserted into the nervous system. Since the tip of the wire remained outside the neuron, this method was known as “extracellular” recording. The signals from the wire carried the spikes of several neurons, mixing them together like conversations in a crowded bar. This method, still in use today, is the one that was employed by Itzhak Fried and his collaborators to find the “Jennifer Aniston neuron.” By carefully maneuvering the tip of the wire, it’s possible to isolate the spikes of a single neuron, much as you do when you stick your ear close to the mouth of one of your friends at the bar.

  While extracellular recording was sufficient for detecting spikes, it failed to measure the weak electrical effects of individual synapses. This was first accomplished in the 1950s by inserting a glass electrode with an extremely sharp tip into a single neuron. Such “intracellular” recording is so precise that it can detect signals much weaker than spikes, the equivalent of sticking your ear inside the mouth of a speaker at a bar. An intracellular electrode can also be used to stimulate a neuron to spike, by injecting electrical current into the neuron.

  To measure the strength of a synapse from neuron A to neuron B, we insert electrodes into both neurons; we stimulate neuron A to spike, which causes the synapse to secrete neurotransmitter; and we measure the voltage of neuron B, which responds with a blip. The size of this blip is the strength of the synapse.
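  As a toy illustration of this measurement, the sketch below models neuron B as a leaky integrator and treats the peak of the voltage blip as the synapse’s strength. The time constant, units, and the way A’s spike injects current are all invented for illustration:

```python
import numpy as np

def measure_synaptic_strength(w, tau=10.0, dt=0.1, t_max=100.0):
    """Toy intracellular experiment: stimulate neuron A to spike at t=0,
    then record neuron B's voltage. The synapse kicks B's voltage up by
    an amount set by the weight w; the peak of the blip is the measured
    strength. All constants are illustrative, not physiological."""
    steps = int(t_max / dt)
    v = 0.0                          # B's voltage relative to rest
    trace = np.empty(steps)
    for i in range(steps):
        dv = -v / tau                # leak: voltage decays back to rest
        if i == 0:
            dv += w / dt             # A's spike releases neurotransmitter
        v += dv * dt
        trace[i] = v
    return trace.max()               # the size of the blip

print(measure_synaptic_strength(w=0.5))   # 0.5 in this toy model
```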

  Along with measuring a synapse’s strength, we can also measure changes in its strength. To induce Hebbian plasticity, we stimulate spiking in a pair of neurons. Repeated stimulation, either sequential or simultaneous, has been shown to strengthen synapses in accordance with the two versions of the Hebbian rule given earlier.

  After a change in synaptic strength has been induced, it can last for the rest of the experiment—a few hours at most, as it’s not easy to keep the neurons alive after they’ve been penetrated with electrodes. But cruder experiments involving populations of neurons and synapses, first done in the 1970s, suggest that changes in synaptic strength can last for weeks or longer. The issue of persistence is critical if Hebbian plasticity is to be the mechanism of memory storage, as some memories can last for a lifetime.

  These experiments from the 1970s provided the first evidence for synaptic strengthening. By that time a theory of memory storage had also emerged, based on Hebb’s original ideas. In the simplest version of the theory, a network starts out with weak synapses in both directions between the neurons of every pair. This assumption will turn out to be problematic, but let’s accept it for now, for the purpose of introducing the theory.

  Return to the scene of your first kiss, the actual event that imprinted your memory. The “magnolia neuron,” the “red brick house neuron,” the “sweetheart neuron,” the “plane neuron,” and so on were being activated by the stimuli around you—quite vigorously, I imagine. If we assume the simultaneous version of the Hebbian rule, all this spiking strengthened the synapses between these neurons.

  The strengthened synapses together constitute a cell assembly, if we redefine this concept to mean a set of excitatory neurons mutually interconnected by strong synapses. Our original definition didn’t have this stipulation. We need it now because the network contains many weak synapses that do not belong to the cell assembly. They existed before your first kiss, and remained unchanged afterward.

  The weak synapses have no effect on recollection. Activity spreads from neuron to neuron within the cell assembly but does not spread any farther, because synapses from the cell assembly to other neurons are too weak to activate them. Thus the new definition of a cell assembly functions just as the old one did.
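  A minimal simulation can show both halves of this argument: simultaneous spiking strengthens the synapses among the assembly neurons, and recall activity then spreads through strong synapses but stops at weak ones. All the numbers here (initial weight, increment, threshold) are illustrative assumptions:

```python
import numpy as np

n = 20                            # neurons in the toy network
assembly = [0, 1, 2, 3]           # "magnolia," "red brick house," ... neurons

# The all-to-all assumption: weak synapses between every pair.
W = np.full((n, n), 0.05)
np.fill_diagonal(W, 0.0)

# The first kiss: the assembly neurons spike together, repeatedly,
# and the simultaneous Hebbian rule strengthens the synapses among them.
for _ in range(10):
    for i in assembly:
        for j in assembly:
            if i != j:
                W[i, j] += 0.1

# Recollection: activate one neuron and let spiking spread through
# any synapse stronger than the firing threshold.
threshold = 0.5
active = {0}                      # a whiff of magnolia
for _ in range(n):
    active |= {j for i in active for j in range(n) if W[i, j] > threshold}

print(sorted(active))             # [0, 1, 2, 3]: activity stays inside the assembly
```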

  An analogous theory applies for the synaptic chain. Suppose that a sequence of stimuli activates a sequence of ideas. Each idea is represented by the spiking of a group of neurons. If the groups spike in this sequence repeatedly, the sequential version of the Hebbian rule will strengthen all existing synapses from neurons in each group to neurons in the next group. This is a synaptic chain, if we redefine this concept to mean a pattern of strong connections.

  If the connections are sufficiently strong, then the spiking will propagate through the chain without any need for a sequence of external stimuli. Any stimulus that activates the first group of neurons will trigger the recollection of a sequence of ideas, as described in Chapter 4. Every successive recollection of the sequence will further strengthen the connections of the chain by Hebbian plasticity. This is analogous to the way that the flowing water of a stream slowly deepens its bed, making it even easier for the water to flow.
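  The sketch below illustrates this “stream bed” effect under assumed numbers: a chain whose links are just strong enough to propagate activity, with each successful replay strengthening every link it crosses:

```python
import numpy as np

n_groups, group_size = 5, 2      # a short chain of ideas, two neurons per group
w = np.full(n_groups - 1, 0.3)   # weight from each group to the next (assumed)
threshold, lr = 0.5, 0.1

def replay(w):
    """Activate the first group and propagate spiking down the chain.
    Each successful hop strengthens that link (sequential Hebbian rule)."""
    reached = 1
    for k in range(len(w)):
        if w[k] * group_size > threshold:   # summed input from previous group
            w[k] += lr                      # recollection deepens the stream bed
            reached += 1
        else:
            break
    return reached

for trial in range(3):
    print("groups reached:", replay(w), "weights:", np.round(w, 2))
```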

  While it’s important to remember things, it’s also vital to forget. At one time your Jennifer Aniston and Brad Pitt neurons were linked by strong synapses into a cell assembly. But one day you started to see Brad with Angelina. (I know it was sad, but I hope you didn’t feel too devastated.) Hebbian plasticity strengthened the connections between your Brad and Angelina neurons, creating a new cell assembly. What happened to the connections between your Brad and Jen neurons?

  You could imagine an analogue of the Hebbian rule serving the function of forgetting. Perhaps the connections between two neurons are weakened if one is repeatedly active while the other is inactive. This would weaken the synapses between Brad and Jen every time you saw him without her.
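  A sketch of this hypothetical forgetting rule, with an assumed decrement:

```python
def weaken_if_unpaired(w, pre_active, post_active, lr=0.05):
    """Hypothesized forgetting rule: weaken the connection whenever one
    neuron fires while the other is silent (seeing Brad without Jen)."""
    if pre_active != post_active:          # exactly one of the two fired
        w = max(0.0, w - lr)
    return w

w_brad_jen = 1.0
for _ in range(10):                        # ten sightings of Brad without Jen
    w_brad_jen = weaken_if_unpaired(w_brad_jen, pre_active=True, post_active=False)
print(round(w_brad_jen, 2))                # 0.5: the association fades
```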

  Alternatively, one can imagine that weakening is caused by direct competition between synapses. Perhaps the synapses between Brad and Angelina directly compete with those between Brad and Jen for some foodlike substance that synapses need in order to survive. If some synapses strengthen, they consume more of the substance, leaving less for the others, which grow weak. It’s not clear whether such substances exist for synapses, but analogous “trophic factors” are known to exist for neurons. Nerve growth factor is one example; its discovery won Rita Levi-Montalcini and Stanley Cohen a 1986 Nobel Prize.
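  One simple way to model such competition, purely as an illustration, is to hold a neuron’s total synaptic strength fixed, so that strengthening some synapses necessarily drains the others. This normalization is a common modeling device, not a claim about how real trophic-like substances work:

```python
import numpy as np

def strengthen_with_competition(w, winners, lr=0.2):
    """Strengthen the 'winner' synapses, then rescale all weights so their
    sum stays fixed, as if drawing on a limited nourishing substance."""
    total = w.sum()
    w[winners] += lr
    return w * (total / w.sum())           # losers shrink as winners grow

w = np.array([0.5, 0.5])                   # [Brad-Jen, Brad-Angelina]
for _ in range(10):
    w = strengthen_with_competition(w, winners=[1])
print(np.round(w, 3))                      # Brad-Angelina grows, Brad-Jen withers
```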

  The Romans used the phrase tabula rasa to refer to the wax tablets mentioned by Plato. It’s traditionally translated as “blank slate,” since little chalkboards replaced wax tablets in the eighteenth and nineteenth centuries. In “An Essay Concerning Human Understanding,” the associationist philosopher John Locke resorted to yet another metaphor:

  Let us then suppose the mind to be, as we say, white paper, void of all characters, without any ideas. How comes it to be furnished? Whence comes it by that vast store which the busy and boundless fancy of man has painted on it with an almost endless variety? Whence has it all the materials of reason and knowledge? To this I answer, in one word, from experience.

  A sheet of white paper contains zero information but unlimited potential. Locke argued that the mind of a newborn baby is like white paper, ready to be written on by experience. In our theory of memory storage, we assumed that all neurons started out connected to all other neurons. The synapses were weak, ready to be “written on” by Hebbian strengthening. Since all possible connections existed, any cell assembly could be created. The network had unlimited potential, like Locke’s white paper.

  Unfortunately for the theory, the assumption of all-to-all connectivity is flagrantly wrong. The brain is actually at the opposite extreme of sparse connectivity. Only a tiny fraction of all possible connections actually exist. A typical neuron is estimated to have tens of thousands of synapses, far fewer than the total of 100 billion neurons in the brain. There’s a very good reason for this: Synapses take up space, as do the neurites they connect. If every neuron were connected to every other neuron, your brain would swell in volume to a fantastic size.
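  The arithmetic behind this claim is worth making explicit, using the round numbers from the text:

```python
neurons = 100e9               # ~100 billion neurons in the brain
synapses_per_neuron = 10e3    # "tens of thousands" of synapses each

fraction = synapses_per_neuron / neurons
print(f"fraction of possible partners actually connected: {fraction:.0e}")
# -> 1e-07: a typical neuron reaches about one in ten million of the others
```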

  So the brain has to make do with a limited number of connections. This could present a serious problem when you are learning associations. What if your Brad and Angelina neurons had not been connected at all? When you started seeing them together, Hebbian plasticity could not have succeeded in linking the neurons into a cell assembly. There is no potential to learn an association unless the right connections already exist.

  Especially if you think a lot about Brad and Angelina, it’s likely that each is represented by many neurons in your brain, rather than just one. (In Chapter 4 I argued that this “small percentage” model is more plausible than the “one and only” model.) With so many neurons available, it’s likely that a few of your Brad neurons happen to be connected to a few of your Angelina neurons. That might be enough to create a cell assembly in which activity can spread from Brad neurons to Angelina neurons during recollection, or vice versa. In other words, if every idea is redundantly represented by many neurons, Hebbian learning can work in spite of sparse connectivity.
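  A quick calculation shows why redundancy helps. If each idea is represented by k neurons, there are k × k possible Brad-to-Angelina synapses, and the chance that none of them exists shrinks rapidly as k grows. The connection probability p below is an illustrative assumption:

```python
p = 1e-3                 # assumed chance that any given pair of neurons is connected
for k in (1, 10, 50, 100):        # neurons representing each idea
    p_none = (1 - p) ** (k * k)   # no synapse among any of the k*k pairs
    print(f"k={k:4d}  P(at least one Brad-to-Angelina synapse) = {1 - p_none:.3f}")
```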

  Similarly, a synaptic chain can be created by Hebbian plasticity even if some connections are missing. Imagine removing the connection represented by the dashed arrow shown in Figure 24. This would break some pathways, but there would still be others extending from the beginning to the end, so the synaptic chain could still function. Each idea in the sequence is represented by only two neurons in the diagram, but adding more neurons would make the chain even more able to withstand missing connections. Again, a redundant representation enables learning to establish associations in spite of sparse connectivity.

  Figure 24. Elimination of a redundant connection in a synaptic chain

  The ancients already knew the paradoxical fact that remembering more information is often easier than remembering less. Orators and poets exploited this fact in a mnemonic technique called the method of loci. To memorize a list of items, they imagined walking through a series of rooms in a house and finding each item in a different room. The method may have worked by increasing the redundancy of each item’s representation.

  So sparse connectivity could be a major reason why we have difficulty memorizing information. Because the required connections don’t exist, Hebbian plasticity can’t store the information. Redundancy solves this problem somewhat, but could there be some other solution?

  Why not create new synapses “on demand,” whenever a new memory needs to be stored? We could imagine a variant of Hebb’s rule of plasticity: “If neurons are repeatedly activated simultaneously, then new connections are created between them.” Indeed this rule would create cell assemblies, but it conflicts with a basic fact about neurons: There is negligible crosstalk between electrical signals in different neurites. Let’s consider a pair of neurons that contact each other without a synapse. They could create one, but it’s implausible that this event could be triggered by simultaneous activity. Because there is no synapse, the neurons can’t “hear” each other or “know” they are spiking simultaneously. By similar arguments, the “on-demand” theory of creation doesn’t seem plausible for synaptic chains either.

  So let’s consider another possibility: Perhaps synapse creation is a random process. Recall that neurons are connected to only a subset of the neurons that they contact. Perhaps every now and then a neuron randomly chooses a new partner from its neighbors and creates a synapse. This may seem counterintuitive, but think about the process of making friends. Before you speak with someone, it’s almost impossible to know whether you should be friends. The initial encounter might as well be random—at a cocktail party, in the gym, or even on the street. Once you start to talk, you develop a sense of whether your relationship could strengthen into friendship. This process isn’t random, as it depends on compatibility. In my experience, people with the richest sets of friends are open to chance meetings but also very skilled at recognizing new people with whom they “click.” The random and unpredictable nature of friendship is a large part of its magic.

  Similarly, the random creation of synapses allows new pairs of neurons to “talk” with each other. Some pairs turn out to be “compatible,” because they are activated simultaneously or sequentially as the brain attempts to store memories. Their synapses are strengthened by Hebbian plasticity to create cell assemblies or synaptic chains. In this way, the synapses for learning an association can be created even if they don’t initially exist. We may eventually succeed at learning after failing at first, because our brains are continually gaining new potential to learn.

  Synapse creation alone, however, would eventually lead to a network that is wasteful. In order to economize, our brains would need to eliminate the new synapses that aren’t used for learning. Perhaps these synapses first become weaker by the mechanisms discussed earlier (recall what happens when you are unlearning the Brad–Jen connection), and the weakening eventually causes the synapses to be eliminated.

  You could think of this as a kind of “survival of the fittest” for synapses. Those involved in memories are the “fittest,” and get stronger. Those not involved get weaker, and are finally eliminated. New synapses are continually created to replenish the supply, so that the overall number stays constant. Versions of this theory, known as neural Darwinism, have been developed by a number of researchers, including Gerald Edelman and Jean-Pierre Changeux.
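  Here is a minimal sketch of that loop, with invented creation, strengthening, weakening, and elimination rates: synapses appear at random, grow if they happen to join co-active neurons, and otherwise wither until they are removed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
assembly = set(range(5))         # co-active neurons the brain "wants" to link

synapses = {}                    # (pre, post) -> weight
for step in range(2000):
    # Random creation: a new weak synapse between a random pair of neighbors.
    i, j = rng.integers(n, size=2)
    if i != j:
        synapses.setdefault((i, j), 0.2)

    # Hebbian reweighting: strengthen synapses between co-active neurons,
    # weaken all others, and eliminate any synapse that withers away.
    for (pre, post), w in list(synapses.items()):
        if pre in assembly and post in assembly:
            synapses[(pre, post)] = min(1.0, w + 0.05)
        else:
            w -= 0.01
            if w <= 0:
                del synapses[(pre, post)]
            else:
                synapses[(pre, post)] = w

survivors = [pair for pair, w in synapses.items() if w > 0.5]
print(len(synapses), "synapses alive;", len(survivors),
      "strong survivors, all inside the assembly")
```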

  The theory argues that learning is analogous to evolution. Over time, a species changes in ways that might seem intelligently designed by God. But Darwin argued that changes are actually generated randomly. We end up noticing only the good changes, because the bad ones are eliminated by natural selection, the “survival of the fittest.” Similarly, if neural Darwinism is correct, it might seem that synapses are “intelligently” created, that they are generated “on demand” only if needed for cell assemblies or synaptic chains. But in fact synapses are created randomly, and then the unnecessary ones are eliminated.

  In other words, synapse creation is a “dumb,” random process that endows the brain only with the potential for learning. By itself, the process is not learning, contrary to the neo-phrenological theory mentioned earlier. This is why a drug that increases synapse creation might be ineffective for improving memorization, unless the brain also succeeds at eliminating the larger number of unnecessary synapses.

  Neural Darwinism is still speculative. The most extensive studies of synapse elimination are by Jeff Lichtman, who has focused on the synapses from nerves to muscles. Early in development, connectivity starts out indiscriminate, with each fiber in a muscle receiving synapses from many axons. Over time, synapses are eliminated until each fiber receives synapses from just a single axon. In this case, synapse elimination refines connectivity, making it much more specific. Motivated to see this phenomenon more clearly, Lichtman has become a major proponent of superior imaging technologies—a topic I’ll return to in later chapters.

  Reconnection has also been studied in the cortex, through images of dendritic spines like those shown earlier in Figure 23. The researchers showed that most new spines disappear within a few days, but a larger fraction survives when the mouse is placed in an enriched cage like the ones Rosenzweig used. Both observations are consistent with the idea of “survival of the fittest”: New synapses survive only if they are used to store memories. The evidence is far from conclusive, however. It’s an important challenge for connectomics to reveal the exact conditions under which a new synapse survives or is eliminated.

  ***

  We’ve seen that the brain may fail to store memories if the required connections don’t exist. That means reweighting has limited capacity for storing information when connectivity is fixed and sparse. Neural Darwinism proposes that the brain gets around this problem by randomly creating new synapses to continually renew its potential for learning, while eliminating the synapses that aren’t useful. Reconnection and reweighting are not independent processes; they interact with each other. New synapses provide the substrate for Hebbian strengthening, and elimination is triggered by progressive weakening. Reconnection provides added capacity for information storage, compared with reweighting alone.

  A further advantage of reconnection is that it may stabilize memories. For a clearer understanding of stability it’s helpful to broaden the discussion. So far I’ve focused on the idea that synapses retain memories. I should mention, however, that there is evidence for another retention mechanism based on spiking. Suppose that Jennifer Aniston is represented not by a single neuron but by a group of neurons organized into a cell assembly. Once the stimulus of Jen causes these neurons to spike, they can continue to excite each other through their synapses. The spiking of the cell assembly is self-sustaining, persisting even after the stimulus is gone. The Spanish neuroscientist Rafael Lorente de Nó called this “reverberating activity,” because of its similarity to a sound that persists by echoing in a canyon or cathedral. Persistent spiking could explain how you can remember what you have just seen.
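  A toy network can illustrate reverberation: drive the assembly briefly with a stimulus, then withdraw it, and the mutual excitation (assumed weights and threshold below) keeps the neurons firing on their own:

```python
import numpy as np

k = 4                            # neurons in the Jen cell assembly
W = np.full((k, k), 0.4)         # strong mutual excitation (assumed)
np.fill_diagonal(W, 0.0)
threshold = 0.5

active = np.zeros(k)
for t in range(10):
    stimulus = 1.0 if t < 2 else 0.0          # Jen appears briefly, then leaves
    drive = W @ active + stimulus
    active = (drive > threshold).astype(float)
    print(f"t={t}  stimulus={stimulus:.0f}  firing neurons={int(active.sum())}")

# After the stimulus ends, recurrent input (3 partners x 0.4 = 1.2 > 0.5)
# keeps every neuron firing: the assembly "reverberates."
```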

  Judging from many experiments, such persistent spiking appears to retain information over time periods of seconds. There is good evidence, however, that retention of memories over long periods does not require neural activity. Some victims of drowning in icy water have been resuscitated after being effectively dead for tens of minutes. Even though their hearts had stopped pumping blood, the icy cold prevented permanent brain damage. The lucky ones recovered with little or no memory loss, despite the complete inactivity of their neurons while their brains were chilled. Any memories that were retained through such a harrowing experience cannot depend on neural activity.

 
