
Know This


edited by John Brockman


  The State of Brain Science

  Terrence J. Sejnowski

  Computational neuroscientist; Francis Crick Professor, Salk Institute; co-author (with Patricia Churchland), The Computational Brain

  The big news on April 2, 2013, was the announcement of the BRAIN Initiative from the White House; its goal is to develop innovative neurotechnology for understanding brain function. Grand challenges like this happen once every few decades, including the announcement in 1961 of the Apollo Program to land a man on the Moon, the War on Cancer in 1971, and the Human Genome Project in 1990. These were ten-to-fifteen-year national efforts bringing together the best and the brightest to attack a problem that could be solved only on a national scale.

  Why the brain? Brains are the most complex devices in the known universe and so far our attempts to understand how they work have fallen short. It will take a major international effort to crack the neural code. Europe weighed in earlier with the Human Brain Project and Japan later announced a Brain/MINDS project to develop a transgenic nonhuman primate model. China is also planning an ambitious brain project.

  Brain disorders are common and devastating. Autism, schizophrenia, and depression destroy personal lives and place an enormous economic burden on society. In this country, the annual cost of maintaining patients with Alzheimer’s disease is $200 billion and climbing as our population ages. Unlike heart disease and cancer, which often lead to rapid death, brain disorders can leave patients living with them for decades. The best efforts of drug companies to develop new treatments have failed. If we don’t find a better way to treat broken brains now, our children will suffer terrible economic consequences.

  A second reason to reach a better understanding of brains is to avert a catastrophic collapse of civilization, which is happening in the Middle East. The Internet has enabled terrorist groups to proliferate, and modern science has created existential threats ranging from nuclear weapons to genetic recombination. The most versatile weapon-delivery system is the human being. We need to better understand what happens in the brain of a suicidal terrorist planning maximum destruction.

  These motivations are based on brains behaving badly, but the ultimate scientific goal is to discover the basic principles of normal brain function. Richard Feynman once noted that “what I cannot create, I do not understand.” That is, if you can’t prove something to yourself, you don’t really understand it. One way to prove something is to build a device based on what you understand and see if it works. Once we have truly uncovered the principles of how the brain works, we should be able to build devices with similar capabilities. This will have a profound effect on every aspect of society, and the rise of artificial intelligence based on machine learning is a harbinger. Our brain is the paramount learning machine.

  These are the goals of the BRAIN Initiative, but its results may be different from our expectations. The goal of the Apollo Program was accomplished, but if the Moon was so important, why have we not gone back there? In contrast, building the technologies needed to reach the Moon has produced many benefits: a thriving satellite industry; advances in digital communications, microelectronics, and materials science; and a revamping of the science and engineering curriculum. The War on Cancer is still being fought, but the invention of recombinant DNA technology allowed us to manipulate the genome and created the biotechnology industry. The goal of the Human Genome Project was to cure human diseases, which we now know are not easily deciphered by reading the base pairs, but the sequencing of the human genome has transformed biology and created a genomic industry that fosters personalized, precision medicine.

  The impact of the BRAIN Initiative will be the creation of neurotechnologies that match the complexity of the brain. Genetic studies have uncovered hundreds of genes that contribute to brain disorders. Drugs have not been as effective in treating brain disorders as they have for heart diseases, because of the diversity of cell types in the brain and the complexity of the signaling pathways. The development of new neurotechnologies will create tools that can more precisely target the sources of brain disorders. Tools from molecular genetics and optogenetics are already giving us an unprecedented ability to manipulate neurons, and more powerful tools are on the way from the BRAIN Initiative.

  An important lesson from the history of national grand challenges is that there is no better way to invest in the future than focusing the best and brightest minds on an important problem and building the infrastructure needed to solve it.

  Nootropic Neural News

  George Church

  Professor of genetics, Harvard Medical School; director, Personal Genome Project; co-author (with Ed Regis), Regenesis

  The most accessed parts of the Internet focus on new news and old news via search engines and social-network news about shopping, pets, and humans—especially sportful and celebrity humans. What is the distinction between popularity and enduring importance?

  In remote indigenous peoples (300 million strong, including Kawahiva, Angu, Sentineli) and our primate relatives, the distinction seems small. In contrast, in our hypercivilization, the importance of survival has been decoupled from popularity. Our ancient starvation for sugar and fat has morphed today into nearly limitless ad libitum cardio-challenging doughnuts and steaks. Our instincts to reproduce can now be rechanneled into a wide variety of diversions. Practice for the hunt with rocks and spears has become inflated to fill 514 stadiums holding 40,000 to 220,000 spectators, with up to 4.8 billion viewers via electronics. Mild analgesic herbal medicines have become powerfully pure and addictive. Running toward (or away from) a predator-prey encounter has transformed into a market for fast cars, killing 1.2 million people per year (roughly equal to all humans alive 10,000 years ago).

  Our Darwinian drive to improve our survival relative to other species now includes augmentations that would be baffling to our ancestors—dodging asteroids via Mars colonies and handheld neural prosthetic supercomputers with two video cameras.

  The new news is that Greenpeace, KMP, and MASIPAG stand accused of “crimes against humanity” for blocking golden rice from 2002 to 2016 (including vandalizing safety-testing experiments), a crop that could save a million souls per year from vitamin-A deficiency.

  The old news, again (courtesy of the national academies of the U.S., U.K., and China), is that after forty years we still haven’t reached a consensus on whether we want embryo (germline) augmentation. But this is likely a moot point, since genetic and non-genetic adult augmentation represents hundredfold larger markets and much faster potential return on enhancement—weeks rather than decades; Web-warp-drive speed vs. human-generation speed. As with ancient (DNA) evolution, so too with new techno-cultural (r)evolution: Even a fractional-percent advantage grows exponentially, resulting in a swift and complete displacement of the old.

  We seek news of aging reversal and nootropics (memory and cognitive enhancers). We hunt down ways to get ahead of the FDA-EMA-CFDA curve, even risking the very youth and cognition we seek to extend. Loopholes in the global regulatory fabric include “natural” products, medical tourism, and the “practice of medicine” (including surgical procedures and stem-cell therapies).

  Our ability to prioritize and process the news is in an autocatalytic, positive feedback loop in which we extend our brain both biologically and electronically. Surgery could extend our brain capacity from 1.2 kg to 50 kg (routine head loads of the Sherpas of Nepal). The rate of growth of neural systems could be as fast as the doubling time of human cells (about one day) with differentiation from generic stem cells to complex neural nets recently engineered to occur in four days.

  With sufficiently intimate proximity of two or more kg-scale brains, the possibility of mind-backups might be closer than via cloning (which lacks neural copying) or via computer simulation (which requires deeper understanding than mere bio-copying and has a millionfold energy inefficiency relative to brains).

  The news is that we can measure and manipulate human neural development and activity with the exponentially improving “innovative neurotechnologies” (the last two letters of the BRAIN Initiative acronym). If (when) these augmentations begin to seriously help us process information, that would be mind-boggling and important news.

  Memory Is a Labile Fabrication

  Kate Jeffery

  Professor of behavioral neuroscience, Dept. of Psychology, University College London

  We used to think of memory as a veridical record of events past, like a videotape in our heads always on hand to be replayed. Of course, we knew memory to be far more fragile and incomplete than a real videotape: We forget things, and many events aren’t even stored in the first place. But when we replay our memories, we feel sure that what we do recall really happened. Indeed, our entire legal system is built on this belief.

  Three scientific discoveries in the past century have changed that picture: two some time ago, and one (the “news”) recent. Some time ago, we learned that memory is not a record so much as a reconstruction. We don’t recall events so much as reassemble them, and crucial aspects of the original event may get substituted: It wasn’t Georgina you ran into that day, it was Julia; it wasn’t Monte Carlo, it was Cannes; it wasn’t sunny, it was overcast (it rained later, remember?). Videotapes never do that—they get ragged and skip sections or lose information, but they don’t make things up.

  It has also been known since the 1960s that the act of reactivating a memory renders it temporarily fragile, or “labile.” In its labile state, a memory is vulnerable to disruption and might be stored again in altered form. In the laboratory, this alteration is usually a degradation induced by some memory-unfriendly agent, like a protein-synthesis inhibitor. We knew such drugs could affect the formation of memories, but surprisingly they can also disrupt a memory after it has been formed.

  The story doesn’t end there. Recently it has been shown that memories aren’t just fragile when they’ve been reactivated but can actually be deliberately altered. Using some of the amazing new molecular genetic techniques developed in the past three decades, we can identify which subset of neurons participated in the encoding of an event, and then experimentally reactivate only those specific neurons, so that the animal is forced (we believe) to recall the event. During this reactivation, scientists have been able to tinker with these memories so that they end up different from the originals. So far, these tinkerings have just involved changing emotional content—such that, for example, a memory of a place which was neutral becomes positive, or a positive one becomes negative, so that the animal subsequently seeks out or avoids those places. But we aren’t far from trying to write new events into these memories, and this will likely be achievable.

  Why would we evolve a disconcerting system like this? Why can’t memory be more like a videotape, so that we can trust it more? We don’t know the answer for sure yet, but evolution doesn’t care about veracity; it cares only about survival, and there’s usually a good reason for apparently odd design features.

  The advantages of the constructive nature of memory seem obvious: To remember every pixel of a life experience requires enormous storage capacity; it’s a far more economical use of our synapses to stockpile a collection of potential memory ingredients and simply record each event in the form of a recipe: Take a pinch of a Southern French beach, add a dash of old school friend, mix in some summer weather, etc. Many theoretical neuroscientists think the labile nature of memory may allow construction of supermemories (called semantic memories)—agglomerations of individual event-memories combined to form an overarching piece of knowledge about the world. After a few visits to the Mediterranean, you learn that it’s usually sunny, and so the odd incidence of overcast gloom gets washed out and fades from recollection. Your behavior thus becomes adapted not to a specific past event but to the general situation, and you know on holiday to pack sunscreen and not umbrellas.

  The fabricated, labile nature of memory is at once a reason for amazement and concern. It is amazing to think that the brain is constantly reassembling our past, and that the past is not really as we think it is. It is concerning because this constructed past seems extraordinarily real—almost as real as our present—and we base our behavior on it. Thus, an eyewitness will make confident assertions that lead to someone’s lifelong incarceration, and nobody worries about this except neuroscientists. It is also amazing/concerning that, as scientists and doctors, we are now on the threshold of memory editing—able to selectively alter a person’s life memories.

  The therapeutic potential of this is exciting—imagine being able to surgically reduce the pain of a traumatic memory! But these are technologies to use with care. In reaching into the brain and changing a person’s past, we may change who they are. However, one could argue that the fabricated and labile nature of our memories means that perhaps we aren’t really who we think we are anyway.

  The Continually New You

  Stephen M. Kosslyn

  Founding dean, Minerva Schools, Keck Graduate Institute; co-author (with G. Wayne Miller), Top Brain, Bottom Brain

  One of my undergraduate mentors, a senior scientist nearing the end of a long and distinguished career, once commented to me that even after an extraordinarily close marriage of more than fifty years, his wife could still say and do things that surprised him. I suspect he could have extended the observation: For better or worse, even after a lifetime of living, you can still learn something new and surprising about yourself. Who and what we are will always have an element of something new, simply because of how the brain works. Here’s why:

  How we respond to objects and situations we perceive or ideas we encounter depends on our current cognitive state, wherein different concepts are “primed.” Primed concepts are activated in our minds and influence how we interpret and respond to current situations. A huge literature now documents the effects of such priming.

  How we interpret new stimuli or ideas relies in part on chaotic processes. Here’s my favorite analogy for this: A raindrop dribbles down a windowpane. An identical raindrop, starting at the same spot on the window, will trace a different path. Even very small differences in the start state will affect the outcome (this is part and parcel of what it is to be a chaotic system). The state of the windowpane, which depends on ambient temperature, effects of previous raindrops, and other factors, is like the state of the brain at a particular point in time: Depending on what one has just encountered and what one was thinking and feeling, different concepts will be primed, and this priming will influence the effects of a new perception or idea.
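  The sensitivity the raindrop analogy describes is easy to see in numbers. The short Python sketch below (an illustration added here, not part of the original essay) iterates the logistic map, a textbook chaotic system, from two starting values that differ by one part in a million; within a few dozen steps their trajectories bear no resemblance to each other, just as two nearly identical raindrops trace different paths down the pane.

```python
# Illustrative sketch: sensitivity to initial conditions in the logistic map.
# The map x -> r * x * (1 - x) with r = 3.9 behaves chaotically, so two
# starting points that differ by one part in a million soon diverge completely.

def logistic_trajectory(x0, r=3.9, steps=30):
    """Iterate the logistic map and return the list of visited values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # one "raindrop"
b = logistic_trajectory(0.200001)   # an almost identical one

# After a few dozen iterations the two trajectories are unrelated,
# even though they started one part in a million apart.
for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```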

  With age and experience, the structure of information stored in long-term memory becomes increasingly complex. Hence, priming has increasingly nuanced effects, which become increasingly difficult to predict.

  In short, each of us grows as we age and experience more and varied situations and ideas, and we can never predict perfectly how we’ll react to a new encounter. Why not? What we understand about ourselves depends on what we paid attention to at the time events unfolded and on our imperfect conceptual machinery for interpreting ourselves. Our understanding of ourselves will not capture the subtle effects of the patterns of priming affecting our immediate perceptions, thoughts, and feelings. Thus, although we cannot be forever young, we can be indefinitely new—at least in part.

  So we should give others and ourselves some slack. We should be forgiving when friends surprise us negatively—the friends, too, may be surprised. And the same is true of us.

  Toddlers Can Master Computers

  Alison Gopnik

  Psychologist, UC Berkeley; author, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children

  In the last couple of years, toddlers and even babies have begun to be able to use computers. This may seem like the sort of minor news that shows up in the lifestyle section of the paper and in cute YouTube videos. But it actually presages a profound change in the way human beings live.

  Touch and voice interfaces have become ubiquitous only recently; it’s hard to remember that the iPhone is just eight years old. For grown-ups, these interfaces are a small additional convenience, but they transform the way young children interact with computers. For the first time, a toddler can directly control a smartphone or tablet.

  And they do. Young children are fascinated by these devices and remarkably good at getting them to do things. In recognition of this, in 2015 the American Academy of Pediatrics issued a new report about very young children and technology. For years the Academy had recommended that children younger than two should have no access to screens at all. The new report recognizes that this recommendation has become impracticable. It focuses instead, sensibly, on ensuring that when young children look at screens, they do so in concert with attentive adults, and that adults supervise what children see.

  But this isn’t just news for anxious parents; it’s important for the future of the human species. There is a substantial difference between the kind of learning we do as adults, or even as older children, and the kind of learning we do before we are five. For adults, learning mostly requires effort and attention; for babies, learning is automatic. Grown-up brains are more plastic than we once thought (neural connections can rewire), but very young brains are far more plastic; young children’s brains are designed to learn.

  In the first few years of life, we learn about the way the physical, biological, and psychological world work. Even though our everyday theories of the world depend on our experience, by the time we’re adults we simply take them for granted—they’re part of the unquestioned background of our lives. When technological, culturally specific knowledge is learned early, it becomes part of the background too. In our culture, children learn how to use numbers and letters before they’re five. In rural Guatemala, they learn how to use a machete. These abilities require subtle and complicated knowledge, but it’s a kind of knowledge that adults in the culture hardly notice (though it may startle visitors from another culture).

 
