Know This

edited by John Brockman


  Aubrey de Grey

  Gerontologist; chief science officer, SENS Foundation; author, Ending Aging

  We’ve been hearing the tales of doom for quite a few years now: The breathtaking promiscuity of bacteria, which allows them to mix and match their DNA with that of others to an extent that puts Genghis Khan to shame, has enabled them to accumulate genetic resistance to more and more of our antibiotics. It’s been trumpeted for decades that the rate at which this occurs can be slowed by careful use, especially by not ceasing a course of antibiotics early—but inevitably there is lack of compliance, and here we are with MRSA, rife in hospitals worldwide, and other major species becoming more broadly antibiotic-resistant with every passing year. The bulk of high-profile expert commentary on this topic grows ever more dire.

  But this pessimism rests on one assumption: that we have no realistic prospect of developing new classes of antibiotics anytime soon—antibiotics that our major threats have not yet seen and thus not acquired resistance to. And it now seems that that assumption is unwarranted. It is based on the fact that no new antibiotic class with broad efficacy has been identified for decades. But recently a novel method was identified for isolating exactly those, and it seems to work really, really well.

  It arose from a case of sheer chutzpah. Scientists from Boston and Germany got together and reasoned as follows:

  Antibiotics are generally synthesized in nature by bacteria (or other microbes) as defenses against each other.

  We have identified antibiotics in the lab, and thus necessarily only those made by bacterial species that we can grow in the lab.

  Almost all bacterial species cannot be grown in the lab using practical methods.

  That hasn’t changed for decades.

  But those bacteria grow fine in the environment, typically the soil.

  So can we isolate antibiotics from the soil?

  And that’s exactly what they did. They built a device that let them isolate and grow bacteria in the soil itself, with molecules freely moving into and out of the device, thereby sidestepping our ignorance of which such molecules actually matter. They were able to isolate the compounds those bacteria were secreting and test them for antibiotic potency. And it worked. They found a completely new antibiotic, which has been shown to have broad efficacy against several bacterial strains resistant to most existing antibiotics.

  As if that were not enough, here’s the kicker. This was not some kind of massive high-throughput screen of the kind we so often hear about in biomedical research these days. The researchers tried this approach just once, in essentially their backyard, on a very small scale, and it still worked the first time. What that tells us is that it can work again—and again, and again.

  Don’t get me wrong. There is certainly no case for complacency at this stage. This new compound and those discovered by similar means will still need to grind their way through the usual process of clinical evaluation—though there is reason for considerable optimism that that process is speeding up, with the recent Ebola vaccine being a case in point. But still, even though any optimism must for now be cautious, it is justified. Pandemics may not be our future after all.

  The 6 Billion Letters of Our Genome

  Eric Topol, M.D.

  Professor of genomics, Scripps Research Institute; director, Scripps Translational Science Institute; author, The Patient Will See You Now

  In 2015, we crossed a threshold: The first million people had their genomes sequenced. Beyond that, based on progress in sequencing technology, it is projected that we’ll hit 1 billion people sequenced by 2025. That seems formidable but likely, given that the pace of DNA-reading innovation has exceeded Moore’s Law. The big problem is not amassing billions of human genome sequences but understanding the significance of each of the 6 billion letters that comprise our genome.
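  The arithmetic behind that projection is worth making explicit. Here is a back-of-the-envelope check (my own calculation, not from the essay), sketched in Python:

```python
# Back-of-the-envelope check (not from the essay): going from 1 million genomes
# sequenced in 2015 to 1 billion by 2025 is a 1,000-fold increase in 10 years,
# i.e., an implied doubling roughly every year -- faster than the classic
# Moore's Law cadence of doubling about every two years.
growth_factor = (1_000_000_000 / 1_000_000) ** (1 / 10)  # per-year multiplier
print(f"implied annual growth: {growth_factor:.2f}x")     # prints ~1.99x
```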

  About 98.5 percent of our genome is not made of genes and so doesn’t directly code for proteins. But most of this non-coding portion of the genome influences, in one way or another, how genes function. While it’s relatively straightforward to understand genes, the non-coding elements are far more elusive.

  So the biggest breakthrough in genomics—Science’s 2015 Breakthrough of the Year—is the ability to edit a genome, via so-called CRISPR technology, with remarkable precision and efficiency. We have had genome-editing technologies for several years, including zinc-finger nucleases and TALENs [transcription activator-like effector nucleases], but they weren’t easy to use, nor could they achieve a high rate of successful editing in the cells that were exposed. The precision problem also extended to the need to avoid editing in unintended portions of the genome (so-called off-target effects). Enter CRISPR, and everything has quickly changed.

  Many genome-editing clinical trials are now under way, or will be soon, to treat medical conditions for which treatment or a cure has proved remarkably challenging. These include sickle-cell disease, thalassemia, hemophilia, HIV/AIDS, and some very rare metabolic diseases. Indeed, the first person whose life was saved (by TALENs) was a young girl with leukemia who had failed all attempted therapies until she had her T-cells genome-edited. George Church and his colleagues at Harvard were able to edit sixty-two genes of the pig’s genome to make it immunologically inert, so that the idea of transplanting an animal’s organ into humans—xenotransplantation—has been resurrected. A number of biotech and pharma companies (Vertex, Bayer, Celgene, and Novartis) have recently partnered with genome-editing startups (CRISPR Therapeutics, Editas Medicine, Intellia Therapeutics, Caribou Biosciences) to rev up clinical programs.

  But the biggest contribution of genome editing, and specifically that of CRISPR, is to catapult the field of functional genomics forward. Not understanding the biology of the DNA letters is the biggest limitation of our knowledge base in the field. Many interesting DNA-sequence variant “hits” have been discovered but remain overshadowed by uncertainty. Determining functional effects of the VUS (variants of unknown significance) has moved at a sluggish pace, with too much of our understanding of genomics based on population studies rather than on pinpointing the biology and potential change in function due to an altered (compared with the reference genome) DNA letter.

  We’ve recently seen how we can systematically delete genes to find out which are essential for life. From that, we learned that only about 1,600 (8 percent) of the nearly 19,000 human genes are truly essential. All of the known genes implicated in causing or contributing to cancer can be edited—and, indeed, that systematic assessment is well under way. We have just learned how important the 3D structure of DNA is for cancer vulnerability, by using CRISPR to edit out a particular genomic domain. Moreover, we can now generate a person’s cells of interest (from their blood cells, via induced pluripotent stem cells)—heart, liver, brain, or whatever organ or tissue is of interest. When this is combined with CRISPR editing, it becomes a remarkably powerful tool that takes functional genomics to an unprecedented level.

  What once was considered the “dark matter” of the genome is about to be illuminated. The greatest contribution of genome editing will ultimately be to understand the 6 billion letters that comprise our genome.

  Systems Medicine

  Stuart A. Kauffman

  Theoretical biologist; founding director, Institute for Biocomplexity and Informatics, University of Calgary, Canada; author, Humanity in a Creative Universe

  Systems Medicine is emerging, a new holistic view of the organism and the integrated molecules, cells, tissues, and organs comprising that organism living in its world. We are heritors of over forty years of wonderful molecular biology, which was, however, somewhat overconfident of a molecular reductionism that failed to integrate the pieces.

  Within each cell is a vast genetic regulatory system coordinating the activities of thousands of genes—that is, which genes are transcribed when and where, along with new knowledge about epigenetic factors such as histone modifications. These comprise a complex nonlinear dynamical system whose coordinated behaviors, coupled with the physics and chemistry of molecules and structures within and between cells and the environment, mediate ontogeny and disease.

  It is now becoming known that some of these genetic factors form autoregulatory feedback loops, which are likely to underlie alternative dynamical “attractors,” or stable alternative patterns of gene expression, underlying different cell types. The idea of cell types as alternative attractors goes back to Nobel laureates Jacob and Monod in 1963. If cell types are such attractors, each drains a “basin of attraction” in its state space. Then cell differentiation is a flow among attractors induced by signals or noise, or “bifurcations” to new attractors, as parameters change. Not only cells but tissues and organs may be nonlinear dynamical systems with attractors linked hierarchically in unknown ways.
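  To make the attractor picture concrete, here is a minimal sketch (a standard textbook-style toy model, not drawn from Kauffman’s own work, with illustrative parameter values) of two mutually repressing genes whose dynamics settle into one of two stable expression patterns depending on where they start:

```python
# Toy model of "cell types as attractors": two genes, X and Y, each repressing
# the other. Depending on the initial expression levels, the system flows to
# one of two stable states (attractors): X-high/Y-low or Y-high/X-low.
# All parameter values below are illustrative assumptions.

def toggle_switch(x, y, k=2.0, n=4, decay=1.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = k/(1 + y^n) - decay*x, and symmetrically for y."""
    for _ in range(steps):
        dx = k / (1.0 + y**n) - decay * x
        dy = k / (1.0 + x**n) - decay * y
        x, y = x + dx * dt, y + dy * dt
    return round(x, 2), round(y, 2)

print(toggle_switch(1.5, 0.1))  # settles with X high, Y low: one "cell type"
print(toggle_switch(0.1, 1.5))  # settles with Y high, X low: the other "cell type"
```

  Each stable pattern drains its own basin of attraction; a sufficiently large nudge (a signal, noise, or a parameter change) can push the system across the basin boundary into the other pattern, which is the dynamical picture of differentiation sketched above.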

  This fine, if early, holistic dynamical picture leaves out the myriad biological functions of these variables. We need an enhanced physiology of the total organism in its world. We live in environments. Odd chemicals can switch an antenna to a leg in genetically normal developing fruit flies. What of the thousands of new chemicals unleashed into the atmosphere?

  How can we control and try to “treat” such complex systems? Think of a spring mattress, with linked springs all wiggling. Now, would you try to control the wiggling springs by throwing a small pillow on one spring? Not often, unless its unique product directly mediated a disease. You would try to subtly alter the wiggling of the springs to get the coordinated behavior you want. The same applies to us as patients with vastly complex nonlinear systems underlying health and disease. We need to begin carefully to move toward combinatorial therapies (or multiple pillows), a move that is gradually happening. This move may require new testing procedures beyond our current gold standard of randomized clinical trials, which really only work well if the many factors involved each affect the “phenotype” independently. This is rare in biology, where causality is multiple and interwoven, with feedback loops in complex networks with complex topology and “logic.”

  But there is hope: We can empirically climb “clinical fitness landscapes,” each described with many variables, where peaks represent good treatments by one or many variables, from one or a set of drugs to environmental factors. In fact, almost anecdotal evidence, a kind of “learning by doing,” can search such rugged clinical landscapes. More, Bayesian and other models of the underlying multi-causal mechanisms can guide our empirical search.
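  As an illustration of what such an empirical search could look like, here is a minimal sketch (a hypothetical toy model, not any actual clinical procedure or dataset): a handful of on/off treatment variables with made-up pairwise interactions, searched by flipping one variable at a time and keeping any change that improves the measured outcome.

```python
# Hypothetical toy example of climbing a rugged "clinical fitness landscape":
# treatments interact pairwise (so their effects are not independent), and we
# search combinations by local, learning-by-doing steps.
import random

random.seed(0)
N = 8  # number of hypothetical on/off treatment variables

# Made-up rugged landscape: random pairwise interaction weights.
weights = {(i, j): random.uniform(-1, 1) for i in range(N) for j in range(i, N)}

def outcome(combo):
    """Measured benefit of a treatment combination (tuple of 0/1 flags)."""
    return sum(w * combo[i] * combo[j] for (i, j), w in weights.items())

def hill_climb(combo):
    """Flip single treatments, keeping any flip that improves the outcome."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            trial = combo[:i] + (1 - combo[i],) + combo[i + 1:]
            if outcome(trial) > outcome(combo):
                combo, improved = trial, True
    return combo

start = tuple(random.randint(0, 1) for _ in range(N))
best = hill_climb(start)
print("combination:", best, "outcome:", round(outcome(best), 3))
```

  On a rugged landscape such a climb stalls at a local peak, which is one reason the essay’s point about guiding the empirical search with Bayesian or mechanistic models matters.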

  It is a time of hope as we step toward a holistic view of the organism in its world.

  Growing a Brain in a Dish

  Simon Baron-Cohen

  Professor of developmental psychopathology, director, Autism Research Centre, Cambridge University; author, The Science of Evil and The Essential Difference

  One morning three years ago, my talented PhD student Dwaipayan Adhya (known affectionately in the lab as Deep) came into my office. He looked me straight in the eye and said he’d like to grow autistic and typical neurons in a dish, from the earliest moment of development, to observe how the autistic neuron differs from the typical neuron day by day. I dropped everything and listened.

  Sounds like science fiction? You might imagine that to grow a brain cell in a dish the scientist would first have to pluck a neuron from a human embryo, keep it alive in a petri dish, and then watch it under the microscope, measuring how it grows day by day. If that’s what you’re imagining, you’re wrong. There is no way to get a neuron from a human embryo in any ethically acceptable way, for obvious reasons.

  So what method was Deep planning to use? He told me about Shinya Yamanaka of Kyoto University, awarded the Nobel Prize in 2012 (with Cambridge scientist John Gurdon) for his work on induced pluripotent stem cells, or iPSC. In the lab, we call this magic. Here’s how it works.

  Pluck a hair from the head of an adult, then take the follicle from that hair and, using the Yamanaka method, reprogram the cell backward from an adult hair-follicle cell into the state of a stem cell—that is, back to being an undifferentiated cell, before it became a hair follicle. This is not an embryonic stem cell; it is an “induced pluripotent” stem cell. Induced because the scientist has forced the adult hair-follicle cell (though you could use any cell in the body), by genetic reprogramming, back into the stem-cell state. And “pluripotent” means it can now be genetically programmed to become any kind of cell in the body—an eye cell, a heart cell, a neuron. If the last one, this is referred to as “neuralizing” the iPSC.

  I said to Deep, “Let’s do it!” It seems entirely ethical: Most adults would be happy enough to donate a single hair from their head; no animal is “sacrificed” in this kind of science; and it enables scientists to study the development of human neurons in the lab.

  The importance of Yamanaka’s scientific breakthrough is that if you want to study development from the first moment of life, iPSC bypasses the need for an embryo. Before this, scientists who wanted to understand the autistic brain had to rely on postmortem studies, when the next of kin donated their autistic relative’s brain to scientific research.

  Brain donations are invaluable, but from a scientific perspective, postmortem brain tissue has many limitations. For example, you may end up with a set of brains donated from individuals of different ages, each of whom died from different causes. Interpretation of results thus becomes difficult. A further complication is that you may know very little about the deceased (e.g., what their IQ was or what their personality was like) and it is often too late to gather such information. Postmortem studies are still informative but come with a handful of caveats.

  Alternatively, if you want to study the autistic brain, you can use an animal model; for example, you create a “knockout” mouse—a mouse genetically engineered to lack a particular gene that you suspect may play a role in autism—and observe its behavior compared to a typical (or wild-type) mouse. If the knockout mouse shows “autistic” behavior—for example, being less sociable—you conclude that the gene that was removed may be causing one or other of the symptoms of human autism. You can see the limitations of such animal studies immediately: How do you know that sociability in a mouse is the same thing as sociability in a human? The interpretation of results from such animal experiments is as littered with caveats as the postmortem studies.

  Now we can see the power of adding iPSC to the scientist’s toolkit for getting answers to questions. If you want to observe the living human brain, you can study the brain from the person you’re interested in and gather as much information about that person as you want: IQ, personality, precise diagnosis—anything else you want. You can even look at the effects of different drugs or molecules on the neuron without having to do these arguably unethical drug studies on an animal.

  The technique is not without its own limitations. An iPSC may not be exactly identical to an embryonic stem cell, so the neuralized iPSC may not be exactly the same as a naturally growing neuron. All tools in the scientist’s toolkit have their limitations, but this one—to my mind—is more ethical and more directly relevant to autism than is animal research. Many labs, like ours, are testing whether you get the same results from both iPSC and postmortem studies, since this strengthens the conclusions that can be drawn.

  Deep’s exciting results will be published in 2016. The combination of a breakthrough scientific method in the hands of a talented young PhD student might just be a game-changer in our understanding of the causes of autism.

  Self-Driving Genes Are Coming

  Stewart Brand

  Founder, The Whole Earth Catalog; cofounder, The Well; cofounder, The Long Now Foundation; author, Whole Earth Discipline: An Ecopragmatist Manifesto

  The new biotech tool called “gene drive” changes our relation to wild species profoundly. Any gene (or set of genes) can be forced to “drive” through an entire wild population. It doesn’t matter if the trait the genes control is deleterious to the organism. With one genetic tweak to its germline, a species can even be compelled to go extinct.

  The technique works by forcing homozygosity. Once the genes for a trait are homozygous (present on both chromosomes) in both parents, they will breed true for that trait in all their descendants. Artificially selecting for desired traits via homozygosity is what breeders do. Now there’s a shortcut.

  In effect, gene-drive genes forbid the usual heterozygosity in cross-bred parents. In any two parents, if one of them is gene-drive homozygous, all their offspring will be gene-drive homozygous and will express the gene-drive trait. Proviso: it only works with sexually reproducing species—forget bacteria. And it spreads quickly enough only in rapidly reproducing species—forget humans.

  The mechanism was first described as a potential tool by Austin Burt of Imperial College London, in 2003. The way it works is that a “homing endonuclease gene” cuts the DNA in the partner (homologous) chromosome and provides the template for the repair of that cut, thus duplicating itself. In Richard Dawkins’s terms, it is an exceptionally selfish gene. Heterozygous becomes homozygous, and after several generations the gene is present in every individual of the population. The phenomenon is common in nature.
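  A minimal simulation (my own toy model, under simplifying assumptions: random mating, perfect homing, no fitness cost) shows why several generations are enough. Because drive/wild-type heterozygotes are converted to drive homozygotes, the drive allele roughly doubles in frequency each generation while it is rare, then sweeps to fixation.

```python
# Toy model of gene-drive spread (assumptions: random mating, perfect homing,
# no fitness cost). Any offspring inheriting at least one drive allele is
# converted to a drive homozygote and passes the drive on to all its offspring.
import random

random.seed(1)

def next_generation(freq, pop_size=10_000):
    """Return the drive-allele frequency after one round of random mating."""
    drive_alleles = 0
    for _ in range(pop_size):
        a = random.random() < freq   # allele inherited from parent 1
        b = random.random() < freq   # allele inherited from parent 2
        if a or b:                   # homing converts heterozygotes to homozygotes
            drive_alleles += 2
    return drive_alleles / (2 * pop_size)

freq = 0.01  # release: drive initially present in 1 percent of alleles
for generation in range(10):
    print(f"generation {generation}: drive frequency {freq:.3f}")
    freq = next_generation(freq)
```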

  Gene drive shifted from an interesting concept to a powerful tool with the arrival in the last few years of a breakthrough in genome editing called CRISPR/Cas9. Suddenly genes could be edited easily, cheaply, quickly, and with great precision. It was a revolution in biotech.

  In 2014, George Church and Kevin Esvelt at Harvard published three papers spelling out the potential power of CRISPR-enabled gene drive and the kind of public and regulatory oversight needed to ensure its responsible deployment. They also encouraged the development of an “undo” capability. Ideally the effects of an initial gene-drive release could, if desired, be reversed before it spread too far, with the release of a countermanding secondary gene drive.

 
