End Times: A Brief Guide to the End of the World

by Bryan Walsh


  Scientists have been able to practice for decades what we might think of as basic genetic engineering—knocking out a gene or moving one between species. More recently they have learned to rapidly decode and sequence genes, which makes the book of life readable. But that was just the beginning. Now researchers can edit genomes and even write entirely original DNA. That gives scientists growing control over the basic code that drives all life on Earth, from the most basic bacterium to, well, us. This is the science of synthetic biology. “Genetic engineering was like replacing a red lightbulb with a green lightbulb,” said James Collins, a biological engineer at the Massachusetts Institute of Technology (MIT) and one of the early pioneers of the field. “Synthetic biology is introducing novel circuitry that can control how the bulbs turn off and on.”

  Every bit of living matter operates on the same genetic code, formed in part by the nucleotide bases of DNA: cytosine (C), guanine (G), adenine (A), and thymine (T). This is the programming language of life, and it hasn’t changed much since Earth’s primordial beginnings.16 Just as the English language can be used to write both “Baa, Baa, Black Sheep” and Ulysses, so DNA in all its combinations can write the genome of a 0.00008-inch-long E. coli bacterium17 and an 80-foot-long blue whale.18 “The same DNA in humans is the same DNA in every organism on the planet,” said Jason Kelly, the CEO of Ginkgo Bioworks, a synthetic biology start-up based in Boston. “This is the fundamental insight of synthetic biology.”

  The language of DNA may have first been written billions of years ago, but we only learned to read it in recent years. Sequencing DNA—determining the precise order of its C, G, A, and T bases—was first performed in the 1970s.19 For years it was laborious and expensive. It took more than a decade and about $2.7 billion for the publicly and privately funded scientists behind the Human Genome Project to complete their mission: the first full, sequenced draft of the genes that encode a human being.20 But thanks in part to technological advances driven by that effort—the Apollo Project of the life sciences—the price of sequencing DNA has plummeted. It now costs less than $1,000 to sequence a person’s full genome,21 and it can be done in a couple of days.

  But reading a genome is just the beginning. As it has become cheaper and easier to sequence genetic data, the same trends are playing out in the writing of genes, albeit more slowly. This is the synthesis in synthetic biology, the ability to author a genome—or maybe just edit it a little.

  What does that mean for smallpox? Before the synthetic biology revolution, a virus was a thing. Not quite living, not quite dead, but it existed only in the real world, whether in the wild in its human hosts or as archived samples in a lab. But a virus is just genetic data, a certain series of DNA or RNA, much as this book is a collection of letters arranged just so. Like any collection of data, the genetic code of a smallpox virus can be copied and shared. But the method matters. A printed book can be copied and shared by hand, as viruses can be grown in a lab by experienced technicians. Just as it’s faster and easier, however, to share a digital copy of a book than a printed one, it’s faster and easier to share the digital data that makes up a virus. And a biologist with the right and not terribly rare set of skills and tools could take that genetic data and synthesize a sample of their very own smallpox virus. What can be digitized cannot easily be controlled—just ask the record companies that tried to prevent the sharing of digital music after Napster. And now life itself can be pirated.

  One response, already in place, is to make it illegal to download the genetic blueprint of certain agents like smallpox or Ebola or SARS. DNA synthesis companies like Twist Bioscience work with the U.S. government to check customer orders for anything suspicious—and that means both the orders and the customer. “We have a strict protocol so that every sequence that comes in is screened,” Twist’s Leproust told me. “For instance, if it’s a sequence for a flu virus and it’s from a company that is developing a diagnostic test for flu, that’s great. But if someone orders the Ebola sequence to be shipped to a P.O. box in North Korea, we would not do it.”
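
  (To make the idea of sequence screening concrete, here is a deliberately simplified sketch in Python. The regulated fragment and the matching logic are hypothetical stand-ins; commercial screening pipelines compare orders against curated databases of select-agent sequences using similarity searches, and they vet the customer as well, so treat this only as an illustration of the concept.)

    # Toy illustration of biosecurity screening for a DNA synthesis order.
    # REGULATED_FRAGMENTS is a hypothetical stand-in for a curated database of
    # select-agent sequences; real screening relies on similarity searches and
    # expert review, not an exact substring check like this one.
    REGULATED_FRAGMENTS = {
        "hypothetical_regulated_fragment": "ATGGACTCTCGTCCTCAGAAAGTCTGGATG",
    }

    def screen_order(order_sequence: str, customer_verified: bool) -> str:
        """Return a hold or clear decision for a synthesis order."""
        for name, fragment in REGULATED_FRAGMENTS.items():
            if fragment in order_sequence.upper():
                return f"HOLD for human review: order matches {name}"
        if not customer_verified:
            return "HOLD for human review: customer identity not verified"
        return "CLEAR: proceed with synthesis"

    # A verified customer ordering an unrelated sequence is cleared; an order
    # containing the flagged fragment is held for review.
    print(screen_order("ATGCGTACGTTAGC", customer_verified=True))
    print(screen_order("ATGGACTCTCGTCCTCAGAAAGTCTGGATGCCC", customer_verified=False))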

  There’s a lot of daylight, however, between sending flu virus data to a medical diagnostics company and shipping Ebola blueprints to Kim Jong Un. In 2017 a team of researchers led by David Evans from the University of Alberta stitched together fragments of mail-order DNA to re-create an extinct relative of smallpox called horsepox. The entire experiment cost $100,000 and took about six months.22 Horsepox itself isn’t dangerous to human beings, and Evans—an internationally recognized expert in pox viruses—said he performed the experiment to help create a better vaccine for smallpox. But his work triggered a firestorm of criticism from scientists who fretted that the publication of Evans’s research in the open-access science journal PLOS One23 had shown terror groups how to synthesize a smallpox virus of their own.

  What Evans and his team did wasn’t illegal, in part because it was done with private money, not public funds. Evans has said that he discussed the work with federal agencies in Canada—though doing so wasn’t required—and his university’s lawyers reviewed his paper for legal issues.24 Even as Evans was performing his experiments, experts at the WHO were hammering out the rules around synthesizing potentially dangerous viruses. Yet Evans, without really asking anyone’s permission, went ahead and did the work on his own, presenting the synthesized horsepox virus to the world as a fait accompli—and then published his methods for all to see. “Have I increased the risk by showing how to do this?” Evans told Science in 2017.25 “I don’t know. Maybe yes. But the reality is that the risk was always there.”

  The horsepox case demonstrates two concepts that are key to understanding the existential risk posed by biotechnology, and all existential risks derived from emerging technologies. The first is “information hazards,” a term coined by Nick Bostrom.26 Information hazards are risks that arise from the spread of information—especially new discoveries—that might directly cause harm or enable someone else to cause harm. They are the unwanted children of science. If the genetic sequence of Ebola were put online for anyone to download, that act would represent an information hazard. Information hazards can also include more general ideas such as employing deep learning to build more effective artificial intelligences. The discoveries need not be immediately weaponizable, and they may appear to have benign consequences, at least at first. The same fundamental work in atomic physics that eventually made the Trinity test possible first gave the world invaluable insights into how matter itself was composed, information that seemed largely harmless to most scientists at the time. The point is that we should be aware that there is a hazard to putting many kinds of information out in the public sphere—even though that is exactly what scientists are trained to do. And that’s what makes information hazards so pernicious.

  Every weapon, from the first sharpened stone tool to the latest killer drone, began as a discovery. But where once information was handed down from person to person in an analog chain, information is now digitized data. That makes it infinitely easier to spread, and infinitely more difficult to control. “Information wants to be free”27 goes the old line, coined by our friend Stewart Brand. That’s usually meant as a political posture, or sometimes just a belief that we should be able to download music and videos without paying. But what it really describes is an inescapable fact of the digital age, which is also the age of existential risk. Information wants to be free in the way that water wants to flow downhill—and it’s just as hard to stop.

  Evans didn’t think that his experiment made the world more dangerous. (When he said that “the risk was always there,” he likely meant that he believes the information hazard of piecing together a smallpox virus existed whether or not he did the actual work.) Based on the critical reaction of the virology community to his horsepox paper, however, Evans’s colleagues didn’t agree. But they didn’t—and couldn’t—stop him.

  That brings us to another term, also coined by Bostrom: “the unilateralist’s curse.”28 Imagine a community of biologists who each have the ability to carry out an experiment that might accidentally show a terrorist how to create a powerful biological weapon. It only takes one person to decide—unilaterally—to carry out that experiment, and thus create an information hazard that exposes everyone to the potential harm of the bioweapon. It doesn’t matter if 99 out of 100 biologists decide not to perform the experiment. If one goes forward, the information hazard is born.

  The curse is the asymmetry. Since any configuration other than all 100 scientists deciding not to do the experiment means that someone will carry it out, there is a bias toward information hazard. The greater the number of people who have to individually make the decision to not perform the experiment, the more likely it is that someone will go ahead and do it.29 (Ben Franklin has a useful quote here: “Three can keep a secret, if two of them are dead.”) Everything about modern science as an institution—the relentless drive for prestige, the rivalry between major scientific publications for landmark papers, the cutthroat competition between scientists to publish new discoveries first—puts more weight on that bias. Perhaps this is what Robert Oppenheimer meant when he said the following, after the Trinity test: “the deep things in science are not found because they are useful; they are found because it was possible to find them.”30 No scientist was ever awarded tenure for not publishing something.
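
  (The asymmetry is easy to quantify. If each of n researchers independently has even a small probability p of deciding to go ahead, the chance that at least one of them does is 1 - (1 - p)^n, and that number climbs quickly as n grows. The figures below are purely illustrative and are not drawn from Bostrom’s paper.)

    # Chance that at least one of n independent actors proceeds with a risky
    # experiment, if each does so with probability p (illustrative numbers only).
    def prob_someone_proceeds(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in (1, 10, 100):
        print(n, round(prob_someone_proceeds(0.05, n), 3))
    # With p = 0.05: one actor -> 0.05, ten -> 0.401, one hundred -> 0.994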

  In November 2018 the world witnessed the unilateralist’s curse in action when the Chinese biophysicist He Jiankui shocked the scientific community by announcing that he had created the first babies genetically edited with CRISPR. That a scientist could edit a human embryo using CRISPR—and bring those babies to term, as He did—wasn’t in doubt. But the mainstream opinion among the mandarins of gene editing was that it shouldn’t be done—at least not until there was much clearer evidence that such editing wouldn’t cause unwanted side effects, and until the public was ready to accept such a fundamental change to what it means to be human. But He showed just how ineffective scientific opinion is in the face of a determined unilateralist. It didn’t matter that 99 out of 100 scientists might have refused to gene-edit an embryo. He was the hundredth—and so the work was done. It’s impossible to say yet what the ultimate consequences will be, and in the aftermath of He’s announcement the gene-editing community mostly reacted in revulsion, calling for a moratorium on similar work.31 But science rarely moves backward, especially now that data can so easily be shared. Information, after all, wants to be free—and that includes information hazards.

  We should fear the possibility that advances in biotechnology will be purposefully weaponized. But we should also be worried about mistakes. It won’t matter if the end of the world is intentional or accidental, the product of terror or error. The end is the end.

  In 2014 USA Today obtained government reports tallying up more than 1,100 laboratory mistakes between 2008 and 2012 involving hazardous biomaterials.32 More than half of these incidents were serious enough that laboratory workers received medical evaluation or treatment for potential infection. The same year, USA Today reported that up to seventy-five scientists at the CDC might have been exposed to live anthrax bacteria after potentially infectious samples were sent to labs that lacked the safety equipment to handle them.33 In another incident, live samples of the smallpox virus were discovered in a storage room at the NIH. Even after the SARS outbreak had been contained there were incidents in Singapore, Taipei, and Beijing where laboratory workers were accidentally infected by the virus.34 Altogether between 2004 and 2010 there were more than 700 incidents of the loss or release of “select agents and toxins” from U.S. labs, and in 11 instances lab workers contracted bacterial or fungal infections.35

  The occasional infection and even death among lab technicians is an occupational hazard of working with virulent pathogens, and it can happen even at laboratories that take the highest precautions. But potentially far more dangerous to the public is the possibility that a lab would willingly create and experiment on an artificially enhanced pathogen. In 2010 and 2011 the respective labs of Yoshihiro Kawaoka at the University of Wisconsin–Madison and Ron Fouchier of Erasmus Medical Center in the Netherlands separately announced that they had succeeded in making the deadly H5N1 avian flu virus more transmissible through genetic engineering. Since it first spilled over from poultry to human beings in Hong Kong in 1997, H5N1 has infected and killed hundreds of people in sporadic outbreaks, mostly in Asia.36 The virus has a roughly 60 percent fatality rate among confirmed cases. On reporting trips to Indonesia in the mid-2000s—where more people have died from H5N1 than in any other country—I witnessed firsthand the damage the virus could do, and the fear it engendered. But the world was fortunate—H5N1 still almost never spreads from person to person; nearly every infection is due to close contact with infected poultry.

  Flu experts, though, worried that H5N1 might mutate—perhaps by swapping genes with a human flu virus in a process called reassortment—and gain the ability to transmit easily from person to person, triggering what could be a disastrous pandemic. That was always possible—but on the other hand, by 2010 H5N1 had been circulating for nearly fifteen years without ever touching off a pandemic. Perhaps, like a thief trying to pick a lock, it hadn’t yet come across the right combination—but would do so eventually. Or perhaps something about the nature of the virus meant that those changes would never happen, and that an H5N1 pandemic was an impossibility. All scientists could do was wait and see.

  But biotechnology offered a new strategy. Kawaoka introduced mutations in the hemagglutinin gene of an H5N1 virus—the H in H5N1—and combined it with seven genes from the highly transmissible but not very deadly 2009 H1N1 flu virus. Fouchier and his team took an existing H5N1 virus collected in Indonesia and used reverse genetics to introduce mutations that previous research had shown made H5N1 strains more effective in infecting human beings. Both researchers were trying to do in the laboratory what epidemiologists feared might happen in the wild—a bird H5N1 virus mutating in a way that made it more transmissible to human beings—and then they stepped back and recorded what happened. In both cases, the modified H5N1 flu viruses were able to spread between ferrets in the lab. (Ferrets have long been used as test subjects in flu work because they seem to be infected by influenza in the same manner as humans, so a flu virus that spreads among ferrets would likely spread between people.) Such work is called “gain of function” research—and it’s both a powerful new tool to understand infectious disease and a potential source of existential risk.

  The results were useful, indicating that H5N1 did indeed have pandemic potential, but in performing the experiments, Kawaoka and Fouchier engineered altered influenza viruses that potentially possessed the worst of both worlds: the virulence of avian flu and the transmissibility of human flu. In the aftermath of their work the National Science Advisory Board for Biosecurity—established in the wake of the 2001 anthrax attacks—for the first time ever asked scientific journals to hold back on publishing the full details of an experiment, lest potential terrorists use the information as a blueprint for a bioweapon. After both Fouchier and Kawaoka revised their work and volunteered further details about their experiments, the two papers were eventually published in Science and Nature, respectively, but the scientific community more broadly was split between the Oppenheimeresque attitude that information should always be open, and fear that what we learned could be misused. In 2014 the U.S. Department of Health and Human Services put a moratorium on such gain-of-function research while regulators tried to sort out the situation.

  Harvard’s Marc Lipsitch, whom we met in the last chapter, believed the experiments should never have been done. “Is the science so compelling and so important to do that it justifies this kind of risk?” he told me. “The answer is no.” Kawaoka and Fouchier—and other respected scientists—obviously disagreed. But Lipsitch and Tom Inglesby of Johns Hopkins pushed further, collaborating on a study in 2014 estimating the chances that a hybrid flu could accidentally infect a lab worker, and from there, spread to the rest of the world.37 Based on past biosafety statistics, they found that each year of working with the hybrid flu carried a 0.01 percent to 0.1 percent chance of triggering a pandemic. While it’s impossible to know what the fatality rate would be in a hybrid flu pandemic, let’s assume that the modified H5N1, like its wild cousin, would kill three out of every five people it sickened. If about a third of the global population were infected by the new, far more transmissible virus—not unreasonable, since no one would have immunity—the result could be a death toll as high as 1.4 billion people, even more than in the fictional Clade X scenario.38
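
  (The arithmetic behind that worst case is simple. Assuming a global population of roughly 7 billion, an attack rate of about one in three, and the roughly 60 percent fatality rate of wild H5N1, assumptions taken from this chapter rather than from any formal epidemiological model, the toll works out to about 1.4 billion.)

    # Rough worst-case death toll, using the chapter's assumptions.
    population = 7_000_000_000   # approximate global population
    attack_rate = 1 / 3          # share of the population infected
    fatality_rate = 0.6          # roughly the fatality rate of wild H5N1 among confirmed cases
    deaths = population * attack_rate * fatality_rate
    print(f"{deaths:,.0f}")      # prints 1,400,000,000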

  Then Inglesby and Lipsitch did something more. They took that 0.01 to 0.1 percent annual chance of escape and multiplied it by the potential death toll. As we saw in earlier chapters, this is commonly done in risk analysis to try to get a sense of how many deaths we might expect per year from an unusual event, such as an asteroid strike or a supereruption. Inglesby and Lipsitch found that even with the extremely low probability that the hybrid virus would escape the lab, the consequences of that rare outcome would be so awful that we could expect between 2,000 and 1.4 million fatalities per year. This doesn’t mean that thousands of people each year would die from the hybrid flu research. Just as in the case of a major asteroid strike, either it happens and billions die or—far more likely—it doesn’t and no harm is done. But what Lipsitch and Inglesby showed is that the worst-case scenario is so terrible that on a yearly level it could exceed the number of people who die from heart disease in the United States.39
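
  (Here is a minimal sketch of that expected-value multiplication, using the 0.1 percent annual escape probability at the top of the range and the 1.4 billion worst-case toll from above. The study’s lower figure of roughly 2,000 expected deaths per year presumably pairs the lower escape probability with far more conservative assumptions about the resulting pandemic; the point of the sketch is only the form of the calculation.)

    # Expected annual deaths = yearly probability that a lab accident sparks a
    # pandemic, multiplied by the deaths such a pandemic would cause.
    def expected_annual_deaths(p_escape_per_year: float, pandemic_toll: float) -> float:
        return p_escape_per_year * pandemic_toll

    # Top of the range discussed in the text: 0.1 percent per year, 1.4 billion deaths.
    print(f"{expected_annual_deaths(0.001, 1_400_000_000):,.0f}")  # prints 1,400,000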

  “There are really big risks,” said Lipsitch. “And do you want to risk that really, really, really low-probability but terrible event?”

  In 2017 the NIH lifted the moratorium on gain-of-function research, putting in place new regulations around the work and restricting it to a handful of labs with the highest levels of biocontainment.40 Any experimenter who wants to boost the virulence of an already dangerous pathogen must prove that the benefits will outweigh the risks and that there is no safer way than gain-of-function methods to answer the questions posed by their study. The proposed experiments must also be reviewed by an independent expert panel. The process, NIH director Francis Collins said when the rules were announced in December 2017, “will help to facilitate the safe, secure, and responsible conduct of this type of research.”41 And in early 2019, Science revealed that both Kawaoka and Fouchier had been given the go-ahead to resume their gain-of-function research on flu after a government review. “We are glad the United States government weighed the risks and benefits… and developed new oversight mechanisms,” Kawaoka told Science. “We know that it carries risks. We also believe it is important work to protect human health.”42

 
