An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.
Which is where advances in genetic engineering come in.
The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.
The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.
In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.
What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.
One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well-preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.
Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty-plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.
The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting-edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.
It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. Just as the dinosaurs in Jurassic Park weren’t quite the same as their ancestors, Church’s woolly mammoths wouldn’t be the same as their forebears. But they would be designed to function within a specific ecological niche, albeit one that’s the result of human-influenced climate change. And this raises an interesting question around de-extinction: If the genetic tools we are now developing give us the ability to improve on nature, why recreate the past, when we could reimagine the future? Why stick to the DNA code that led to animals being weeded out because they couldn’t survive in a changing environment, when we could make them better, stronger, and more likely to survive and thrive in the modern world?
This idea doesn’t sit so well with some people, who argue that we should be dialing down human interference in the environment and turning the clock back on human destruction. And they have a point, especially when we consider the genetic diversity we are hemorrhaging away with the current rate of biodiversity loss. Yet we cannot ignore the possibilities that modern genetic engineering is opening up. These include the ability to rapidly and cheaply read genetic sequences and translate them to digital code, to virtually manipulate them and recode them, and then to download them back into the real world. These are heady capabilities, and for some there is an almost irresistible pull toward using them, so much so that some would argue that not to use them would be verging on the irresponsible.
These tools take us far beyond de-extinction. The reimagining of species like the woolly mammoth is just the tip of the iceberg when it comes to genetic design and engineering. Why stop at recreating old species when you could redesign current ones? Why just redesign existing species when you could create brand-new ones? And why stick to the genetic language of all earth-bound living creatures, when you could invent a new language—a new DNA? In fact, why not go all the way, and create alien life here on earth?
These are all conversations that scientists are having now, spurred on by breakthroughs in DNA sequencing, analysis, and synthesis. Scientists are already developing artificial forms of DNA that contain more than the four DNA building blocks found in nature.11 And some are working on creating completely novel artificial cells that not only are constructed from off-the-shelf chemicals, but also have a genetic heritage that traces back to computer programs, not evolutionary life. In 2016, for instance, scientist and entrepreneur Craig Venter announced that his team had produced a completely artificial living cell.12 Venter’s cell—tagged “JCVI-syn3.0”—is paving the way for designing and creating completely artificial life forms, and the work being done here by many different groups is signaling a possible transition from biological evolution to biology by design.
One of the interesting twists to come out of this research is that scientists are developing the ability to “watermark” their creations by embedding genetic identity codes. As research here progresses, future generations may be able to pinpoint precisely who designed the plants and animals around them, and even parts of their own bodies, including when and where they were designed. This does, of course, raise some rather knotty ethical questions around ownership. If you one day have a JCVI-tagged dog, or a JCVI-watermarked replacement kidney, for instance, who owns them?
This research is pushing us into ethical questions that we’ve never had to face before. But it’s being justified by the tremendous benefits it could bring for current and future generations. These touch on everything from bio-based chemicals production to new medical treatments and ways to stay healthier longer, and even designer organs and body-part replacements at some point. It’s also being driven by our near-insatiable curiosity and our drive to better understand the world we live in and gain mastery over it. And here, just like the scientists in Jurassic Park, we’re deeply caught up in what we can do as we learn to code and recode life.
But, just because we can now resurrect and redesign species, should we?
Could We, Should We?
Perhaps one of the most famous lines from Jurassic Park—at least for people obsessed with the dark side of science—is when Ian Malcolm berates Hammond, saying, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”
Ethics and responsibility in science are complicated. I’ve met remarkably few scientists and engineers who would consider themselves to be unethical or irresponsible. That said, I know plenty of scientists who are so engaged with their work and the amazing things they believe it’ll lead to that they sometimes struggle to appreciate the broader context within which they operate.
The challenges surrounding ethical and responsible research are deeply pertinent to de-extinction. A couple of decades ago, they were largely academic. The imaginations of scientists, back when Jurassic Park hit the screen, far outstripped the techniques they had access to at the time. Things are very different now, though, as research on woolly mammoths and other extinct species is showing. In a very real way, we’re entering a world that very much echoes the “can-do” culture of Hammond’s Jurassic Park, where scientists are increasingly able to do what was once unimaginable. In such a world, where do the lines between “could” and “should” lie, and how do scientists, engineers, and others develop the understanding and ability to do what is socially responsible, while avoiding what is not?
Of course, this is not a new question. The tensions between technological advances and social impacts were glaringly apparent through the Industrial Revolution, as mechanization led to job losses and hardship for some. And the invention of the atomic bomb, followed by its use on Hiroshima and Nagasaki in the Second World War, took us into deeply uncharted territory when it came to balancing what we can and should do with powerful technologies. Yet, in some ways, the challenges we’ve faced in the past over the responsible development and use of science and technology were just a rehearsal for what’s coming down the pike, as we enter a new age of technological innovation.
For all its scientific inaccuracies and fantastical scenarios, Jurassic Park does a good job of illuminating the challenges of unintended consequences arising from somewhat naïve and myopic science. Take InGen’s scientists, for instance. They’re portrayed as being so enamored with what they’ve achieved that they lack the ability to see beyond their own brilliance to what they might have missed.13 Of course, they’re not fools. They know that they’re breaking new ground by bringing dinosaurs back to life, and that there are going to be risks. It would be problematic, for instance, if any of the dinosaurs escaped the island and survived, and they recognize this. So the scientists design them to be dependent on a substance it was thought they couldn’t get enough of naturally, the essential amino acid lysine. This was the so-called “lysine contingency,” and, as it turns out, it isn’t too dissimilar from techniques real-world genetic engineers use to control their progeny.
Even though it’s essential to life, lysine isn’t synthesized naturally by animals. As a result, it has to be ingested, either in its raw form or by eating foods that contain it, including plants or bacteria (and their products) that produce it naturally, for instance, or other animals. In their wisdom, InGen’s scientists assume that they can engineer lysine dependency into their dinosaurs, then keep them alive with a diet rich in the substance, thinking that they wouldn’t be able to get enough lysine if they escaped. The trouble is, this contingency turns out to be about as useful as trying to starve someone by locking them in a grocery store.
There’s a pretty high chance that the movie’s scriptwriters either didn’t know that this safety feature wouldn’t work, or simply didn’t care. Either way, it’s a salutary tale of scientists who are trying to be responsible—at least their version of “responsible”—but are tripped up by what they don’t know, and what they don’t care to find out.
In the movie, not much is made of the lysine contingency, unlike in Michael Crichton’s book that the movie’s based on, where this basic oversight leads to the eventual escape of the dinosaurs from the island and onto the mainland. There is another oversight, though, that features strongly in the movie, and is a second strike against the short-sightedness of the scientists involved. This is the assumption that InGen’s dinosaurs couldn’t breed.
This is another part of the storyline where scientific plausibility isn’t allowed to stand in the way of a good story. But, as with the lysine, it flags the dangers of thinking you’re smart enough to have every eventuality covered. In the movie, InGen’s scientists design all of their dinosaurs to be females. Their thinking: no males, no breeding, no babies, no problem. Apart from one small issue: When stitching together their fragments of dinosaur DNA with that of living species, they filled some of the holes with frog DNA.
This is where we need to suspend scientific skepticism somewhat, as designing a functional genome isn’t as straightforward as cutting and pasting from one animal to another. In fact, this is so far from how things work that it would be like an architect, on losing a few pages from the plans of a multi-million dollar skyscraper, slipping in a few random pages from a cookie-cutter duplex and hoping for the best. The result would be a disaster. But stick with the story for the moment, because in the world of Jurassic Park, this naïve mistake led to a tipping point that the scientists didn’t anticipate. Just as some species of frog can switch from female to male with the right environmental stimuli, the DNA borrowed from frogs inadvertently gave the dinosaurs the same ability. And this brings us back to the real world, or at least the near-real world, of de-extinction. As scientists and others begin to recreate extinct species, or redesign animals based on long-gone relatives, how do we ensure that, in their cleverness, they’re not missing something important?
Some of this comes down to what responsible science means, which, as we’ll discover in later chapters, is about more than just having good intentions. It also means having the humility to recognize your limitations, and the willingness to listen to and work with others who bring different types of expertise and knowledge to the table.
This possibility of unanticipated outcomes shines a bright spotlight on the question of whether some lines of research or technological development should be pursued, even if they could. Jurassic Park explores this through genetic engineering and de-extinction, but the same questions apply to many other areas of technological advancement, where new knowledge has the potential to have a substantial impact on society. And the more complex the science and technology we begin to play with is, the more pressing this distinction between “could” and “should” becomes.
Unfortunately, there are no easy guidelines or rules of thumb that help decide what is probably okay and what is probably not, although much of this book is devoted to ways of thinking that reduce the chances of making a mess of things. Even when we do have a sense of how to decide between great ideas and really bad ones, though, there’s one aspect of reality we can’t escape from: Complex systems behave in unpredictable ways.
The Butterfly Effect
Michael Crichton started playing with the ideas behind Jurassic Park in the 1980s, when “chaos” was becoming trendy. I was an undergraduate at the time, studying physics, and it was nearly impossible to avoid the world of “strange attractors” and “fractals.” These were the years of the “Mandelbrot Set” and computers that were powerful enough to calculate the numbers it contained and display them as stunningly psychedelic images. The recursive complexity in the resulting fractals became the poster child for a growing field of mathematics that grappled with systems where, beyond certain limits, their behavior was impossible to predict. The field came to be known informally as chaos theory.
Chaos theory grew out of the work of the American meteorologist Edward Lorenz. When he started his career, it was assumed that the solution to more accurate weather prediction was better data and better models. But in the 1950s, Lorenz began to challenge this idea. What he found was that, in some cases, minute changes in atmospheric conditions could lead to dramatically different outcomes down the line, so much so that, in sufficiently complex systems, it was impossible to predict the results of seemingly insignificant changes.
In 1963, when he published the paper that established chaos theory,14 it was a revolutionary idea—at least to scientists who still hung onto the assumption that we live in a predictable world. Much as quantum physics challenged scientists’ ideas of how predictable physical processes are in the invisible world of atoms and subatomic particles, chaos theory challenged their belief that, if we have enough information, we can predict the outcomes of our actions in our everyday lives.
At the core of Lorenz’s ideas was the observation that, in a sufficiently complex system, the smallest variation could lead to profound differences in outcomes. In 1969, he coined the term “the Butterfly Effect,” suggesting that the world’s weather systems are so complex and interconnected that a butterfly flapping its wings on one side of the world could initiate a chain of events that ultimately led to a tornado on the other.
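This sensitivity to tiny differences is easy to see numerically. The sketch below integrates Lorenz’s 1963 equations with a crude Euler scheme and runs two trajectories whose starting points differ by one part in a hundred million; the parameter values (sigma = 10, rho = 28, beta = 8/3) are the ones Lorenz studied, but the step size, run length, and size of the nudge are arbitrary choices made for this illustration.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one small time step using Euler's method."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, steps):
    """Run the system forward a given number of steps, returning the final state."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0), 3000)
b = trajectory((1.0, 1.0, 1.0 + 1e-8), 3000)  # nudged by one part in 10^8
# Both runs stay bounded on the attractor, yet the minuscule nudge grows
# until the two trajectories bear no resemblance to each other.
distance = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

The point is not the particular numbers but the behavior: the system never flies off to infinity, yet the gap between the two runs grows by many orders of magnitude, which is exactly why long-range weather forecasts fail.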
Lorenz wasn’t the first to suggest that small changes in complex systems can have large and unpredictable effects. But he was perhaps the first to pull the idea into mainstream science. And this is where chaos theory might have stayed, were it not for the discovery of the “Mandelbrot Set” by mathematician Benoit Mandelbrot.
In 1979, Mandelbrot demonstrated how a seemingly simple equation could lead to images of infinite complexity. The more you zoomed in to the images his equation produced, the more detail became visible. As with Lorenz’s work, Mandelbrot’s research showed that very simple beginnings could lead to complex, unpredictable, and chaotic outcomes. But Lorenz, Mandelbrot, and others also revealed another intriguing aspect of chaos theory, and this was that complex systems can lead to predictable chaos. This may seem counterintuitive, but what their work showed was that, even where chaotic unpredictability reigns, there are always limits to what the outcomes might be.
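The “seemingly simple equation” really is simple: for each point c in the complex plane, repeatedly apply z → z² + c and ask whether z stays bounded. Points where it does belong to the Mandelbrot Set; points where it escapes are colored by how fast they escape, which is what produces the psychedelic images. A minimal sketch, using the standard escape threshold of |z| > 2 and an arbitrary iteration cap:

```python
def escape_count(c: complex, max_iter: int = 50) -> int:
    """Iterate z -> z*z + c from z = 0, returning the number of steps
    before |z| exceeds 2 (or max_iter if it never escapes)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# c = 0 never escapes (it is in the set); a point far from the origin
# escapes almost immediately.
inside = escape_count(0j)       # hits the iteration cap
outside = escape_count(2 + 2j)  # escapes within a few steps
```

Everything else about the famous images—the zooming, the infinite boundary detail—comes from evaluating this one loop at millions of points, which is why the set only became visible once computers were fast enough to do the arithmetic.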
Films from the Future Page 4