In other words, it’s good to talk when it comes to developing impactful new technologies. Or rather, it’s good to listen to and engage with each other, and explore mutually beneficial ways of developing technologies that benefit both their investors and society more broadly, and that don’t do more harm than good. Yet this is easier said than done. And there are risks. My AI executive was right to be concerned about engaging with people because sometimes people don’t like what they hear, and they decide to make your life difficult as a result. Yet there’s also a deep risk to holding back and not talking, and in the long run this is usually the larger of the two. Talking’s tough. But not talking is potentially more dangerous.
One way that people have tried to get around this “toughness” is a process called the Danish Consensus Conference. This is an approach that takes a small group of people from different backgrounds and perspectives and provides an environment where they can learn about an issue and its consequences before exploring productive ways forward. The power of the Danish Consensus Conference is that it gets people talking and listening to each other in a constructive and informed way. Done right, it overcomes many of the challenges of people not understanding an issue and reverting to protecting their interests out of ignorance. But it does have its limitations. And one of the biggest is that very few people have the time to go through such a time-consuming process. This gets to the heart of perhaps the biggest challenge in public engagement around emerging technologies: Most people are too busy working all hours to put food on the table and a roof over their heads, or caring for family, or simply surviving, to have the time and energy for somewhat abstract conversations about seemingly esoteric technologies. There’s simply not enough perceived value to them to engage.
So how do we square the circle here? How do we ensure that the relevant people are at the table when deciding how new technologies are developed and used, so we don’t end up in a farcical mess? Especially as we live in a world where everyone’s busy, and the technologies we’re developing, together with their potential impacts, are increasingly complex?
The rather frustrating answer is that there are no simple answers here. However, a range of approaches is emerging that, together, may move things along at least a bit. Despite being cumbersome, the Danish Consensus Conference remains relevant here, as do similar processes such as Expert & Citizen Assessment of Science & Technology (ECAST).155 But there are many more formal and informal ways in which people with different perspectives and insights can begin to talk, listen, and engage around emerging technologies. These include the growing range of opportunities that social media provides for peer-to-peer engagement (with the caveat that social media can shut down engagement as well as open it up). They also include venues and opportunities such as science museums, TED talks, science cafes, poetry slams, citizen science, and a whole cornucopia of other platforms.
The good news is that there are more ways than ever for people to engage around developing responsible and beneficial technologies, and to talk with each other about what excites them and what concerns them. And with platforms like Wikipedia, YouTube, and other ways of getting content online, it’s never been easier to come up to speed on what a new technology is and what it might do. All that’s lacking is the will and imagination of experts to use these platforms to facilitate effective engagement around the responsible and beneficial development of new technologies. Here, there are tremendous opportunities for entrepreneurially- and socially-minded innovators to meet people where they’re at, in and on the many venues and platforms they inhabit, and to nudge conversations toward a more inclusive, informed and responsible dialogue around emerging technologies.
Making progress on this front could help foster more constructive discussions around the beneficial and responsible development of new technologies. It would, however, mean people being willing to concede that they don’t have the last word on what’s right, and being open to not only listening to others, but changing their perspectives based on this. This goes for the scientists as well as everyone else, because, while scientists may understand the technical intricacies of what they do, just like Sidney Stratton, they are often not equally knowledgeable about the broader social implications of their work, as we see to chilling effect in our next movie: Inferno.
Chapter Eleven
INFERNO: IMMORAL LOGIC IN AN AGE OF GENETIC MANIPULATION
“If a plague exists, do you know how many governments would want it and what they’d do to get it?”
—Sienna Brooks
Decoding Make-Believe
In 1969, the celebrated environmentalist Paul Ehrlich made a stark prediction. In a meeting held by the British Institute of Biology, he claimed that, “By the year 2000, the United Kingdom will simply be a small group of impoverished islands, inhabited by some seventy million hungry people, of little concern to the other five to seven billion inhabitants of a sick world.”156
It’s tempting to quip that Ehrlich was predicting the fallout from Brexit and the UK’s departure from Europe, and his crystal ball was simply off by a few years. But what kept him up at night, and motivated the steady stream of dire warnings flowing from him, was his certainty that human overpopulation would lead to unmitigated disaster as we shot past the Earth’s carrying capacity.
I left the UK in 2000 to move to the US, and I’m glad to say that, at the time, the United Kingdom was still some way from becoming that “small group of impoverished islands.” Yet despite the nation’s refusal to bow to Ehrlich’s predictions, his writings on population crashes and control have continued to capture people’s imaginations over the years, including, I suspect, that of Dan Brown, the author and brains behind the movie Inferno.
I don’t know if Brown and Ehrlich have ever met. I’d like to think that they’d get on well. Both have a knack for a turn of phrase that transforms hyperbole into an art form. And both have an interest in taking drastic action to curb an out-of-control global human population.
The movie Inferno is based on the book of the same name by Dan Brown. It’s perhaps not the deepest movie here, but if you’re willing to crack open the popcorn and suspend disbelief, it successfully keeps you on the edge of your seat, as any good mindless thriller should. And it does provide a rather good starting point for examining the darker side of technological innovation—biotechnology in particular—when good intentions lead to seemingly logical, but not necessarily moral, actions.
Inferno revolves around the charismatic scientist and entrepreneur Bertrand Zobrist (played by Ben Foster). Zobrist is a brilliant biotechnologist and genetic engineer who’s devoted to saving the world. But he has a problem. Just like Ehrlich, Zobrist has done the math, and realized that our worst enemy is ourselves. In his genius-eyes, no matter what we do to cure sickness, improve quality of life, and enable people to live longer, all we’re doing is pushing the Earth ever further beyond the point where it can sustain its human population. And like Ehrlich, he sees a pending future of disease and famine and death, with people suffering and dying in their billions, because we cannot control our profligacy.
Zobrist genuinely wants to make the world a better place. But he cannot shake this vision of apocalyptic disaster. And he cannot justify using his science for short-term gains, only for it to lead to long-term devastation. So he makes a terrible decision. To save humanity from itself, he creates a genetically engineered virus that will wipe out much of the world’s population—plunging humanity back into the dark ages, but giving it the opportunity to reset and build a more sustainable future as a result. And because it seems that genius entrepreneurs can’t do anything simply, he arranges for the virus to be elaborately released at a set time in a mysterious location somewhere in Europe.
The problem is, the authorities are onto him—the authorities in this case being an entertainingly fictitious manifestation of the World Health Organization. As the movie starts, Zobrist is being pursued by WHO agents who chase him to the top of a bell tower in the Italian city of Florence where, rather than reveal his secrets, Zobrist jumps to his death. But in his pocket, he conveniently has a device that holds the key to where he’s hidden the virus.
This is where Dan Brown brings in his “symbologist” hero, Harvard-based Robert Langdon (Tom Hanks). Langdon, having proven himself to be rather good at decoding devilishly complex puzzles in the past, is the ideal person to follow the trail and save the world. But he quickly finds himself unwittingly wrapped up in a complex subterfuge where he’s led to believe the WHO are the bad actors, and it’s up to him and a young doctor, Sienna Brooks (Felicity Jones), to track down the virus before they get to it.
What follows is a whirlwind of gorgeous locations (Florence, Venice, Istanbul), misdirection, plot twists, and nail-biting cliffhangers. We learn that Sienna is, in fact, Zobrist’s lover, and has been using Langdon to find the virus so she can release it herself. We also learn that she’s fooled a clandestine global security organization (headed up by Harry Simms, who’s played perfectly by Irrfan Khan) into helping her, and they set about convincing Langdon he needs to solve the puzzle while evading the WHO agents.
The movie ends rather dramatically with the virus being contained just before it’s released. The bad folks meet a sticky end, Langdon saves the world, and everyone still standing lives happily ever after.
Without doubt, Inferno is an implausible but fun romp. Yet it does raise a number of serious issues around science, technology, and the future. Central to these is the question that Paul Ehrlich and Bertrand Zobrist share in common: Where does the moral responsibility lie for the future of humanity, and if we could act now to avoid future suffering—even though the short-term cost may be hard to stomach—should we? The movie also touches on the dangers of advanced genetic engineering, and it brings us back to a continuing theme in this book: powerful entrepreneurs who not only have the courage of their convictions, but the means to act on what they believe.
Let’s start, though, with the question of genetically engineering biological agents, together with the pros and cons of engineering pathogens to be even more harmful.
Weaponizing the Genome
In 2012, two groups of scientists published parallel papers in the prestigious journals Science157 and Nature158 that described, in some detail, how to genetically engineer an avian influenza virus. What made the papers stand out was that these scientists succeeded in making the virus more infectious, and as a result, far deadlier. The research sparked an intense debate around the ethics of such studies, and it led to questions about the wisdom of scientists publishing details of how to make pathogens harmful in a way that could enable others to replicate their work.
The teams of scientists, led by virologists Ron Fouchier and Yoshihiro Kawaoka, were interested in the likelihood of a highly pathogenic flu virus mutating into something that would present a potentially catastrophic pandemic threat to humans. The unmodified virus, referred to by the code H5N1, is known to cause sickness and death in humans, but thankfully it isn’t readily transmitted from person to person by coughs and sneezes, and this limits its spread quite considerably. But this doesn’t mean that the virus couldn’t naturally mutate to the point where it could successfully be transmitted by air. If this were to occur (and it’s certainly plausible), we could be facing a flu pandemic of astronomical proportions.
To get a sense of just how serious such a pandemic could be, we simply need to look back to 1918, when the so-called “Spanish flu” swept the world.159 That outbreak is estimated to have killed around fifty million people, or around 3 percent of the world’s population at the time. An equally virulent infectious disease unleashed on today’s far larger population would be the equivalent of over 200 million deaths, a mind-numbing number of people. And the relative death toll would likely be higher still, as modern global transport systems and the high numbers of people living close together in urban areas would substantially increase infection rates.
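The scaling behind that comparison is simple back-of-the-envelope arithmetic, and it's easy to check. The sketch below uses approximate, illustrative figures (roughly 1.8 billion people in 1918 and 7.8 billion today); the function name and exact numbers are my own assumptions, not figures from the text.

```python
def equivalent_deaths(historic_deaths, historic_population, current_population):
    """Scale a historic death toll to a modern population,
    holding the fraction of the population killed constant."""
    fatality_fraction = historic_deaths / historic_population
    return fatality_fraction * current_population


deaths_1918 = 50e6        # ~50 million Spanish flu deaths
population_1918 = 1.8e9   # ~1.8 billion people alive in 1918 (approximate)
population_today = 7.8e9  # ~7.8 billion people today (approximate)

# Fraction killed in 1918: about 2.8 percent, i.e. "around 3 percent"
print(deaths_1918 / population_1918)

# Equivalent toll today: roughly 217 million, i.e. "over 200 million"
print(equivalent_deaths(deaths_1918, population_1918, population_today))
```

Note that this holds the fatality fraction constant; as the paragraph above suggests, denser cities and global air travel could push the real fraction higher.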
It’s this sort of scenario that keeps virologists and infectious-disease epidemiologists awake at night, and for good reason. It’s highly likely that, one day, we’ll be facing a pandemic of this magnitude. Viruses mutate and adapt, and the ones that thrive are often those that can multiply and spread fast. Here, we know that there are combinations of properties that make viruses especially deadly, including human pathogenicity, lack of natural resistance in people, and airborne transmission. There are plenty of viruses that have one, or possibly two, of these features, yet there are relatively few that combine all three. But because of the way that evolution and biology work, it’s only a matter of time before some lucky virus hits the jackpot, much as we saw back in 1918.
Because of this, it makes sense to do everything we can to be prepared for the inevitable, including working out which viruses are likely to mutate into deadly threats (and how) so we can get our defenses in order before this happens. And this is what drove Fouchier, Kawaoka, and their teams to start experimenting on H5N1.
H5N1 is a virus that is deadly to humans, but it has yet to evolve into a form that is readily transmitted by air. What interested Fouchier and Kawaoka was how likely it was that such a mutation would appear, and what we could do to combat the evolved virus if and when this occurs. To begin to answer this question, they and their teams of scientists intentionally engineered a deadly new version of H5N1 in the lab, so they could study it. And this is where the ethical questions began to get tricky. This type of study is referred to as “gain-of-function” research, as it increases the functionality and potential deadliness of the virus. Maybe not surprisingly, quite a few people were unhappy with what was being done. Questions were asked, for instance, about what would happen if the new virus was accidentally released. This was not an idle question, as it turns out, given a series of incidents where infectious agents ended up being poorly managed in labs.160 But it was the decision to publicly publish the recipe for this gain-of-function research that really got people worried.
Both Science and Nature ended up publishing the research and the methods, but only after an intense international debate about the wisdom of doing so.161 However, the decision was, and remains, controversial. Proponents of the research argue that we need to be ready for highly pathogenic and transmissible strains of flu before they inevitably arise, and this means having the ability to develop a stockpile of vaccines. This in turn depends on having a sample of the virus to be protected against. But this type of research makes many scientists uneasy, especially given the challenges of preventing inadvertent releases.
Concerns like this prompted a group of scientists to release a Consensus Statement on the Creation of Potential Pathogens in 2014, calling for greater responsibility in making such research decisions.162 The statement largely focused on the unintended consequences of well-meaning research. But there was also a deeper-seated fear here: What if someone took this research and intentionally weaponized a pathogen?
This was one of the issues considered by the US National Science Advisory Board for Biosecurity as it debated drafts of the H5N1 gain-of-function papers in 2011. In a statement released on December 20, 2011, the NSABB proposed that the papers should not be published in their current form, recommending “the manuscripts not include the methodological and other details that could enable replication of the experiments by those who would seek to do harm.”163 This caused something of a furor among scientists at the time. The NSABB is an advisory body in the US and has no real teeth, yet its recommendations drew accusations of “censorship”164 in a scientific community that deeply values academic freedom.
The NSABB eventually capitulated, and supported the publication of both papers as they finally appeared in 2012—including the embedded “how-to” instructions for creating a virulent virus.165 But the question of intentionally harmful use remained. And it’s concerns like this that underpin the plot in Inferno.
Fouchier, Kawaoka, and their teams showed that it is, in principle, possible to take a potentially dangerous virus and engineer it into something even more deadly. To the NSABB and others, this raised a clear national security issue: What if an enemy nation or a terrorist group used the research to create a weaponized virus? Echoes of this discussion stretched back to the 2001 anthrax attacks in the US, where the idea of “weaponizing” a pathogenic organism became part of our common language. Since then, discussions over whether and how biological agents may be weaponized have become increasingly common.
Intuitively, genetically engineering a virus to weaponize it feels like it should be a serious threat. It’s easy to imagine the mayhem a terrorist group could create by unleashing an enhanced form of smallpox, Ebola, or even the flu. Thankfully, most biosecurity experts believe the risks here are low. Despite these imagined scenarios, it takes substantial expertise and specialized facilities to engineer a weaponized pathogen, and even then, it’s unclear that the current state of the science is good enough to create an effective weapon of terror. More to the point, most experts agree that there are far easier and cheaper ways of creating terror, or taking out enemy forces, than using advanced biology. Why spend millions of dollars and years of research on something that may not work, when you can do more damage with less effort using a cell phone and home-made explosives, or even a rental truck? The economics of weaponized viruses simply don’t work outside of science fiction thrillers and blockbuster movies. At least, not in a conventional sense.
And this is where Inferno gets interesting, as Zobrist is not a terrorist in the conventional sense. Zobrist’s aim is not to bring about change through terror, but to be the agent of change. And his mechanism of choice is a gain-of-function genetically engineered virus. Unlike the potential use of genetically modified pathogens by terrorists, or even nation-states, the economics of Zobrist’s decision actually make some sense, warped as they are. He envisions a cataclysmic future for humanity, brought about through out-of-control overpopulation, and he sees it as a moral imperative to use his expertise and wealth to help avoid it, albeit by rather drastic means.