Films from the Future


by Andrew Maynard


  As this is movie make-believe, the technology Zobrist ends up developing is exaggerated for dramatic effect. But it’s not that far-fetched. Certainly, we know from the work of Fouchier, Kawaoka, and others that it is possible to engineer viruses to be more deadly than their naturally-occurring counterparts. And we’re not that far from hypothetically being able to precisely design a virus with a specific set of characteristics, an ability that will only accelerate as we increasingly use cyber-based technologies and artificial-intelligence-based methods in genetic design. Because of these converging capabilities, when you strip away the hyperbolic narrative and cliffhanger scenarios of Inferno, there’s a kernel of plausibility buried in the movie that should probably worry us, especially in a world where powerful individuals are able to translate their moral certitude into decisive action.

  Immoral Logic?

  Some years ago, my wife gave me a copy of Daniel Quinn’s book Ishmael. The novel, which won the Turner Tomorrow Award in 1991, has something of a cult following. But I must confess I was rather disturbed by the arguments it promoted. What concerned me most, perhaps, was a pervasive logic running through the book that seemed to depend on “ends,” as defined by a single person, justifying extreme “means” of getting there. Echoing both Paul Ehrlich and Dan Brown, Quinn was playing with the idea that seemingly unethical acts in the short term are worth it for long-term prosperity and well-being, especially when, over time, the number of people benefitting from a decision far outnumbers those who suffer as a consequence.

  Ishmael is a Socratic dialogue between the “pupil”—the narrator—and his “teacher,” a gorilla that has the power of speech and reason. The book uses this narrative device to dissect human history and the alleged rise of tendencies that have led to a global culture of selfish greed, unsustainable waste, and out-of-control population growth. The book is designed to get the reader to think and reflect. In doing so, it questions our rights as humans above those of other organisms, and our obligations to other humans above those to the future of the Earth as a whole. Many of the underlying ideas in the book are relatively common in environmentalist thinking. What Ishmael begins to illuminate, though, is what happens when some of these ideas are taken to their logical conclusions. One of those conclusions is that, if the consequence of a growing human population and indiscriminate abuse of the environment is a sick and dying planet, anything we do now to curb our excesses is justified by the future well-being of the Earth and its many ecosystems. The analogy used by Quinn is that of a surgeon cutting out a malignant cancer to save the patient, except that, in this case, the patient is the planet, and humanity is both the cancer and the surgeon.

  A similar philosophy, of taking radical action in the present to save the future, runs through Ehrlich’s 1968 book, The Population Bomb.166 As a scientist and environmentalist, Ehrlich was appalled by where he saw the future of humanity and Planet Earth heading. As the human population increased exponentially, he believed that, left unchecked, it would soon exceed the carrying capacity of the planet. If this happened, he believed we would be plunged into a catastrophic cycle of famine, disease, and death that would be far worse than any preventative actions we might take.

  Ehrlich opens his book with a dramatic account of personally experiencing localized overpopulation in Delhi. The experience impressed on him that, if this level of compressed humanity were to spread across the globe (as he believed it would), we would be responsible for making a living hell for future generations, an outcome he felt it was his moral duty to do whatever he could to prevent.

  In the book, Ehrlich goes on to explore ways in which policies could be established to avoid what he saw as an impending disaster. He also looked at ways in which people might be persuaded to change their habits and beliefs in an attempt to dramatically curb population growth. But he considered the threat too large to stop at political action and persuasion. To him, if these failed, drastic measures were necessary. He lamented, for instance, that India had not implemented a controversial sterilization program for men as a means of population control. And he talked of triaging countries needing aid to avoid famine and disease, by helping only those that could realistically pull themselves around while not wasting resources on “hopeless cases.”

  Ehrlich’s predictions and views were both extreme and challenging. And in turn, they were challenged by others. Many of his predictions have not come to pass, and since publication of The Population Bomb, Ehrlich has pulled back from some of his more extreme proposals. There are many, though, who believe that the sheer horror of his predictions and his proposed remedies scared a generation into taking action before it was too late. Even so, we are still left with a philosophy which, much like the one espoused in Ishmael, suggests that one person’s prediction of pending death and destruction has greater moral weight than the lives of the people they are willing to sacrifice to save future generations.

  It is precisely this philosophy that Dan Brown explores through the character of Zobrist in Inferno. Superficially, Zobrist’s arguments seem to make sense. Using an exponential growth model of global population, he predicts a near future where there is a catastrophic failure of everything we’ve created to support our affluent twenty-first-century lifestyle. Following his arguments, it’s not hard to imagine a future where food and water become increasingly scarce, where power systems fail, leaving people to the mercy of the elements, where failing access to healthcare leads to rampant disease, and where people are dying in the streets because they are starving, sick, and have no hope of rescue.
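
  The arithmetic behind Zobrist’s projection is easy to sketch. Here is a minimal illustration in Python of what a constant growth rate implies; the growth rate and starting population are round numbers assumed for the example, not figures from the book or the film.

```python
import math

# A minimal sketch of naive exponential population projection.
# The 1.1% annual growth rate and eight-billion baseline are assumed
# round numbers for illustration, not figures from Inferno or Ehrlich.

def project(population: float, annual_rate: float, years: int) -> float:
    """Population after a number of years of constant compound growth."""
    return population * (1 + annual_rate) ** years

p0 = 8e9      # assumed starting population
r = 0.011     # assumed constant annual growth rate

for years in (25, 50, 100, 200):
    print(f"After {years:>3} years: {project(p0, r, years) / 1e9:5.1f} billion")

# Doubling time under constant compound growth: ln(2) / ln(1 + r)
print(f"Doubling time: {math.log(2) / math.log(1 + r):.0f} years")
```

On a fixed-rate curve like this, catastrophe looks inevitable; the whole argument turns on whether the rate really stays fixed.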

  As well as being a starkly sobering vision, this is also a plausible one—up to a point. We know that when animal populations get out of balance, they often crash. And research on complex systems indicates that the more complex, interdependent, and resource-constrained a system gets, the more vulnerable it can become to catastrophic failure. It follows that, as we live increasingly at the limits of the resources we need to sustain nearly eight billion people across the planet, it’s not too much of a stretch to imagine that we are building a society that is very vulnerable indeed to failing catastrophically. But if this is the case, what do we do about it?

  Early on in Inferno, Zobrist poses a question: “There’s a switch. If you throw it, half the people on earth will die, but if you don’t, in a hundred years, then the human race will be extinct.” It’s an extreme formulation of the ideas of Quinn and Ehrlich, and not unlike a scaled-up version of the Trolley Problem that philosophers of artificial intelligence and self-driving cars love to grapple with. But it gets to the essence of the issue at hand: Is it better to kill a few people now and save many in the future, or to do nothing, condemning billions to a horrible death and potentially the human race to extinction?

  Ehrlich and Quinn suggest that it’s moral cowardice to take the “not my problem” approach to this question. In Inferno, though, Brown elevates the question from one of philosophical morality to practical reality. He gives the character of Zobrist the ability to follow through on his convictions, and to get out of his philosophical armchair to quite literally throw the switch, believing he is saving humanity as he does so.

  The trouble is, this whole scenario, while easy to spin into a web of seeming rationality, is deeply flawed. Its flaws lie in the same conceits we see in calls for action based on technological prediction. It assumes that the future can be predicted from the exponential trends of the past (a misconception that was addressed in chapter nine and Transcendence), and it amplifies, rather than moderates, biases in human reasoning and perception. Reasoning like this creates an artificial certainty around the highly uncertain outcomes of what we do, and it justifies actions that are driven by ideology rather than social responsibility. It also assumes that the “enlightened,” whoever they are, have the moral right to act, without consent, on behalf of the “unenlightened.”
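
  To make the first of those flaws concrete, here is a rough sketch of how an exponential extrapolation and a self-limiting logistic curve can agree early on and then diverge wildly. The baseline, growth rate, and carrying capacity are assumptions invented for this illustration, not real demographic data.

```python
import math

# Illustrative only: exponential vs. logistic growth from the same
# starting point. All parameters are invented for this sketch.

def exponential(p0: float, r: float, t: float) -> float:
    """Unchecked compound growth."""
    return p0 * math.exp(r * t)

def logistic(p0: float, r: float, k: float, t: float) -> float:
    """Growth that slows as the population approaches a capacity k."""
    a = (k - p0) / p0
    return k / (1 + a * math.exp(-r * t))

p0, r, k = 3e9, 0.02, 10e9  # assumed 1960s-style baseline, rate, capacity

for t in (0, 20, 60, 120):
    e = exponential(p0, r, t) / 1e9
    l = logistic(p0, r, k, t) / 1e9
    print(f"year {t:>3}: exponential {e:5.1f}B   logistic {l:4.1f}B")
```

Both curves fit the early data equally well, but only one of them ends in catastrophe. Treating the exponential extrapolation as the only possibility is exactly the misconception described above.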

  In the cold light of day, what you end up with by following such reasoning is something that looks more like religious terrorism, or the warped actions of the Unabomber Ted Kaczynski, than a plan designed to create social good.

  This is not to say we are not facing tough issues here. Both the Earth’s human population and our demands on its finite resources are increasing in an unsustainable way. And this is leading to serious challenges that should, under no circumstances, be trivialized. Yet, as a species, we are also finding ways to adapt and survive, and to overcome what were previously thought to be immovable barriers to what could be achieved. In reality, we are constantly moving the goalposts of what is possible through human ingenuity. The scientific and social understanding of the 1960s was utterly inadequate for predicting how global science and society would develop over the following decades, and as a result, Ehrlich and others badly miscalculated both the consequences of what they saw occurring and the measures needed to address them. Those developments included advances in artificial fertilizers and plant breeding that transformed the ability of agriculture to support a growing population. We continue to make strides in developing and using technology to enable a growing number of people to live sustainably on Earth, so much so that we simply don’t know what the upper limit of the planet’s sustainable human population might be. In fact, perhaps the bigger challenge today is not providing people with enough food, water, and energy, but overcoming social and ideological barriers to implementing technologies in ways that benefit this growing population.

  Imagine now that, in 1968, a real-life Zobrist had decided to act on Ehrlich’s dire predictions and indiscriminately rob people of their dignity, autonomy, and lives, believing that history would vindicate them. It would have been a morally abhorrent tragedy of monumental proportions. This is part of the danger of confusing exponential predictions with reality, and mixing them up with ideologies that adhere religiously to a narrow vision of the future, to the point that their believers are willing to kill for the alleged long-term good of society.

  Yet while such thinking can lead to what I believe is an immoral logic, we cannot afford to dismiss the possibility that inaction in the present may lead to catastrophic failures in the future. If we don’t get our various acts together, there’s still a chance that a growing population, a changing climate, and human greed will lead to future suffering and death. As we develop increasingly sophisticated technologies, these only add to the uncertainty of what lies around the corner. But if we’re going to eschew following an immoral logic, how do we begin to grapple with these challenges?

  The Honest Broker

  Perhaps one of the most difficult challenges scientists (and academics more broadly) face is knowing when to step out of the lab (or office) and into the messy world of politics, advocacy, and activism. The trouble is, we’re taught to question assumptions, to be objective, and to see issues from multiple perspectives. As a result, many scientists see themselves as seekers of truth, but skeptical of the truth. Because of this, many of us are uneasy about using our work to make definitive statements about what people should or should not be doing. To be quite frank, it feels disingenuous to set out to convince people to act as if we know the answers to a problem, when in reality all we know is the limits of our ignorance.

  There’s something else though, that makes many scientists leery about giving advice, and that’s the fear of losing the trust and respect of others. Many of us have an almost pathological paranoia of being caught out in an apparent lie if we make definitive statements in public, and for good reason; there are few problems in today’s society that have cut-and-dried solutions, and to claim that there are smacks of charlatanism. More than this, though, there’s a sense within the culture of science that making definitive statements in public is more about personal ego than professional responsibility.

  The unwritten rule here sometimes seems to be that scientists should stick to what they’re good at—asking interesting questions and discovering interesting things—and leave it to others to decide what this means for society more broadly. This is, I admit, something of an exaggeration. But it does capture a tension that many scientists grapple with as they try to reconcile their primary mission to generate new knowledge with their responsibility as a human being to help people not make a complete and utter mess of their lives.

  Not surprisingly, these lines become blurred in areas where research is driven by social concerns. As a result, there’s a strong tradition in areas like public health of research being used to advocate for socially beneficial behaviors and policies. And scientists focusing on environmental sustainability and climate change are often working in these areas precisely because they want to make a difference. To many of them, their research isn’t worth their time if it doesn’t translate into social impact, and that brings with it a responsibility to advocate for change.

  This is the domain that scientists like Paul Ehrlich and Dan Brown’s Zobrist inhabit. They are engaged in their science because they see social and environmental problems that need to be solved. To many researchers in this position, their science is a means to a bigger end, rather than being an end in itself. In fact, I suspect that many researchers in these areas of study would argue that there is a particular type of immorality associated with scientists who, with their unique perspective, can see an impending disaster coming, and decide to do nothing about it.

  Here, the ethics of the scientist-advocate begin to make a lot of sense. Take this thought experiment, for instance. Imagine your research involves predicting volcanic eruptions (just to make a change from population explosions and genetically engineered viruses), and your models strongly indicate that the supervolcano that lies under Yellowstone National Park could erupt sometime in the next decade. What should you do? Do nothing, and you potentially condemn millions of people—maybe more—to famine, poverty, disease, and death. Instinctively, this feels like the wrong choice, and I suspect that few scientists would simply ignore the issue. But they might say that, because of the uncertainty in their predictions, more research is needed, including more research funding, and maybe a conference or two to develop the science further and argue over the results. In other words, there’d probably be lots of activity, but very little action that would help those people who would be affected if such an eruption did occur.

  To some scientists, however, this would be ethically untenable, and an abdication of responsibility. To them, the ethical option would be to take positive action: Raise awareness, shock people into taking the risk seriously, hit the headlines, give TED talks, make people sit up and listen and care, and, above all, motivate policy makers to do something. Because—so the thinking would go—even if the chances are only one in a thousand of the eruption happening, it’s better to raise the alarm and be wrong than stay silent and be right.

  This gets to the heart of the ethics of science-activism. It’s what lies behind the work of Paul Ehrlich and others, and it’s what motivates movements and organizations that push for social, political, and environmental change to protect the future of the planet and its inhabitants. And yet, compelling as the calculus of saved future lives is, there is a problem. Pushing for action based on available evidence always comes with consequences. Sadly, there’s no free pass if you make a mistake, or the odds don’t fall in your favor. Going back to the Yellowstone example, a major eruption could well render large swaths of the mid-US uninhabitable. Agriculture would be hit hard, with air pollution and localized climate shifts making living conditions precarious for tens of millions of people. On the other hand, preparing for a potential eruption would most likely involve displacing millions of people, possibly leading to coastal overcrowding, loss of jobs, homelessness, and a deep economic recession. The outcomes of the precautionary actions—irrespective of whether the predictions came true or not—would be devastating for some. They may be seen as worth it in the long run if the eruption takes place. But if it doesn’t, the decision to act will have caused far more harm than inaction would have. Now imagine having the burden of this on your shoulders, because you had the courage of your scientific convictions, even though you were wrong, and it becomes clearer why it takes a very brave scientist indeed to act on the potential consequences of their work.
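
  The trade-off in the Yellowstone example can be put in crude expected-value terms. The following sketch is purely illustrative; every number in it is an assumption invented for the example, and the real quantities are precisely the things nobody knows.

```python
# A crude expected-harm comparison for the "raise the alarm" calculus.
# All numbers are invented for illustration; none come from any real
# risk assessment of Yellowstone or anywhere else.

p_eruption = 1 / 1000            # assumed chance of eruption this decade
harm_if_unprepared = 10_000_000  # assumed people harmed by an unannounced eruption
harm_of_preparing = 5_000_000    # assumed people harmed by mass precautionary action

expected_harm_inaction = p_eruption * harm_if_unprepared
expected_harm_action = harm_of_preparing  # paid whether or not it erupts

print(f"Expected harm if we do nothing: {expected_harm_inaction:>12,.0f}")
print(f"Expected harm if we act now:    {expected_harm_action:>12,.0f}")

# With these guesses, precaution causes roughly 500 times more expected
# harm than waiting; push p_eruption above one-in-two and the ranking
# flips. The conclusion swings entirely on numbers no one can pin down.
```

Which is why “better to raise the alarm and be wrong than stay silent and be right” is not the free pass it appears to be.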

  This is, obviously, an extreme and somewhat contrived example. But it gets to the core of the dilemma surrounding individuals acting on their science, and it underlies the tremendous social responsibility that comes with advocating for change based on scientific convictions. To make matters worse, while we all like to think we are rational beings—scientists especially—we are not. We are all at the mercy of our biases and beliefs, and all too often we interpret our science through the lens of these. And this means that when an individual, no matter how smart they are, decides that they have compelling evidence that demands costly and disruptive action, there’s a reasonably good chance that they’ve missed something.

  So how do we get out of this bind, where conscientious scientists seem to be damned if they do, and damned if they don’t? The one point of reasonable certainty here is that it’s dangerous for an individual to push an agenda for change on their own. It’s just too easy for someone to be blinded by what they believe is right and true, and as a result miss ways forward that are more socially responsible. At the same time, it’s irresponsible to suggest that scientists should be seen and not heard, especially when they have valuable insights into emerging risks and ways to avoid them.

 
