This is not a new challenge, of course. Ironically, one of our defining features as a species is an unerring ability to label those we don’t like, or feel threatened by, as “less than human.” Through some of the most sordid episodes in human history, distinctions of convenience between “human” and “not human” have been used to justify acts of atrocity; it’s easier to justify inhuman acts when you claim that their targets aren’t fully human in the first place.
We can surely learn from the episodes of dehumanization that have led to slavery, repression, discrimination, and other forms of abuse. If we cannot, cloning and other technologies that blur our biological identity are likely to further reveal the darker side of our “humanity” as we attempt to separate those we consider worthy of the thirty articles of the Universal Declaration of Human Rights from those we don’t.
But in a future where we can design and engineer people in ways that extend beyond our biological origins, how do we define what being “human” means?
As it turns out, this is a surprisingly hard question to answer. However you approach it, and whatever intellectual arguments you use, it’s too easy to come down to an “us versus them” position, and to use motivated reasoning to justify why our particular brand of humanity is the right one. The trouble is, we’re conditioned to recognize humanity as being “of us” (and whoever the “us” is gets to define this). And we have a tendency to use this arbitrary distinction to protect ourselves from those we consider to be “not us.”
The possibility of human reproductive cloning begins to reveal the moral complexities around having the ability to transcend our biological heritage. If we do eventually end up cloning people, the distinction between “like us” (and therefore fully human) and “not like us” (and therefore lacking basic human rights) is likely to become increasingly blurred. But this is only the start.
In 2016, a group of scientists launched a ten-year project to construct a synthetic human genome from scratch. The project aims to build all three billion base pairs of the human genome in the laboratory, from common lab chemicals, and so create the complete blueprint for a fully functioning person with no biological parents or heritage. It is the first step in an ambitious enterprise to create a completely synthetic human being within twenty years; a living, breathing person designed by computer and grown in the lab.31 If successful (and I must confess that I’d be very surprised if this can be achieved within twenty years), this project will make the moral challenges of cloning seem like child’s play. At least a clone has its origins in a living person. But what will we do if and when we create a being who is like you and me in every single way, apart from where they came from?
This may seem like a rather distant moral dilemma. But it is foreshadowed by smaller steps toward having to rethink what we mean by “human.” As we’ll see in later chapters, mind-enhancing drugs are already beginning to blur the lines between what are considered “normal” human abilities and what tips us over into technologically enhanced “abnormal” abilities. Movies like Ghost in the Shell (chapter seven) push this further by questioning the boundaries between machine-enhanced humans and machines with human tendencies. And when we get to the movie Transcendence (chapter nine), we’re looking at a full-blown melding between a human mind and a machine. In each of these cases, using technologies to alter people or to create entities with human-like qualities challenges us with two questions in particular: What does it mean to be “human”? And what are the rights and expectations of entities that don’t fit what we think of as human, yet are capable of thinking and feeling, of having dreams and hopes, and of suffering pain and loss?
The seemingly easy way forward here is to try to develop a definition of humanity that encompasses all of our various future creations. But I’m not sure that this will ultimately succeed, if only because this still reflects a way of thinking that mentally divides the world into “human” and “not human.” And with this division comes the temptation to endow the former with all the rights that come with being human and an assumed right to exploit the latter, simply because we don’t think of them as being part of the same privileged club.
Rather, I suspect that, at some point, we will need to transcend the notion of “human” and instead focus on rights, and an understanding of “worth” and “validity” that goes far beyond what we bestow on ourselves as Homo sapiens.
Making this transition will not be easy. But we’ve already made a start in how we think about rights as they apply to other species, and the responsibility we have toward them. Increasingly, there is an awareness that being human does not come with a God-given right to dominate, control, and indiscriminately use other species to our own advantage. But translating this awareness into action is difficult, and is often colored by our own ideas of worth and value. In effect, we easily slip into defining what is important by what we think of as being important. For instance, we place greater value on species that are attractive or interesting to us, and on animals and plants that inspire awe in us. We place more value on species we believe are important to the sustainability of our world, or on what we perhaps arrogantly call “higher” species, meaning those that are closer relatives to us on the evolutionary ladder. And we especially value species that demonstrate human-like intelligence.
In other words, our measures of what has worth inevitably come down to what has worth to us.
This is of course quite understandable. As a species, we are at the top of the food chain, and we’re biologically predisposed to do everything we can to stay there. But this doesn’t help lay down a moral framework for how we behave toward entities that do not fit our ideas of what is worthy.
This will be a substantial challenge if and when we create entities that threaten our humanness, and by implication, the power we currently wield as a species. For instance, if we did at some point produce human clones, they would be our equals in terms of biological form, function, awareness, and intellect. But we would know they were different, and would have to decide how to respond to this. We could, of course, grant them rights; we might even declare them to be fully human, or at least honorary members of the human club. But here’s the kicker: What right would we have to do this? What natural authority do we have that allows us to decide the fate of creations such as these? This is a deeply challenging question when it comes to entities that are almost, but not quite, the same as us. But it gets even more challenging when we begin to consider completely artificial entities such as computer- or robot-based artificial intelligence.
We’ll come back to this in movies like Minority Report (chapter four) and Ghost in the Shell (chapter seven). But before we do, there’s one other insight embedded in Never Let Me Go that’s worth exploring, and that’s how easily we fall into justifying technologies that devastate a small number of lives, because we tell ourselves we cannot live without them.
Too Valuable to Fail?
Whichever way you look at it, the society within which Never Let Me Go is situated doesn’t come off that well. To most other people in the movie, the clones are seen as little more than receptacles for growing living organs, waiting for someone to claim them.
In contrast, the staff at Hailsham are an anomaly, a blip in the social conscience that is ultimately drowned out by the irresistible benefits the Human Donor Program offers. But the morality behind this anomaly is, not to put too fine a point on it, rather insipid. Madame, Miss Emily, and others appear to care for the clones, and want to prove that they have human qualities and are therefore worthy of something closer to “human” dignity. But ultimately, they give way to resignation in a society that sees the donor program as too valuable to end.
As Tommy and Kathy visit Miss Emily to plead for their lives by showing that they are truly in love, we learn that they never had a hope. Miss Emily, Madame, and others were striving to appease their consciences by showing that the clones had souls, that they were human. Maybe they thought they could somehow use this to change how the clones were treated. But the awful truth is that Miss Emily never believed she could change how society saw the clones—as living caretakers of organs for others. There was never any hope in her mind that the children would be treated as anything other than a commodity. Certainly, she cared for them. But she didn’t care enough to resist an atrocity that was unfolding in front of her eyes.
All of this—the despair, the injustice, the inhumanity, the cruelty—pours out of Tommy as he weeps and rages in the headlights of Kathy’s car. And, standing with him, we know in our hearts that this society has sold itself out to a technology that rips people’s lives and dreams away from them, so that those with the privilege of not being labeled “clone” can live longer and healthier lives.
This is the message that stays with me long after watching Never Let Me Go—that if we are not careful, technology has the power to rob us of our souls, even as it sustains our bodies; not because it changes who we are, but because it makes us forget the worth of others. It’s a message that’s directly relevant to human cloning, should we ever develop this technology to the point that it’s widely used. But it also applies to other technologies that blur our definitions of “worth,” including technologies that claim to predict how someone will behave, as we’ll see in our next movie: Minority Report.
Chapter Four
MINORITY REPORT: PREDICTING CRIMINAL INTENT
“If there’s a flaw, it’s human—it always is.”
—Danny Witwer
Criminal Intent
There’s something quite enticing about the idea of predicting how people will behave in a given situation. It’s what lies beneath personality profiling and theories of preferred team roles. But it also extends to trying to predict when people will behave badly, and taking steps to prevent this.
In this vein, I recently received an email promoting a free online test that claims to use “‘Minority Report-like’ tech to find out if you are ‘predisposed’ to negative or bad behavior.” The technology I was being encouraged to check out was an online survey being marketed by the company Veris Benchmark under the trademark “Veris Prime.” It claimed that “for the first time ever,” users had an “objective way to measure a prospective employee’s level of trustworthiness.”
Veris’ test is an online survey which, when completed, provides you (or your employer) with a “Trust Index.” If you have a Trust Index of eighty to one hundred, you’re relatively trustworthy, but below twenty or so, you’re definitely in danger of showing felonious tendencies. At the time of writing, the company’s website indicates that the Trust Index is based on research on a wide spectrum of people, although the initial data that led to the test came from 117 white-collar felons. In other words, when the test was conceived, it was assumed that answering a survey in the same way as a bunch of convicted felons is a good indicator that you are likely to pursue equally felonious behavior in the future.
Naturally, I took the test. I got a Trust Index of nineteen. This came with a warning that I’m likely to regularly surrender to the temptation of short-term personal gain, including cutting corners, stretching the truth, and failing to consider the consequences of my actions.
Sad to say, I don’t think I have a great track record of any of these traits; the test got it wrong (although you’ll have to trust me on this). But just to be sure that I wasn’t an outlier, I asked a few of my colleagues to also take the survey. Amazingly, it turns out that academics are some of the most felonious people around, according to the test. In fact, if the Veris Prime results are to be believed, real white-collar felons have some serious competition on their hands from within the academic community. One of my colleagues even managed to get a Trust Index of two.
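To see why a test built this way can misfire, it helps to sketch how such a score might be computed. To be clear, Veris has not published its method, so everything that follows is fabricated purely to illustrate the general idea: the five-question survey, the felon_profile numbers, and the distance-based scoring are all my own assumptions, written here as a short piece of Python.

# A toy illustration of scoring-by-similarity, NOT Veris Prime's actual
# algorithm (which is unpublished). All numbers are fabricated.
import numpy as np

# Hypothetical five-question survey, answered on a 1-to-5 scale.
# felon_profile stands in for the mean answers of a calibration group
# of convicted white-collar felons.
felon_profile = np.array([4.2, 1.8, 4.5, 2.1, 4.0])

def trust_index(answers):
    """Map distance from the felon profile onto a 0-100 scale:
    the closer your answers sit to the profile, the lower your score."""
    # Each answer can differ from the profile by at most 4 points,
    # which bounds the normalization.
    max_dist = np.linalg.norm(np.full_like(felon_profile, 4.0))
    return round(100 * np.linalg.norm(answers - felon_profile) / max_dist, 1)

# A curious, independent-minded academic may honestly give answers
# (questions rules, trusts their own judgment) that happen to sit
# close to the felon profile...
academic = np.array([4.0, 2.0, 4.3, 2.5, 3.8])
# ...while a cautious rule-follower sits far from it.
rule_follower = np.array([1.5, 4.5, 1.8, 4.2, 1.6])

print(trust_index(academic))       # roughly 6: flagged as high-risk
print(trust_index(rule_follower))  # roughly 63: "trustworthy"

With toy numbers like these, an honest, independent-minded respondent lands close to the felon profile and scores terribly, while a compliant rule-follower sails through. The calibration group, not the respondent, is doing most of the work.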
One of the many issues with the Veris Prime test is the training set it uses. It seems that many of the traits that are apparently associated with convicted white-collar criminals—at least according to the test—are rather similar to those that characterize curious, independent, and personally motivated academics. It’s errors like this that can easily lead us into dangerous territory when it comes to attempting to use technology to predict what someone will do. But even before this, there are tough questions around the extent to which we should even be attempting to use science and technology to predict and prevent criminal behavior. And this leads us neatly into the movie Minority Report.
Minority Report is based on the Philip K. Dick short story of the same name, published in 1956. The movie centers on a six-year-old crime prevention program in Washington, DC, that predicts murders before they occur, leading to the arrest and incarceration of “murderers” before they can commit their alleged future crime. The “Precrime” program, as it’s aptly called, is so successful that it has all but eliminated murder in the US capital. And as the movie opens, there’s an upcoming ballot to take it nationwide.
The Precrime program in the movie is astoundingly successful—at least on the surface. The program is led by Chief John Anderton (played by Tom Cruise). Anderton’s son was abducted six years previously while in his care, and was never found. The abduction destroyed Anderton’s personal life, leaving him estranged from his partner, absorbed in self-pity, and dependent on illegal street narcotics. Yet despite his personal pain, he’s a man driven to ensure that others don’t have to suffer a similar fate. Because of this, he is deeply invested in the Precrime program, and since its inception has worked closely with the program’s director and founder, Lamar Burgess (Max von Sydow), to ensure its success.
The technology behind Precrime in the movie is fanciful, but there’s a level of internal consistency that helps it work effectively within the narrative. The program depends on three “precogs”: genetically modified, isolated, and heavily sedated humans who have the ability to foresee future murders. By monitoring and visualizing their neural activity, the Precrime team can see snatches of the precogs’ thoughts, and use these to piece together where and when a future murder will occur. All they then have to do is swoop in and arrest the pre-perpetrator before they’ve committed the crime. And, because the precogs’ predictions are trusted, those arrested are sentenced and incarcerated without trial. This incarceration involves being fitted with a “halo”—a neural device that plunges the wearer helplessly into their own incapacitating inner world, although whether this is a personal heaven or hell we don’t know.
As the movie opens, we’re led to believe that this breakthrough in crime prevention is a major step forward for society. Murder’s a thing of the past in the country’s capital, its citizens feel safer, and those with murderous tendencies are locked away before they can do any harm. That is, until Chief Anderton is tagged as a pre-perp by the precogs.
Not surprisingly, Anderton doesn’t believe them. He knows he isn’t a murderer, and so he sets out to discover the flaw in the system. In doing so, he begins to uncover evidence that there’s something rotten in the very program he’s been championing. On his journey, he learns that the precogs are not, as is widely claimed, infallible. Sometimes one of them sees a different sequence of events in the future, a minority report, which is conveniently scrubbed from the records in favor of the majority perspective.
Believing that his minority report—the account that shows he’s innocent of a future murder—is still buried in the mind of the most powerful precog, Agatha (played by Samantha Morton), he breaks into Precrime and abducts her. In order to extract the presumed minority report she’s carrying, he takes her to a seedy pleasure joint that uses recreational brain-computer interfaces to have her mind “read.” And he discovers, to his horror, that there is no minority report; all three precogs saw him committing the same murder in the near future.
Anderton does, however, come across an anomaly: a minority report embedded in Agatha’s memory of a murder that is connected with an earlier inconsistency he discovered in the Precrime records.
Still convinced that he’s not a murderer, Anderton sets about tracking down his alleged victim in order to prove his innocence, taking Agatha with him.32 He traces the victim to a hotel, and on entering his room, Anderton discovers the bed littered with photos of the man with young children, including his son. Suddenly it all fits into place. The trail has led Anderton to the one person he would kill without hesitation if he got the chance. Yet, even as Anderton draws his gun on his son’s abductor, Agatha pleads with him to reconsider. Despite her precognition, she tries to convince him that the future isn’t set, and that he has the ability to change it. And so Anderton overcomes his desire for revenge and lowers his weapon.
It turns out Anderton was being set up. The victim—who wasn’t Anderton’s son’s abductor—was promised a substantial payout for his family if he convinced Anderton to kill him. When Anderton refuses, the victim grabs the gun in Anderton’s hand, presses it against himself, and pulls the trigger. As predicted, Anderton is identified as the killer, and is arrested, fitted with a halo, and put away.
With Anderton’s arrest, though, a darker undercurrent of events begins to emerge around the precog program. It turns out that Lamar Burgess, the program’s creator, has a secret that Anderton was in danger of discovering—an inconvenient truth that, to Lamar, stood in the way of what he believed was a greater social good. And so, to protect himself and the program, Lamar finds a way to use the precogs to silence Anderton.
As the hidden story behind the precog program is revealed, we discover that Agatha was born to a junkie mother, and was herself a terminally ill addict from birth. Agatha and other addict-babies became part of an ethically dubious experimental program that used advanced genetic engineering to search for a cure. In this program, it was discovered that, in Agatha’s case, a side effect of the experiments was an uncanny ability to predict future murders. Given their serendipitous powers, Agatha and two other subjects were sedated, sequestered away, wired up, and plugged into what was to become the precog program. But Agatha’s mother cleaned herself up and demanded her daughter back, threatening the very core of this emerging technology.