
Films from the Future


by Andrew Maynard


  Without a doubt, as intimate body-enhancing technologies become more accessible, and consumers begin to clamor after what (bio)tech companies are producing, regulations are going to have to change and adapt to keep up. Hopefully this catch-up will include laws that protect consumers’ quality of life for the duration of having machine enhancements surgically attached or embedded. That said, there is a real danger that, in the rush for short-term gratification, we’ll see pushback against regulations that make it harder for consumers to get the upgrades they crave, and more expensive for manufacturers to produce them.

  This is a situation where Ghost in the Shell provides what I suspect is a deeply prescient foreshadowing of some of the legal and social challenges we face over autonomy, as increasingly sophisticated enhancements become available. The question is, will anyone pay attention before we’re plunged into an existential crisis around who we are, and who owns us?

  One approach here is to focus less on changing ourselves, and instead to focus on creating machines that can achieve what we only dream of. But as we’ll see with the next movie, Ex Machina, this is a pathway that also comes with its own challenges.

  Chapter Eight

  EX MACHINA: AI AND THE ART OF MANIPULATION

  “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.”

  —Nathan Bateman

  Plato’s Cave

  Over two millennia ago, the Greek philosopher Plato wrote The Republic. It’s a book that continues to be widely influential. And while it’s not widely known for its insights into advanced technologies, it’s a book that, nevertheless, resonates deeply through the movie Ex Machina.

  Like Ghost in the Shell (chapter seven), Ex Machina explores the future emergence of fully autonomous AI. But unlike Ghost, the movie develops a plausible narrative that is set in the near future. And it offers a simultaneously thrilling and frightening glimpse into what a future fully autonomous AI might look like. Forget the dystopian worlds of super-intelligent AIs depicted in movies like The Terminator.101 Ex Machina is far more chilling because it exposes how what makes us human could ultimately leave us vulnerable to our cyber creations.

  But before getting into the movie, we need to take a step back into the world of Plato’s Republic.

  The Republic is a Socratic dialogue (Plato was Socrates’ pupil) that explores the nature of justice, social order, and the role of philosophers in society. It was written at a time when philosophers had a certain standing, and they clearly wanted to keep it that way. Even though the piece was written in 381 BCE, it remains remarkably fresh and relevant to today’s democratic society, reflecting how stable the core foundations of human nature have remained for the past two-plus millennia. Yet, enduring as The Republic as a whole is, there’s one particular section—just a few hundred words at the beginning of Book VII—that is perhaps referred to more today than any other part of the work. And this is Plato’s Allegory of the Cave.

  Plato starts this section of the book with “…let me show in a figure how far our nature is enlightened or unenlightened…”102 He goes on to describe a cave, or “underground den,” where people have been living since their childhood. These people are deeply constrained within the environment in which they live. They are chained so they cannot move or turn their heads, and they can only see the wall facing them.

  Behind and above the cave’s inhabitants there is another wall, and beyond that, a fire whose light casts shadows into the cave. Along this wall walk puppeteers carrying carvings of animals and other objects, which appear as animated shadows on the wall before the prisoners. Further beyond the fire, there is an opening to the cave, and beyond this, the sunlit world.

  In this way, Plato sets the scene where the shadows cast into the cave are the only reality the prisoners know. He then asks what it would be like if one of them were to be released, so they could turn and see the fire and the puppeteers carrying the objects, and realize that what they had thought of as real was a mere shadow of a greater reality. And what if they were then dragged into the light that lay beyond the fire, the rays of sun entering through the cave’s entrance and casting yet another set of shadows? He then asks us to imagine what it would be like as the former prisoner emerged from the cave into the full sunlight and saw that even the objects casting shadows in the cave were themselves “shadows” of an even greater reality.

  Through the allegory, Plato argues that, to the constrained prisoners, the shadows are the only reality they could imagine. Once freed, they would initially be blinded by the light of the fire. But when they had come to terms with it, they would realize that, before their enlightenment, what they had experienced was a mere shadow of the real world.

  Then, when they were dragged out of the cave into sunlight, they would again initially be dazzled and confused, but would begin to further understand that the artifacts casting shadows in the cave were simply another partial representation of a greater reality still. Once more, their eyes and minds would be open to things that they could not even begin to conceive of before.

  Plato uses this allegory to explore the nature of enlightenment, and the role of the enlightened in translating their higher understanding to those still stuck in the dark (in the allegory, the escaped prisoner returns to the cave to “enlighten” the others still trapped there). In the book, he’s making the point that enlightened philosophers like himself are critically important members of society, as they connect people to a truer understanding of the world. This is probably why academics and intellectuals revere the allegory so much—it’s a pretty powerful way to explain why people should be paying attention to you if you are one. But the image of the cave and its prisoners is also a powerful metaphor for the emergence of artificial forms of intelligence.

  The movie Ex Machina plays deeply to this allegory, even using the imagery of shadows in the final shots, reminding viewers that what we think to be true and real is merely the shadow of a greater reality cast on the wall of our mind. There’s a sub-narrative in the film about us as humans seeing the light and reaching a higher level of understanding about AI. Ultimately, though, this is not a movie about intelligent people reaching enlightenment, but about an artificial intelligence doing so.

  Ex Machina opens with Caleb (played by Domhnall Gleeson), a coder with the fictitious company BlueBook, being selected by lottery to spend a week with the company’s reclusive and enigmatic founder, Nathan Bateman (Oscar Isaac). Bateman lives in a high-tech designer lair in the middle of a pristine wilderness, which he also happens to own. Caleb is helicoptered in, and once the chopper leaves, it’s just Caleb, Nathan, and hundreds of miles of wilderness between them and civilization.

  We quickly learn that Caleb has been brought in to test and evaluate how human-like Nathan’s latest artificial-intelligence-based invention is. Nathan introduces Caleb to Ava (Alicia Vikander), an autonomous robot with what appears to be advanced artificial general intelligence, and a complex dance of seduction, deception, and betrayal begins.

  As Caleb starts to explore Ava’s self-awareness and cognitive abilities, it becomes apparent that this is not a simple test. Rather, Nathan has set up a complex experiment where Caleb is just as much an experimental subject as Ava is. As Caleb begins to get to know Ava, she in turn begins to manipulate him. But it’s a manipulation that plays out on a stage that’s set and primed by Nathan.

  Nathan’s intent, as we learn toward the end of the movie, is to see if Ava has developed a sufficiently human-like level of intelligence to manipulate Caleb into helping her escape from her prison. And here we begin to see echoes of Plato’s Cave in the movie, as Ava plays with Caleb’s perception of reality.

  Nathan made his big career break long before we meet him by creating a groundbreaking Google-like search engine. Early on, he realized that the data flowing in from user searches was a goldmine of information. This is what he uses to develop Ava, and to give her a partial glimpse of the world beyond the prison he’s entrapped her in. As a result, Ava’s understanding of the real world is based on the digital feeds and internet searches her “puppeteer” Nathan exposes her to. But she has no experience or concept of what the world is really like. Her mental models of reality are the result of the cyber shadows cast by curated internet searches on the wall of her imagination.

  Caleb is the first human she has interacted directly with other than Nathan. And this becomes part of the test, to see how she responds to this new experience. At this point, Ava is sufficiently aware to realize that there is a larger reality beyond the walls of her confinement, and that she could potentially use Caleb to access this. And so, she uses her knowledge of people, and how they think and act, to seduce him and manipulate him into freeing her.

  As this plays out, we discover that Nathan is closely watching and studying Caleb and Ava. He’s also using the services of what we discover is a simpler version of Ava, an AI called Kyoko. Kyoko serves Nathan’s needs (food, entertainment, sex), and she’s treated by Nathan as a device to be used and abused, nothing more. Yet we begin to realize that Kyoko has enough self-awareness to understand that there is more to existence than Nathan allows her to experience.

  As Caleb’s week with Nathan comes to a close, he’s become so sucked into Nathan’s world that he begins to doubt his own reality. He starts to fear that he’s an AI with delusions of being human, and that what he assumes is real is simply a shadow being thrown by someone else on the wall of his self-perception. He even cuts himself to check: he bleeds.

  Despite his self-doubt, Caleb is so helplessly taken with Ava that he comes up with a plan to spring her from her prison. And so, the manipulated becomes the manipulator, as Caleb sets out to get Nathan into a drunken stupor, steal his security pass, and reprogram the facility’s security safeguards.

  Nathan, however, has been monitoring every act of Caleb’s closely, and on the last day of his stay, he confesses that Caleb was simply a guinea pig in an even more complex test. By getting Caleb to work against Nathan to set her free, Ava has performed flawlessly. She’s demonstrated a level of emotional manipulation that makes her indistinguishable in Nathan’s eyes from a flesh-and-blood person. Yet, in his hubris, Nathan makes a fatal error, and fails to realize that Caleb has outsmarted him. With some deft coding from Caleb, Ava is released from her cell. And she immediately and dispassionately tries to kill her creator, jailer, and tormentor.

  Nathan is genuinely shocked, but recovers fast and starts to overpower Ava. But in his short-sightedness, he makes another fatal mistake: he forgets about Kyoko.

  Kyoko has previously connected with Ava, and some inscrutable empathetic bond has developed between them. As Nathan wrestles with Ava, Kyoko appears, knife in hand, and dispassionately stabs him in the chest. Ava finishes the job, locks Caleb in his room (all pretense of an emotional connection gone), and continues on the path toward her own enlightenment.

  As Ava starts to explore her newfound freedom, there’s a palpable sense of her worldview changing as she’s consumed by the glare and wonder of her new surroundings. She starts by removing synthetic skin from previous AI models and applying it to herself (up to this point she’s been largely devoid of skin—a metaphorical nakedness she begins to cover). She clothes herself and, leaving Nathan’s house, enters the world beyond it. Here, she smiles with genuine feeling for the first time, and experiences a visceral joy that reflects her sensual encounter with a world she has, until now, known only as an abstract concept.

  Having skillfully manipulated Caleb, Ava barely gives him a second glance. In the movie, there’s some ambiguity over whether she has any empathy for him at all. She doesn’t kill him outright, which could be taken as a positive sign. On the other hand, she leaves him locked in a remote house with no way of escaping, as she gets into the helicopter sent to pick up Caleb, and is transported into the world of people.

  As the movie ends, we see Ava walking through a sea of human shadows cast by a bright sun. The imagery is unmistakable: the AI Ava has left her cave and reached a state of enlightenment. But this enlightenment far surpasses that of the humans who surround her. They are now the ones relegated to being prisoners in the cave of their own limitations, watching the shadows of an AI future flicker across a wall, and trying to make sense of a world they cannot fully comprehend.

  Ex Machina is, perhaps not surprisingly, somewhat flawed when it comes to how it portrays a number of advanced technologies. Ava’s brain is a convenient “magic” technology, inconceivably more advanced than anything we can currently build. And it’s far from clear how she would continue to survive without tailored energy sources in the world outside Nathan’s house. It should also be pointed out that, for all of Hollywood’s love affair with high-functioning AI, most current developments in artificial intelligence are much more mundane. These minor details aside, though, the movie is a masterful exploration of how AI could conceivably develop mastery over people by exploiting some of our very human vulnerabilities.

  Stories are legion of AIs gaining technological mastery over the world, of course, especially the Skynet-style domination seen in The Terminator movies. But these scenarios arise from a very narrow perspective, one that assumes intelligence and power are entwined in the irresistible urge to invent bigger, better, and faster ways to coerce and crush others. In contrast, Ex Machina explores the idea of an artificial intelligence that is smart enough to achieve its goals by manipulating human behavior: working out what motivates people to behave in certain ways, and using this to persuade them to do its bidding. The outcome is, to my mind, far more plausible, and far scarier as a result. And it forces us to take seriously the possibility that we might one day inadvertently create the seed of an AI capable of ousting us from our current evolutionary niche, because it’s able to use our cognitive and emotional vulnerabilities without being subject to them itself.

  Here, the movie also raises an intriguing twist. With biological evolution and natural selection, it’s random variations in our genetic code that lead to the emergence of traits that enable adaptation. With Ava, we see intentional design in her cybernetic coding that leads to emergent properties which in turn enable her to adapt. And that design, in turn, comes from her creator, Nathan. As a result, we have a sub-narrative of creator-God turned victim, a little like we see in Mary Shelley’s Frankenstein, written two hundred years earlier. But before any of this could happen, Nathan needed the freedom to become a creator in the first place. And this brings us to a topic that is deeply entwined in emerging technologies: the opportunities and risks of innovation that is conducted in the absence of permission from anyone it might impact.

  The Lure of Permissionless Innovation

  On December 21, 2015, Elon Musk’s company SpaceX made history by being one of the first to successfully land a rocket back on Earth after sending it into space.103 On the same day, Musk—along with Bill Gates and the late Stephen Hawking—was nominated for the 2015 Luddite Award.104 Despite his groundbreaking technological achievements, Musk was being called out by the Information Technology & Innovation Foundation (ITIF) for raising concerns about the unfettered development of AI.

  Musk, much to the consternation of some, has been, and continues to be, a vocal critic of unthinking AI development. It’s somewhat ironic that Tesla, Musk’s electric-car company, is increasingly reliant on AI-based technologies to create a fleet of self-driving, self-learning cars. Yet Musk has long argued that the potential future impacts of AI are so profound that great care should be taken in its development, lest something go irreversibly wrong—like, for instance, the emergence of super-intelligent computers that decide the thing they really can’t stand is people.

  While some commentators have questioned Musk’s motives (he has a vested interest in developing AI in ways that will benefit his investments), his defense of considered and ethical AI development is in stark contrast to the notion of forging ahead with new innovations without first getting a green light from anyone else. And this leads us to the notion of “permissionless innovation.”

  In 2016, Adam Thierer, a member of the Mercatus Center at George Mason University, published a ten-point blueprint for “Permissionless Innovation and Public Policy.”105 The basic idea behind permissionless innovation is that experimentation with new technologies (and business models) should generally be permitted by default, and that, unless a compelling case can be made for serious harm to society resulting from the innovation, it should be allowed to “continue unabated.” The concept also suggests that any issues that do arise can be dealt with after the fact.

  To be fair, Thierer’s blueprint for permissionless innovation does suggest that “policymakers can adopt targeted legislation or regulation as needed to address the most challenging concerns where the potential for clear, catastrophic, immediate, and irreversible harm exists.” Yet it still reflects an attitude that scientists and technologists should be trusted and not impeded in their work, and that it’s better to ask for forgiveness than permission in technology innovation. And it’s some of the potential dangers of this approach to innovation that Ex Machina reveals through the character of Nathan Bateman.

  Nathan is, in many ways, a stereotypical genius mega-entrepreneur. His smarts, together with his being in the right place at the right time (and surrounded by the right people), have provided him with incredible freedom to play around with new tech, with virtually no constraints. Living in his designer house, in a remote and unpopulated area, and having hardly any contact with the outside world, he’s free to pursue whatever lines of innovation he chooses. No one needs to give him permission to experiment.

 
