In the movie, Optimus Prime and the other Autobots attempt to help not just themselves, but humanity. Their selfless sacrifices can be seen in their frequent confrontations with the Decepticons. They preserve and save human lives wherever possible, often injuring themselves in the process. Although their war with the Decepticons predates their entanglement with Earth, they view Earth's inhabitants as ends in themselves, worthy of life and autonomy, whereas the Decepticons seek to use Earth and its resources as means to their own ends, humanity be damned.
The Autobots, having (apparently) defeated the Decepticons at the end of the Transformers movie, remain on Earth, forgoing their return to their home-world of Cybertron, to help keep humanity safe from harm. Throughout, the Autobots not only cooperate with and sacrifice for humans, but also with and among themselves. They struggle together, unlike the sniping and contriving Starscream (in the TV show, at least—he appears in the movie, but his bitter rivalry with Megatron is not shown), exhibiting the valor of a team working together rather than that of self-interested individuals unconnected to one another. The valor and virtue of the Autobots are clear, the cowardice and contrivance of the Decepticons equally so.
In the movie as in the TV show, Bumblebee bonds with his human compatriot, in a way that even seems like affection. In both media, Bumblebee is both transportation for, and protector of, his favorite human. Thus, his sacrifice is more particular at times, aimed not only at protecting humanity, but also at protecting a particular human, whom we could even surmise he loves. Here, the virtue of Bumblebee is less abstract, and appears to be like that of the best of people, guided by the supreme form of self-sacrifice, that which is made for the objects of our affections, for our families and friends. Indeed, even among the Autobots, there is a clear friendship. They joke and play like good-natured colleagues, while the Decepticons bicker and vie for power, their rivalry with one another overshadowed only by the desire to conquer Earth and beyond.
The Decepticons, by any ethical measure, are meant to be bad, and the Autobots are models of good. In this chapter, I’m not going to investigate the very real problem of determining how, theoretically, we are to distinguish among theories of morals or ethics. Nor will I decide here which acts, qualities, or modes of behavior are good or evil. Instead, I’ll just accept that the Autobots are highly moral and the Decepticons highly immoral, and consider the question: How can robots be judged moral or immoral? This question is fascinating, not just as a philosophical problem, but also as an issue relevant to the design of modern machines, and their uses in our present world.
Auto-Morality
While it’s clear that the Autobots are meant to represent good and the Decepticons evil, it remains unclear whether this is even a metaphysical possibility. And this question—whether it’s possible to model moral behavior—is important for us right now, because a concerted and well-funded effort is underway to automate warfare, and in so doing, take some of the decision-making out of the hands of humans and give it to machines.
War is dangerous, too dangerous for humans. The machines of modern warfare have developed historically along two lines: 1) giving humans the means to be more deadly, and 2) protecting soldiers on the battlefield. Many of these technologies have resulted in creating distance between soldiers and the enemy. Artillery and armor have enabled a longer and more deadly reach across the battlefield, and more safety from counter-attack. The ability to conduct airstrikes and launch missiles has, in some cases, totally removed soldiers from the battlefield, but until recently, the trigger was always pulled by a human, and at the behest of human decision-makers. Now, militaries are interested in creating some autonomy on the part of our war-machines, allowing some of the tactical decision-making to be made by the machines themselves. Humans suffer from warfare in ways other than the obvious, and human decision-makers suffer frailties due to the stresses of battle that can make their opponents and innocents suffer in “unnecessary” ways as well. To combat these tendencies, the various militaries of the world are attempting to create automated and autonomous war-fighters.
While Predators and other human-operated drones are robots, of sorts, capable of extending the reach of humans across great distances, and enabling precision delivery of deadly force without endangering the human controller, these tools still leave open the very real possibility of human error. Moments of uncertainty, emotionally charged motivations such as vengeance or retribution, and battle-induced stress can all lead human soldiers, even aided by sophisticated machines, to lash out at both innocents and enemies in tragic ways.
In both the Afghanistan and Iraq wars, there have been numerous incidents of civilian “collateral” damage and excessive or illegal treatment of enemy combatants by overly-stressed or otherwise impaired NATO and Coalition soldiers. War is inherently stressful, so there’s little that can be done to reduce its impacts on humans, but by taking the humans out of the battlefield, and replacing them and their decision-making with autonomous artificial agents, unnecessary injuries and deaths could, theoretically, be avoided.
This vision is fraught with significant technical and moral problems. Supposing we could devise machines capable of making decisions on the battlefield, and of preventing collateral damage in the process, is it right to cede both technical and moral responsibility for the deaths of enemies to machines? Would such a cession ever really occur, or would it just create further distance between humans and their wartime opponents? Could robots on the battlefield be held morally responsible for their mistakes, just as human soldiers are? Would such a change be simply another step in the evolution from cross-bow to H-bomb to autonomous fighting robot, or would it represent a paradigm shift in the morality of warfare itself? Can machines be “bad” or “good,” and does this matter? These questions matter quite a bit, both for those who are developing new autonomous fighting machines, and for some philosophers and ethicists.
In their new book, Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen consider some of the practical and philosophical issues of designing robots that can make “ethical” decisions. Their inquiry looks to the very real potential consequences that increased automation has for human relationships with machines. It isn’t just robots that will be used on battlefields, but also artificial agents used in monitoring and sometimes conducting financial transactions, robots used in assisting the disabled and the elderly, and other potentially beneficial but autonomous machines meant to improve our lives.
For as long as robots have been envisioned, countless science-fiction scenarios have considered the practical and ethical implications of conflicts between humans and their creations. Isaac Asimov’s I, Robot contains the first fictional attempt at a robot ethical code: three simple rules. Asimov’s Laws of Robotics state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These rules embody a simple preference for human life over robots, and weigh actions rather than intentions. In some ways, they might be argued to be the basis for the actions of the Autobots in Transformers. In both the movie and the TV show, the Autobots repeatedly sacrifice their own safety in an apparent attempt to save humanity from the Decepticons. But is this enough to say that the Autobots are moral? Suppose they followed all the Laws of Robotics: would this make them praiseworthy? Would violating the Laws make them blameworthy? Let’s consider some of the arguments regarding this fundamental question of robot ethics.
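It is worth pausing to notice how little machinery such a code actually requires. What follows is a minimal sketch of the three Laws rendered as a fixed-priority filter over candidate actions; the Action fields and the choose_action helper are hypothetical, invented for illustration rather than drawn from Asimov or from any real robotics system.

```python
# A minimal sketch: the Three Laws as a fixed priority filter over candidate
# actions. Everything here (the Action fields, the choose_action helper) is a
# hypothetical illustration, not a real robotics API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would this action injure a human?
    allows_human_harm: bool = False  # would it let a human come to harm through inaction?
    ordered_by_human: bool = False   # was this action ordered by a human?
    preserves_self: bool = False     # does this action protect the robot itself?

def choose_action(candidates: list[Action]) -> Optional[Action]:
    # First Law: discard anything that injures a human or lets one come to harm.
    lawful = [a for a in candidates if not a.harms_human and not a.allows_human_harm]
    if not lawful:
        return None  # no First-Law-compliant action is available
    # Second Law: among lawful actions, prefer those ordered by a human.
    obedient = [a for a in lawful if a.ordered_by_human]
    if obedient:
        return obedient[0]
    # Third Law: otherwise, prefer self-preservation.
    self_preserving = [a for a in lawful if a.preserves_self]
    return self_preserving[0] if self_preserving else lawful[0]

chosen = choose_action([
    Action("fire weapon", harms_human=True),
    Action("shield the human", ordered_by_human=True),
    Action("retreat to safety", preserves_self=True),
])
print(chosen.name)  # -> shield the human
```

Whatever such a robot ends up “deciding,” the ordering of the checks was fixed in advance by its programmer, a point that will matter in the next section.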
Good Design, or Designed for Good?
Humans design machines with inherent safeguards—this is just a matter of good design. Good chainsaws have guards that prevent the chain from flying back and decapitating you if the chain breaks. Cars have airbags to help prevent drivers and passengers from flying head first into the dashboard on impact. Elevator doors don’t open in between floors. All of these are matters of good design. Machines ought to be made to avoid harming humans. Why aren’t the Laws of Robotics, or any code that becomes part of a machine’s algorithms, simply analogous to airbags and elevators? I think that, at least where robots are made by humans, “ethical” codes that become part of an intentionally-created robot or other artificially-created intelligent agent are little more than expressions of the intentions of the designer, and do not truly make for “moral” machines.
Suppose we designed guns that would not fire upon a non-enemy or civilian. The gun has some sort of detector in it that can determine whether the target is a friendly force or a civilian, and can thus prevent firing upon illegal targets (illegal under the conventions of warfare, and “immoral” or unethical under the codes of conduct of most militaries). Is this a moral or ethical machine because it has sophisticated detection equipment? The machine “decides” independently of its human user, and so it is in some sense intelligent, overriding the decisions of a human in favor of its more perfect artificial detection and decision-making capacity. In a simpler way, guns with safeties also protect humans, by preventing accidents and forcing more decision-making by their human users. In our “smart-gun” scenario, however, the gun takes decisions away from a human. Is this enough to make the gun moral, even where it may be acting automatically?
Elevators make decisions too. They’re programmed to decide in what order they “ought” to go to floors, not based merely on a first-come-first-served button-press order, but based on complex algorithms meant to keep people waiting a minimal period of time while fulfilling the linear efficiency necessitated by vertically stacked floors and a limited number of available elevators. Their programming is made to assist humans, keep them happy, and move them safely. Both the smart gun and the elevator appear to be making decisions. Their programming makes them capable of action apart from a human user.
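To make that claim concrete, here is a minimal sketch of one simple dispatch policy an elevator controller might use, a “nearest-car” heuristic. The cost function and its penalty value are assumptions invented for illustration; real controllers use far more elaborate (and proprietary) algorithms.

```python
# A minimal sketch of a "nearest-car" elevator dispatch heuristic: assign each
# hall call to whichever car can plausibly reach that floor soonest, given the
# car's current floor and direction. Purely illustrative, not a real system.
def dispatch(call_floor: int, going_up: bool, cars: list[dict]) -> int:
    def cost(car: dict) -> int:
        distance = abs(car["floor"] - call_floor)
        moving_toward = (car["direction"] == "up") == (car["floor"] < call_floor)
        same_direction = (car["direction"] == "up") == going_up
        if car["direction"] == "idle":
            return distance          # idle car: just the travel distance
        if moving_toward and same_direction:
            return distance          # already heading the right way
        return distance + 10         # penalty: car must finish its run and turn around

    # Return the index of the car with the lowest estimated cost.
    return min(range(len(cars)), key=lambda i: cost(cars[i]))

cars = [
    {"floor": 1, "direction": "idle"},
    {"floor": 7, "direction": "down"},
    {"floor": 4, "direction": "up"},
]
print(dispatch(call_floor=5, going_up=True, cars=cars))  # -> 2 (the car at floor 4, moving up)
```

Everything the controller “decides” is already contained in that cost function, which a human wrote.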
But all of that decision-making is according to some algorithm created by a human programmer. In what sense, then, are elevators and fictional smart-guns truly autonomous? Is it enough that a machine is given a set of instructions and tools for detecting its environment, and then acts without further input, albeit according to those rules? I think we’d say that such behavior, while “intelligent” to some degree, escapes both moral praiseworthiness and blameworthiness.
If I program a washing machine not to run if it detects a kitten in it (by sensing a heartbeat, or hearing a “meow” coming from within it), and that washing machine then fails to drown a kitten, I haven’t created a moral washing machine. I have projected my morals onto a machine and designed it to act according to those morals, even where it now has the physical autonomy to turn itself on and off. The values it acts according to are designed into it. Similarly, in A Clockwork Orange, when Alex is unable to commit crimes because his brain has been programmed to make him sick if he tries, he is not acting morally by failing to commit the crimes he once delighted in. He is simply blocked by programming from carrying out a certain range of actions.
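For concreteness, the entire “refusal” of such a washing machine might amount to nothing more than the following sketch; the sensor functions are hypothetical placeholders standing in for whatever hardware the designer chose, not real APIs.

```python
# A minimal sketch of the kitten-protecting washing machine described above.
# detect_heartbeat() and detect_meow() are hypothetical placeholders for sensors.
def detect_heartbeat(drum: dict) -> bool:
    # Placeholder for a hypothetical acoustic/vibration sensor reading.
    return drum.get("heartbeat", False)

def detect_meow(drum: dict) -> bool:
    # Placeholder for a hypothetical microphone check.
    return drum.get("meow", False)

def start_wash(drum: dict) -> bool:
    # The machine's "refusal" is just this conditional, written by its designer.
    if detect_heartbeat(drum) or detect_meow(drum):
        return False  # do not run: something alive appears to be inside
    return True       # safe to run the cycle

print(start_wash({"meow": True}))  # -> False
print(start_wash({}))              # -> True
```

The “choice” not to drown the kitten lives entirely in that one conditional, and the machine had no say in the values it enforces.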
These two examples point to the critical missing element that makes the Laws of Robotics just as suspect as the smart-gun or airbags. In none of these cases is there any real “choice” on the part of the machine. Of course, in I, Robot, the robot does make a choice, and in other Asimov tales, not to mention the Terminator movie franchise, robots become truly autonomous, overriding their programming and making dubious decisions that cause harm to humans. This is where things become interesting, and it leads us into the realm of the Transformers.
In both the TV series and the movie, it is apparent that the Transformers are meant to be not merely automatons, carrying out pre-set instructions, but autonomous, thinking and feeling beings. Their actions are supposed to be self-guided, the results of choices made by someone other than a programmer. Thus, we are meant to view their actions as morally praiseworthy or blameworthy, rather than as the mere mechanical results of some predetermined scheme of behavior. Of course, this raises a number of tricky problems that, should we choose to dwell on them, throw into doubt not only the moral value of robot decisions, but our own as well. Let’s take as the starting point for this discussion the example of the kitten-protecting washing machine, and reconsider whether it might be acting autonomously after all.
The Autonomy Straw-Man
It is perhaps too quick a move to argue that a kitten-sensing washing machine does not truly make a choice. That argument rests upon our self-assured sense that when we refuse to kill a kitten, we too are exercising a choice, just as those who choose to kill kittens also make choices.
It’s true that before the kitten is dead, we seem to be able to choose to kill it or to save it, but this seeming is no more assuredly true than that a kitten-sensing washing machine, programmed not to kill kittens, chooses to save the kitten. In other words, we have a pragmatic sense that our choices are indeed ours, and that our actions reflect free choices, but we cannot substantiate this in any convincing way. We can only point out that at some time we did the thing we did, and that it was preceded by some mental state that appeared to us to be decision-making.
This is essentially the problem outlined by John Searle’s “Chinese Room” criticism of the Turing Test. The Turing Test was proposed by Alan Turing as a means of testing whether an artificial agent is truly intelligent. In his 1950 article “Computing Machinery and Intelligence,” Turing proposed that the way to test whether a machine really was intelligent would be to carry on a conversation with the machine (for instance, by sending text messages). If the machine’s contributions to the conversation were indistinguishable from those of an intelligent human, in other words if no one could tell that the machine was not a human, then the machine would have to be judged intelligent. The conversation would have to be open to any subject, and the machine taking part in the test (or game, as Turing preferred to call it) would be programmed only to convince the judge that it is human.
John Searle challenges the use of the Turing Test by proposing what has come to be called either the Chinese Room or the Chinese Box. In a 1980 article, “Minds, Brains, and Programs,” Searle argues that the Turing Test fails to verify understanding on the part of the computer. Even if the machine gives appropriate answers, this does not show that the machine really knows what’s going on. Searle asks us to consider whether a person in a room, given a Chinese dictionary and the grammar and syntax of the Chinese language, and asked to decipher and respond to incoming Chinese messages with Chinese responses, truly understands Chinese. According to Searle, while to an outside observer it would appear that the man in the Chinese Box understands Chinese, he has merely learned the rules for manipulating symbols that are meaningless to him.
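A crude way to picture Searle’s point is a rulebook of canned replies: perfectly appropriate output, with no understanding anywhere in the system. The entries below are invented (and written in English, for readability); they merely illustrate symbol manipulation by rote.

```python
# A toy "Chinese Room": canned responses selected by pattern matching.
# The rulebook is invented for illustration; the point is that appropriate
# output requires no grasp of what the symbols mean.
RULEBOOK = {
    "how are you?": "I am well, thank you.",
    "what is your favorite color?": "I have always been partial to blue.",
    "do you understand me?": "Of course I understand you.",
}

def room_reply(message: str) -> str:
    # The "person in the room" just matches symbols against the rulebook.
    return RULEBOOK.get(message.strip().lower(), "Could you rephrase that?")

print(room_reply("Do you understand me?"))  # -> Of course I understand you.
```

The room’s replies can pass casual inspection, yet nothing in it knows what the questions mean.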
While Searle believes that this “Chinese Box” thought experiment shows that robots or other allegedly intelligent machines are not truly intelligent, it poses another problem for us in the context of our discussion of Transformer ethics: Who’s to say that we or other humans exhibit true “choice” in ethical dilemmas? We have no external evidence that any other human, much less a robot, makes a truly autonomous choice when confronted with an ethical dilemma. The internal evidence of our own choices is similarly suspect. Could we be simply providing post hoc justification for our actions? Might our own decision-making be guided by algorithms that are beyond our own control, but designed to appear to us to be the result of what we have come to call “free will”?
It’s a legitimate conclusion of the Chinese Box thought experiment—though not a conclusion drawn by Searle—to doubt that any other mind, or even our own mind, is doing anything more than manipulating symbols. The notion of “understanding,” like the notion of “ethical choice,” may well be an artifact of programming. Philosophers like Daniel Dennett argue that there’s no good reason to assume that other, human “minds” that appear to be understanding are not simply “zombies,” who act that way without true “understanding” as Searle conceives it. Scientists such as Marvin Minsky argue that the Chinese Box is simply another mind, a virtual one, which processes information just as other minds do. Other, similar criticisms point out the overall problem with philosophical critiques of artificial intelligence, namely that there will always be an empirical gap: the age-old philosophical problem of “other minds,” which must always remain black boxes of a sort, not accessible to our direct observation.
If we’re concerned with the question of whether, or to what degree, robots might be truly “ethical,” then we are stuck with the problem of the Chinese Box. Given that we can never delve into the realm of intentions and beliefs held by other minds, whether human or robotic, we are left with only external behaviors and pragmatic considerations. Does the system we are exploring appear to be acting autonomously? Does it believe it has a choice, and does it express this belief coherently? Can it engage in ethical argument, justify its positions, and even come to change its “mind” in the face of good, contrary arguments? These are the measures we use when judging the moral capacities of other humans. Why should they not suffice when it comes to our robots, or even alien robots from the planet Cybertron?