Transformers and Philosophy


by John Shook and Liz Swan


  The Promise of Moral Machines

  The Transformers act in ways that are clearly ethically measurable. The Autobots do good; the Decepticons do bad. Even while they entertain, they have offered kids of my generation, and now an entirely new generation, models of virtue worthy of mimicry. Moral education is, after all, undertaken only with the notion that it is our external acts that serve as the measure of our characters, and even while we often try to infer intentions from those acts, the intentions of others will always be a mystery. But we can take comfort in the fact that this is a problem not just for robots, but for other minds generally. If you appear to me to be behaving morally, and I have no contradictory evidence about your internal mental states, then what practical reason might I have to doubt your morality or ethics?

  Our kitten-preserving washing machine and our safety gun that won’t kill civilians are well designed, and they enable us to do good, or at least to do better. So does it ultimately matter whether the good is truly motivated by the machines or by a wise designer? The kitten and the civilian don’t care, as long as their lives are preserved. The tough part is still devising the algorithms that will conduct our tasks in the real world with the care and precision of the Transformers and with the appearance of virtue cast by the Autobots. In the end, whether we build our robots with more than the mere appearance of virtue is an interesting philosophical question, but no more puzzling than the question of whether other humans have the same virtues, rather than merely seeming to have them.

  As long as our machines exhibit behaviors that work, that succeed in furthering our values, and that do so predictably, the problem of the Chinese Room as applied to robot ethics will remain a question for philosophy students to ponder. The puzzle, it must be acknowledged, applies to other minds in general; meanwhile, engineers continue to design more flexible machines, which will become more and more autonomous. We should take heed of the fictional examples ranging from I, Robot to Transformers, and ensure that, even if we cannot guarantee that we engineer good into our machines, we engineer them as well as Optimus Prime and Bumblebee.

  16

  Freedom Is the Right of All Sentient Beings

  GEOFFREY ALLAN PLAUCHÉ

  The Transformers television series and toys were, as far as I can remember, my first encounter with science fiction. In Transformers, the human characters are confronted with the reality of an alien robotic civilization. The 1986 movie, which serves as a bridge between Seasons Two and Three of the series, was set in 2005, and the 2007 live-action movie was set in the present day.

  We viewers are ourselves confronted with the possibility of encountering an alien civilization based on artificial intelligences. Consider the vast interstellar distances that members of an alien civilization would have to travel to meet us. Consider also the relative slowness and the rigors of space travel. Any alien civilization we meet is likely to be very advanced scientifically and technologically. Unless they find a way to bend, break, or circumvent the limit that the speed of light places on space travel, we might be more likely to encounter artificial intelligences created by another biological species or beings who have traded biological life for artificial life.

  What might such beings be like? We take it for granted that human beings are capable of being moral and that they should be. But will an alien robotic civilization develop or have any need for morality? If so, what kind of moral code might they have? Is it possible for our two or more species to live beside one another in peace, to co-operate and trade with one another rather than to make war and subjugate or destroy one another? If such alien artificial intelligences were superior to us in every way, what benefit could they possibly derive from associating with us? Why not ignore us, or enslave us, or destroy us instead?

  Provided the alien artificial intelligences in question are sufficiently intelligent and possess volition, that is, a relatively human degree of free will not bound by rigid programming as non-human animals are by their instincts, then they will possess a moral code or codes. For all we know, they may even be more consistent in sticking to them than we humans tend to be. I am even hopeful that they can, though not necessarily will, have a moral code that is compatible with living in peaceful and mutually beneficial coexistence with us. Crucial to such an interspecies relationship is the mutual recognition that we each possess equal and absolute individual rights to life, liberty, and property. In the 2007 live-action Transformers film, Optimus Prime, leader of the Autobots, says, “Freedom is the right of all sentient beings.” What reason could he have for believing this?

  I think that the political philosophy of Aristotelian liberalism is best able to answer this and the other questions I have posed. Aristotelian liberalism synthesizes what are arguably the best aspects of Aristotle’s philosophy with what are arguably the best aspects of the political philosophy of liberalism, particularly of its classical liberal roots and its modern libertarian incarnations. From Aristotle we draw on an ethical theory focused on the natural end that all rational beings pursue: a life of well-being or flourishing. Integral to a flourishing life are certain goods and virtues that we must pursue and possess which are determined by the particular kind of being we are. From liberalism we draw on an understanding of natural rights, including their importance both to our own individual flourishing and to bringing about and maintaining a free and flourishing society.

  But, even if these ideas are true for human beings, what would make us think that they are true of all rational beings as well? Even of intelligent alien robots like the Transformers?

  Flourishing, Virtue, and Artificial Life

  Aristotle began his great treatise on ethics by observing that the good is “that at which all things aim.”1 This is an irrefutable conceptual truth. Anyone who attempts to deny it necessarily accepts its truth in so doing, for he who seeks to deny the claim that ‘the good is that at which all things aim’ is himself aiming at an end he necessarily perceives as worth attaining (an apparent good), thereby proving the claim to be true.

  The important question is, what is the good? Or rather, which things are good and which are bad? When we ask whether something is good or bad, we are compelled to ask also: good (bad) for whom? and for what? When we say that food is good, what do we mean? Do we mean that food is good, period? No. What we mean is that food is good for us. And it is good for us because we need it in order to survive and because, if it is tasty, we usually enjoy eating it. Not all food is equally good for us, however. In fact, what counts as food for us depends on the kind of beings we are. Some animals can digest things that we cannot. What suffices as adequate nutrition for a plant will not suffice for human beings. We require particular amounts of certain kinds of nutrients and minerals not only in order to survive but, more importantly, to flourish. Moreover, while a certain amount of something, such as carbohydrates or certain fats, may be good for us, too much can be unhealthy. What’s more, just as there are differences in food requirements between species, so too are there differences within species between males and females, different body types, different lifestyles, and so forth. Thus, goodness or badness depends both on the thing in question and on universal and particular aspects of individual moral agents; it is not something that just exists independently.

  Transformers also need some source of energy to fuel their bodies. The source of energy they depend upon is called Energon, in its raw form a type of crystal ore. In order to use it for fuel or other purposes the Transformers need to process it. This requires creativity and labor. Like our food, the “food” of the Transformers is scarce and requires effort to obtain and use. We need ethical principles and legal rules to guide and regulate our actions toward each other with respect to food, and all other scarce things we need or want, and so do the Transformers. And so will any rational artificial life-forms.

  Morality is a code of values and principles that serves to guide our choices and actions both when we’re alone and with respect to other people. Morality pertains only to matters of choice, for we can only rightly be praised or blamed for that which is in our power to control. All of our choices and actions are taken to pursue some end or other. The very fact of life necessitates the employment of scarce means, such as time and resources, to achieve certain ends. All life is conditional, even artificial life. We must act, and act wisely, in order to maintain and further our lives. So life both makes possible the existence of values and makes it necessary that we pursue them. It is our ultimate and natural end, that for the sake of which everything else is done.

  Any code of morality needs a standard for judging what actually counts as a value (or a good) and what the proper means are for pursuing it. There is no better standard of value than that for the sake of which we make every one of our choices and actions. Our natural end, life, is our standard of value. This will be as true for artificial life-forms as it is for biological life. But while there are universal characteristics of life shared by all forms of life, there are also important particular differences between biological species, between biological and artificial life, and between different types of artificial life.

  I have suggested that what I mean by life as one’s natural end and ultimate standard of value is not mere survival but a life of flourishing. It’s not enough, surely, merely to survive. On the face of it, mere survival seems an awfully thin reed on which to hang the whole of morality. Everything and everyone would be reduced to being a mere means toward the end of survival. Even if there were a set of rules based on this standard which one could follow, and which would both be conducive to long-term survival and correlate well with a fulfilling, moral life, the explanation would still seem a shallow and unconvincing one. Do we love our friends and family merely because doing so is conducive to our long-term survival? Moreover, what robust reason, or indeed any reason at all, could a mere survival standard offer for giving one’s own life to save the life of a loved one, for instance? Even those who choose death, such as a hero or a suicide, must necessarily see a life ending in the time and manner of their choosing as preferable to a life ending in a different time and manner. In other words, a mere survival standard is inadequate for explaining the choice to die; even those who choose death must necessarily hold a flourishing life as their standard of value.

  What is meant by a life of flourishing or well-being is health and development to maturity. As Philippa Foot points out, we determine what counts as flourishing the same way for humans as we do for plants and other animals. “The structure of the derivation is the same whether we derive an evaluation of the roots of a particular tree or the action of a particular human being. The meaning of the words ‘good’ and ‘bad’ is not different when used for features of plants on the one hand and humans, on the other, but is rather the same applied, in judgments of natural goodness and defect, in the case of all living things.”2

  When it comes to more complex life, particularly rational life, determining what is good and bad becomes more complicated and fraught with controversy, but it is nevertheless the same procedure in essence. The same can also be said for intelligent artificial life. To clarify further what is meant by a life of flourishing: on a mere survival standard, goods and virtues can only serve as means to the end of survival, whereas on the flourishing standard the various goods and virtues are conceived as parts of a life of flourishing rather than as something external to it. The difference is like that between buying a guitar, which is external to playing Stan Bush’s “The Touch,” and playing particular chords in a specific arrangement, which is part of what it means to play the song.

  Bearing all of the foregoing in mind, let us turn to exploring some of the goods and virtues of which human flourishing consists, and to speculating about the nature of flourishing for artificial life-forms. We can start with the easier stuff and revisit the point that life is conditional. Physical health is one good, necessary not just to make continued survival more likely but also for well-being—health is a natural state, and we derive enjoyment from it as well as from the things it enables us to do. We’ve discussed the fact that just as humans need fuel for their bodies, so too will artificial life-forms. If organic life-forms do not eat enough food, their functioning will become impaired and eventually they will die. The same will probably be true of artificial life. However, most if not all organic matter decays quickly; the same will probably not be true of whatever comprises an artificial life-form’s body. Thus it may be possible to revive an artificial life-form after years, decades, or even longer of its being without power; but even non-organic parts decay over time, and so death by starvation still seems possible for artificial life. The ability to produce or acquire energy, as well as the tools useful for this purpose, will therefore be highly valued, not only for this aspect of good health but for all the other things for which energy can be used.

  Organic life is generally very fragile, highly vulnerable to injury and disease. While artificial life is likely to be far more durable, it will not be immune to such threats and may possess some weaknesses that organic life does not. Consider one such vulnerability somewhat amusingly dramatized in the Transformers cartoon:

  STARSCREAM: It looks like some kind of . . . rust!

  MEGATRON: Impossible! We are rust-proof!

  STARSCREAM: Perhaps you’re made of shoddy materials, Megatron!

  MEGATRON: That’s ABSURD!

  Even artificial life can suffer from the equivalent of ill-health. Even artificial life can be damaged, sometimes beyond repair, or even destroyed. For these reasons weapons, armor, and shields of some kind as well as tools for the diagnosis and repair of damage, and the skills necessary for employing these things, will be of value.

  As rational beings we observe things in the world and develop abstract ideas, or concepts, that refer to them. These concepts and their interrelations form the basis of our knowledge about the world. Knowledge does not come automatically to us. We must actively seek it out by observing the facts of reality, abstracting from them and integrating them into concepts, theories and stories about the world. We are neither infallible nor omniscient, and so we can make mistakes or even willfully evade the truth.

  Accurate knowledge and good judgment are vital to improving the chances of our continued survival but more importantly also to improving our quality of life. It’s the continual accumulation of knowledge that has enabled our species to develop from a primitive rock-and-spear-wielding, cave-dwelling existence to one that is today marked by an abundance of food, advanced medical care, plentiful clothing and comfortable shelter, instantaneous communication and swift transportation around the globe, and explorations into outer space. In light of this, intellectual ability and intellectual pursuits are valuable, although not everyone need specialize in scientific or other academic disciplines. Any alien artificial life-forms we happen to encounter in the near future will share these limitations and needs even though they will likely be more advanced scientifically and technologically than we are, and so they will probably value these things highly as well.

  We have so far discussed a number of final goods or ends that comprise a life of flourishing—health, wealth, reason, intellectual ability and pursuits—as well as some of the intermediate goods that contribute to them. We can now identify some of the virtues that tend to produce these goods and that, being valuable in themselves, are also a part of flourishing. We can follow Aristotle in distinguishing between intellectual virtues on the one hand and moral virtues (traits of character) on the other. One reason to do so is that it helps to avoid intolerant moralizing about intellectual error. It’s not necessarily a moral failing to make a mistake, hold incorrect ideas, or have poor math skills, for example. Among the intellectual virtues, and here I am not sticking precisely to Aristotle’s list, are technical knowledge and skill, scientific knowledge, philosophical wisdom and knowledge, and practical wisdom (or prudence). Practical wisdom might be considered the master virtue, for it is the integrator of all the goods and virtues into a complete life and it guides the proper application of the other virtues. Aristotelian prudence is not pure, calculating prudence, however; while the moral virtues without practical wisdom are blind, practical wisdom without the moral virtues is empty.

  And what moral virtues are central to a flourishing life? Well, assuming that our alien visitors are individual, autonomous beings like us, then the virtue of independence is one they might and should cherish. While many critics of liberalism fear the development of an excessive individualism and of a “me! me! me!” attitude that will lead to the breakdown of social cohesion and cultural norms, the human propensity to fall in with the herd is much more worrisome and prevalent. The virtue of independence recognizes that we are separate persons with our own minds that we must use to make decisions. It means that if we are to live a flourishing life that is our own, we must take the responsibility to think and work for ourselves rather than abdicate this responsibility to others. The virtue of integrity touches on this responsibility too. It means endeavoring to have a consistent set of principles and holding to them whatever temptations one might face, be they other people, unfortunate situations or one’s baser inclinations. In a social context, it also means keeping one’s promises; other people make plans in the expectation that you will do so.

  Another important virtue is honesty. While honesty does in part mean that it is generally right to tell the truth and wrong to lie, it has a more fundamental meaning that is relevant here. Philosopher-novelist Ayn Rand argues that, given the way we as rational beings acquire knowledge, and given our fallibility and facility for engaging in willful evasion, honesty means “one must never attempt to fake reality in any manner” (to himself or to others).3 And this involves not attempting to acquire values via fraud and not shying away from the facts, including one’s proper hierarchy of values. If our alien artificial beings are capable of some analog to human emotion, or if they have an equivalent to a relatively opaque subconscious as do we, where automatic mental processes take place, then they too may be capable of evasion and self-deception. Even if they are not, they may still be capable of acting without principles and of deceiving others. So we have reason to expect that the virtue of honesty can and should be part of their flourishing as well.

 
