
Transformers and Philosophy


by John Shook and Liz Swan


  Others, though, such as John Searle, think that only biological systems could ever develop minds. For those in Searle’s camp, it’s the organic nature of biological beings (having evolved from and in interaction with their environments), and the very chemistry of the nerve cells themselves, that is necessary for the emergence of mind. So, while on Searle’s own view Transformers couldn’t have minds, on other views similar to his, it might be possible to say that as long as they were in some sense biological beings, they could have minds. The mere fact that they are not organic, not having evolved on our planet, wouldn’t automatically preclude their being biological entities, one might argue. Some physicalists, though, disagree with both these claims, and maintain that neither is there anything magical about biology, nor is function alone sufficient to identify mind. They think that Searle’s biological account is too specific (and thus chauvinistic toward other kinds of beings), while functionalism isn’t specific enough.

  This last group of physicalists (known as reductive physicalists) think that our scientific research into the way that the brain and nervous system work can eventually provide a complete explanation for our mental life, and that nothing else is needed, because physical explanations can tell the whole story. They maintain that biology is reducible to physics and chemistry, and that functionalism only points to something it could label as mind, but does not explain what makes it function the way that it does. These philosophers lean heavily on the impressive developments of the last decade or two in neuroscience, which have shown how certain processes in the brain allow for perception, memory, association, emotion, and other mental activities.

  Perhaps unsurprisingly, though, some philosophers want to have their cake and eat it, too. They think both that minds depend upon brains, and that mind is irreducible—that it is something purely subjective, something decidedly different from the physical. While there might be neural correlates of mental activities, these philosophers say, those correlates are not to be identified with mind. Nevertheless, these philosophers maintain, there is no second substance, no soul or mental ‘stuff’ in addition to the physical stuff of the universe. So they remain physicalists.

  What the physicalist philosophers of mind all have in common is their faith in the value of physical explanations and in the achievements of science. Science has proven to be tremendously successful at unraveling apparent mysteries and discovering straightforward (although not always simple) explanations for the things being investigated. Research into the functioning of the brain has provided explanations for innumerable questions about how perceptual processing occurs, about how different kinds of perceptual processing are integrated, and even about how consciousness—or mind—can arise.

  If a physicalist view is correct (with the possible exception of the non-reductive physicalists, who will have to determine mind on other grounds), then the question concerning whether Transformers are mental beings is actually just a question concerning the specifics about what constitutes their information systems, how the different mechanisms for perception, self-regulation, and self-movement are organized, how they interact with each other, and how they are tracked by further mechanisms of the same nature. If a ‘brain’ that is sufficiently functionally similar to ours exists in Transformers, then there’s no reason not to credit it with having a mind.

  And so, if an entity made out of metal, silicon chips, or something completely unknown to us is hooked up in the right way to whatever perceptual, visceral, and posture and movement systems it has, and if that entity is capable of processing all this information, as well as the information that it is doing this processing, then we might be willing to accept that this entity had a mind. But would it be a mind like ours? Well, that would depend on what you think ‘like ours’ means.

  Many animals on this planet have minds, at least on the views of lots of people, but they differ from our human minds, given, for instance, that they don’t encode information into language and use that extensively in their thinking, as we do. But then many people (look at babies!) don’t do that, either, and yet we think of their minds as being like ours. So, one might say that Transformers, given the conditions described above, do indeed have minds, both as similar to and as different from our minds as our minds are to each other’s, and to those of other animals with which we are familiar.

  So if minds are just some kind of physical information-processing system, then why do Transformers have mental lives while our computers (so far) do not? The answer that many physicalists would give is that the difference is just a matter of complexity and function. Our current computers are nowhere near as complex and capable as the human brain, with its average of 10¹¹ (one hundred billion) neurons, each of which in turn possesses an average of seven thousand synaptic connections to other neurons. This means that by some estimates, the average human brain at its peak (our brains begin to degenerate after we are about three years old) has about 10¹⁵ (one quadrillion) synapses. Add to that the over four hundred chemical transmitters, peptides, hormones, and the large variety of other modulating chemicals that can radically influence the environment in synaptic gaps, and you can see that computers are not even on the same playing field as brains. If we eventually manage to build computers with enough complexity, and with the right kind of structure, they may well turn out to have minds too—according to some of the physicalist views.
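  Those figures fit together as a rough back-of-the-envelope calculation (our arithmetic here, using only the numbers just quoted):

$$10^{11}\ \text{neurons} \times 7 \times 10^{3}\ \text{synapses per neuron} = 7 \times 10^{14} \approx 10^{15}\ \text{synapses}$$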

  Mind—More than Meets the Eye?

  If some version of physicalism is right, then there may come a day when our scientific understanding has advanced to the point that we have identified which structures in the human brain are responsible for our mental experiences, and it may be that from this knowledge we can identify which attributes a computer or robot would have to possess in order to have a mind. In this case, definitively identifying whether the Transformers have minds may be as simple as a scan by a hand-held gadget. Until that time, however, or if the most straightforward reductivist version of physicalism is wrong about what constitutes the mind, then we must look for other ways to decide which entities have minds and which don’t. So, if you can’t know that an entity has a mind by looking at the way it operates (and we can’t know that even about human beings, with our current state of knowledge about the brain), how can you know whether it does?

  It seems that the best candidate for an approach to making this determination may be what philosophers call an abductive inference, or an inference to the best explanation. While this kind of inference does not provide us with complete confidence that we have reached the truth, it can provide pretty good reason for believing that something is true, and so could justify our belief that the Transformers have minds. This kind of reasoning generally starts from a set of observed facts about the world, and then tries to identify the best theory or explanation of why they occurred. It is a way of reasoning from effects back to causes. Frequently, many different explanations are possible, and so we try to pick the best of them, using several different criteria. We all do this constantly throughout the day, even if we are not consciously aware of it. If you’re walking along and encounter a big hole in the ground, you will likely start thinking up possible explanations for it. Maybe this is a construction site, where the hole was made in preparation for a building foundation. Or maybe the hole is a crater that resulted from a Transformer who crash-landed following a journey from Cybertron. Or maybe the hole happened because in exactly this spot, gravity completely ceased to function and the dirt in this specific area all floated into space. All three of these scenarios can explain the observed hole in the ground, but some of them seem less probable than others. We tend to think that the building construction hypothesis is a simpler, less outlandish explanation that fits better with our other observations in the past, and so is more likely to be true.

  Lacking definitive evidence that others have minds, we presumably do something similar to the process illustrated in the example above. If we encounter a robot performing fairly basic tasks, we may formulate several theories about how it does them. We may theorize that the robot is being remote controlled, or that it has been programmed to autonomously perform certain functions, or that it is a robot with a mind. What observations could we make that would lead us to conclude that it actually has a mind? We will have to base our conclusion on the external evidence that we can observe, as well as our past experiences with similar things. Given that all of our past experiences with mechanical devices have led us to believe that they do not have minds, and being somewhat familiar with the current state of technology available on planet Earth, we would sensibly conclude that the robot was either being controlled by someone, or had been programmed by someone to perform specific actions. But what if it engaged in behaviors that were very different from the other kinds of machines that we had encountered? What if the robot started up a casual conversation about the weather, and started to complain about how stiff its robotic joints were? This could just be a programmed speech, or even a recording. But what if we started to ask it questions on a wide variety of topics? As the robot’s behaviors become more and more complex, adaptive, and versatile, it seems that we might become less and less convinced that it was just a preprogrammed machine that was only following the instructions it had been given.

  What if there were some reason to think that the robot had originated on a different planet, in which case there might have been a considerably more advanced level of technology available for its construction? This is one of the important differences in encountering a Transformer versus a human-made system, because, if there is good reason to think that the robot is not limited by the present state of our technology, then the possibility that it could be sufficiently complex to have a mind is no longer effectively ruled out. Observing the robot doing very complex things that are sufficiently different from the machines we routinely encounter (such as smoothly transforming from an automobile into a roughly human shape) will further support this possibility. After an extended period of observing the robot, and seeing it engage in things like learning, attempting to avoid hazards, pursuing preferences and goals, making plans to defend against other robots, mourning the loss of its fallen comrades, and making decisions based on what seem to be moral values, we could very easily come to the conclusion that the behaviors it exhibited were too complex to originate from anything without a mind.

  It would be nice if the determination of whether something had a mind could be made simply by comparing its actions and characteristics to a check-list of conditions that are both necessary and sufficient for mind. It seems, though, that there is a very wide range of characteristics that beings with minds can have, but don’t necessarily have. Our understanding of mind is a cluster concept, and some but not all of the characteristics included in that concept belong to anything that we would be willing to call a mind. Persons with certain neurological disorders, for example, can navigate and catch a ball, but they insist that they cannot see; others can function normally in most contexts, but cannot process language. They know what they are doing and show purposive action and even normal intelligence, but they cannot communicate via language. Other kinds of brain damage result in some people’s having a complete inability to remember things for more than a minute, while others smell colors or taste sounds. Some people hear music when none is present, while still others can hear individual notes, but cannot put together a melody. In none of these cases do we say that the people in question do not have minds; rather, we say that they have deficiencies or gifts. Identifying the key marks of the mental is thus surprisingly difficult.

  While not so many people these days are likely to insist that having a mind is somehow tied to human form, the other proposed criteria seem to run into problems as well. We saw already that the Turing Test and its emphasis on language use does not seem to be as good a criterion as first thought. And as computer programming becomes more advanced, many other suggested test criteria will likely also be met by unthinking robots. This leaves us in the unfortunate position of having to admit that specifying exactly how we decide whether other things have minds or not may not be possible.

  Perhaps the best we can do is give a list of things that beings with minds do, and if we encounter something that does enough of them, then we should be willing to conclude that it does have a mind, although perhaps of a fairly different kind than ours. But having enough of these properties would not guarantee that the thing in question has a mind and can actually think. Advanced robots could be programmed to mimic human facial expressions, behaviors, and actions, and could seem to be thinking, feeling, and acting like humans do, without their ever having minds of their own. Since (at present) we can only observe other people and things “from the outside”, there does not seem to be any way to be sure that even our friends and family have minds of their own!

  It does seem, though, that even if we could never be completely sure that a Transformer had a mind, we could have as much reason to think that the Transformer had a mind as to believe that any human being had one.

  _________

  1 John R. Searle, Minds, Brains, and Science (Harvard University Press, 1984), pp. 28–41. Searle had earlier stated this argument in “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3:3 (1980), pp. 417–457.

  7

  Will We Meet Optimus Prime in Heaven?

  M.R. EYESTONE

  Is there a chance that, when we die, we’ll meet Optimus Prime in heaven? What about Megatron: will he go to hell for all the bad things he’s done? Even though Megatron and Optimus Prime have never actually existed, these questions about them raise further thought-provoking questions, such as:

  • Are Transformers and other apparently intelligent machines alive?

  • Do they have souls?

  • Can they die, or do they just stop working at some point? (Or are those just two ways of saying the same thing?)

  • Can we rightly say that they’re virtuous or morally good?

  No doubt we might say “No” to some or all of these questions. “Transformers,” we might say, “are just machines, however intelligent and lifelike they might seem. They’re certainly not living persons who have souls, like human beings.” While I’m somewhat sympathetic to this line of thought, I’m also curious enough to wonder if it might not be oversimplifying things a bit. In particular, it seems to run together a few things that could perhaps be distinguished, namely:

  • Being intelligent

  • Being alive

  • Being a person

  • Having a soul

  These different things are easy to mix up, since they so often go together in our everyday experience of human beings. They don’t always go together: plants and amoebas are alive, but they don’t seem to be persons, and dogs and parrots are fairly intelligent, but it’s not easy to say whether or not they have souls. So ask yourself this: when a being possesses all of these features, does that imply that it can only be human? Or could there possibly be nonhuman beings who are virtuous, intelligent, living persons who have souls?

  This is a hard question to answer: the universe is a big place, and the realm of possibility is perhaps bigger still. Or maybe this question isn’t as hard as I think. After all, it seems that angels and extraterrestrial spacemen (if they exist) would be intelligent, living persons who’d be capable of virtue, even though they’re not human. So we’re probably fairly comfortable with the possibility of such beings. The question is whether Transformers and other apparently intelligent machines should be counted among these possible beings—whether my childhood hero Optimus Prime can rightly be called a virtuous, intelligent, living person who has a soul. Is he just a complex but ultimately lifeless, soulless hunk of metal? Or is there more to him than meets the eye?

  To answer this question, we’ll have to look beyond our everyday experiences and engage in some deep, philosophical reflection, asking ourselves what a soul is and what it really means to be alive. This isn’t quite the old question, “What’s the meaning of life?”, but it might very well be related to that oldie. Instead, the question is “What does it mean to say that a being is alive, has a soul, and is truly capable of virtue and intelligence, and what are the defining features of such a being?”

  My answer is that we have pretty good reason to think that apparently intelligent machines like Transformers actually are intelligent, living beings who have souls and can be genuinely virtuous. Or at least, we don’t have any less reason to think that a Transformer is alive and intelligent than we do any other apparently living, intelligent being. Once we’ve taken into consideration how a mechanical being acts and what it can do, I’m not sure that there’s anything about its being mechanical that should automatically disqualify it from having a soul and counting as authentically alive, intelligent, and virtuous. So my answer to the question posed in this paper’s title is a cautious and qualified “Yes,” and not just because (as I’m not ashamed to admit) Optimus Prime is still one of my heroes.

  Signs of Life: What the Transformers’ Universe Says

  Are Transformers alive? Stories about the Transformers certainly treat them as living beings. In particular, early issues of the old Marvel comic series often suggest that, while organic life and mechanical life are obviously different, one is no more or less a form of life than the other. But we’re asking whether Transformers, if they existed, really would count as living beings. Our inquiry, to be truly satisfying, has to go beyond just what the story says, meaning that it has to be something deeper than just an examination of what fiction writers have declared to be the case. Still, a look at what Transformers fiction says about its subjects and why they’re not merely machines, but living machines, will be of some help to us.

 
