Zombies, Vampires, and Philosophy


by Richard Greene; K. Silem Mohammad


  Descartes, it now seems, dismissed the possibility of humanoid zombies too soon. He underestimated the potential of machines. In 1937 Alan Turing demonstrated the theoretical possibility of fully programmable devices, or “universal machines,” paving the way for the development of digital computers and showing how the infinite variety of behaviors “guided by will,” such as speech, might be, in principle, explained as mechanical effects of computation. Much as the heart is a pump, and biological inheritance is DNA replication, Turing-inspired “functionalists” (or “cognitivists”) hypothesize that the brain is a computer. Thought is computation.

  Zombies Ate My Brain!

  RACHEL [TO DECKARD]: Have you ever “retired” a human by mistake?

  Just when everything seemed to be going materialism’s way—just when it was finally dawning what sort of mechanisms minds might be and what sort of material processes thought might be—along came zombies. At first materialists were in denial. They wanted to say, “A thought experiment—the mere possibility of zombies—proves nothing. Only if there really were NEXUS 6 units like Rachel, and they really weren’t conscious, would it refute behaviorism; only if there really were NEXUS 7 units (say) that computed the same subfunctions as brains, yet unconscious, would it refute functionalism; only if there really were NEXUS 8 units (say) with brains physiologically indistinguishable from human brains, yet unconscious, would it refute mind-brain identity theory. That such units conceivably might not be conscious proves nothing.” It was here that zombies in philosophy turned ugly. Modern materialism says that everything mental is identical (and scientifically identifiable) with something physical, just as water is identical (and scientifically identifiable) with H2O. Such scientific identification, however, entails that water is not just actually H2O (“in this possible world,” philosophers say); water is necessarily H2O (“in all possible worlds,” philosophers say). It has to be H2O, or else it wouldn’t be water. Essential scientific identifications such as that of water with H2O, when true, are not possibly or conceivably otherwise. So the best philosophical opinion, following Saul Kripke, now has it.36

  But to imagine the presence of adaptive behavior, or underlying computational processes, or even brain processes as such, is not ipso facto to imagine conscious experience. Zombie thought experiments seem variously to imagine beings with humanoid behavior, programs, and brains, without subjective experiences. Zombies, so conceived, seem possible in ways inconsistent with the truth of all would-be materialist identifications. So the story goes.

  “And since subjectivity can’t be identified with any sort of material processes—and I am directly aware in my own case that it exists—it must be essentially separate and immaterial, confirming dualism. On the main point, Descartes was right after all!” So, it seems, the story would continue—if zombies really are conceivable. Alas, it seems they too readily are.

  To evoke zombies, John Searle instructs, “always think of [the thought experiment] from the first-person point of view”: you have to imagine yourself as the zombie. As Searle sets the stage, suppose that doctors gradually replace your brain with silicon chips, perhaps to remedy its progressive deterioration. From here, you may conjure yourself a zombie by imagining as follows: . . . as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. . . . [You’ve gone blind but] you hear your voice saying in a way that is completely out of your control, “I see a red object in front of me” . . . imagine that your conscious experience slowly shrinks to nothing while your externally observable behavior remains the same.37

  You have imagined yourself gradually becoming a zombie. And no sooner are such zombies conjured, of course, than they’re off on their rampage. Against behaviorism, “we imagined . . . the behavior was unaffected, but the mental states disappeared.” And there’s no way to stop them before they transmogrify. Suppose the replacement chips implement the same programs, maintaining all the same internal functions as the brain cells replaced. The replacement chips perform exactly the same computations as the brain. There goes the functionalist idea that thought is computation—zombies ate its program.

  Here Searle, himself hoping to spare some vestige of materialism, leaves off. But zombies are not so easily stopped. Suppose silicon-chip-replacement therapy is unavailable. The medical doctors are powerless. You have heard of a certain witch doctor. Desperate, you fly to a remote isle, voodoo rites are enacted, and voilà. The deterioration of your brain is magically reversed. To the amazed medical doctors back at the clinic, your brain is indistinguishable from your very own predeteriorated brain. But wait! As your brain is being magically restored . . . it’s just as before. Your conscious experience slowly shrinks to nothing. You’re a zombie. There goes mind-brain identity. Zombies ate its brain. Though some (including Searle himself) deny that such fully human bio-zombies really are conceivable, none have shown the contradiction in it. Indeed, it is just this possibility that philosophers who have raised the “other minds problem” seem to be conceiving.

  Plan B from Inner Space: Revenge of the Zombies

  TYRELL [TO DECKARD]: “More human than human” is our motto.

  Since they do seem to be conceivable, bad brain-eating zombies must be stopped before they destroy civilized philosophy of mind and scientific psychology as we know it. I have a plan. Much as Earth enlisted Godzilla to battle Ghidrah in Ghidrah, the Three-Headed Monster, I propose to create new breeds of zombies to battle their evil cousins. Familiar zombies, conjured, as it were, from the second-person familiar perspective, increase our attachment: there’s I and thou and thou art zombie. Super-smart zombies are not just as smart as us, but smarter. Being in these ways superlatively human, such zombies resist dementalization. Their imagined or stipulated lack of subjective experiences seems not to impugn the genuineness of their apparent intellectual endowments. These zombies can battle their evil cousins to a standoff, at least, and perhaps even defeat them.

  To conjure familiar zombies, put yourself in Deckard’s shoes. Imagine your own true love to be a zombie. Imagine your beloved has no subjective experience of sensations, no inwardly felt experiences (what philosophers call “qualia”) whatsoever. Yet your beloved, we’re imagining, behaves in every way just as your real-life beloved (who, presumably, is not a zombie) really behaves: smiles as warm; kisses as sweet; loyalty as steadfast; words as tender; love as true. Only the “light” isn’t on. No qualia.

  I submit that you should not deny your true love’s cognitive abilities and attainments. He or she still wants you to prosper, still knows your preferences, still prefers scotch to bourbon, and so on. To strengthen the intuition, extend the fantasy to your mother, your father, your children, all your friends, siblings, colleagues, teachers, everyone you know. They’re all zombies! Should you conclude that your beloved and the rest don’t think, that they know nothing at all and don’t understand English? It was from Mother, Father, and the rest, in particular, that you got your English, and such words as think, know, and understand. I think you should conclude, “How odd! I alone have these peculiar subjective experiences besides the wants, beliefs, and so forth, that others have.” The thought of mental life without subjective experiences may be horrible, but it’s conceivable. Familiar zombies show this.

  As for super-smartness . . . since the NEXUS 6 Replicants were “at least equal in intelligence to the genetic engineers who created [them],” let us suppose, off-world, they undertake genetically re-engineering themselves or their descendants. Suppose these descendants, NEXUS 9s, are decidedly more intelligent than the human genetic engineers who created their forebears (except, still no qualia). Suppose these NEXUS 9s return to earth. They show us how to make our microwaves synthesize food out of thin air and how to turn our Ford Tempos into time machines. They show us how to achieve peace on earth, with liberty and justice for all. What should we say? That our NEXUS 9 benefactors didn’t really know how to turn Tempos into time machines? That they didn’t really understand the revolutionary physical principles involved, but now we do? I think we’re not as ungrateful and conceited as that. We’d say our subjective-experience-bereft benefactors knew how to turn Tempos into time machines.

  Finally, imagine zombies both smart and familiar—or, rather, venerated. Imagine it is discovered that many, most, or even all of the leading contributors to our human intellectual heritage(s) were zombies (Descartes, especially, included). Nevertheless, I submit, we should not deny their mental attainments. Despite not meeting the would-be dualistic essential condition for thought, despite being bereft of subjective experiences or “qualia,” these famously smart zombies remain paradigm thinkers on the strength of their achievements. Intelligent is as intelligent does, absence of itches, aches, tingles, visual images, and such, notwithstanding. Indeed, the surpassing greatness of their intellectual attainments makes super-smart venerable zombies especially easy to imagine, especially given the well-known antipathy between thought and feeling. No one ever solved an equation or proved a theorem in the throes of agony or orgasm.

  How I Learned to Stop Worrying and Love the Zomb

  DECKARD [TO BRYANT]: And if the machine doesn’t work?

  It seems that intuitions about thoughts (cognitive states like belief and inference) and feelings (sensations such as itches and afterimages) diverge under different zombie-thought-experimental conditions. Brain-eating zombies like Searle’s undermine materialist accounts of sensation; qualia-eating zombies like mine undermine dualistic accounts of thought. Furthermore, the supposition that thought somehow requires feeling—besides being contrary to the well-known antipathy just noted—would be indecisive in its upshot. Even assuming this no-thought-without-feeling “Connection Principle” (as Searle dubs it), the question remains: should we conclude that intelligent-acting androids are not really thinking (given their imagined lack of feelings); or should we rather conclude (given their evidently thoughtful behavior) that they have feelings after all?

  Is it a standoff, then? Does dualism rule the experiential realm and materialism the intellectual? Yet, even this partial “triumph” of dualism would seem to be strangely empty and inconsequential. As pure sensations or experiences, shorn of every concomitant physical (behavioral, functional, neurophysiological) element, qualia are “saved” from materialistic identification precisely by being conceived as ineffectual; as events lacking further effects, what philosophers call “epiphenomena.” This is suspicious. Not everything thought to be conceivable really is so. Materialists who were in denial thought they could conceive of water not being H2O, but they were mistaken. Perhaps it’s the same with the supposed conception of beings “which are physically and functionally identical [to us], but which lack [subjective] experience”: perhaps such “phenomenal zombies,” as David Chalmers calls them, likewise, only seem to be conceivable, but really are not. Chalmers notes parenthetically, “It is not surprising that phenomenal zombies have not been popular in Hollywood, as there would be obvious problems with their depiction.” “Problems” to say the least! They look and act exactly like you and me. In the case of NEXUS 8 bio-zombies (imagined above), neither CAT scans nor MRIs nor any other conceivable objective tests could distinguish phenomenal zombies from ordinary human beings. This being the case, the “problems with their depiction” that Chalmers notes are likewise problems with their conception. What exactly are we supposed to be imagining when we imagine beings “physically identical to me (or to any other conscious being), but lacking conscious experiences altogether”?38

  Here we are in a position to appreciate Searle’s observation that the conception of phenomenal zombies has to be done “from the first-person point of view”; but who is that first person? Whose point of view is it? It must be the zombie’s, but the zombie, “lacking conscious experiences altogether,” is supposed to lack a “first-person point of view”! Like mad scientists in the movies who are destroyed by their own creations, it seems that the zombie-spawning thought experiments destroy the experimenters themselves, and with them the experiments. If qualia are required for a first-person point of view, the experiment abolishes the viewpoint on which it depends. It seems that zombies, then, are not coherently conceivable after all.

  So is that the end? Have zombies been defeated? Has Plan A prevailed after all? Has thoroughgoing materialism been saved? I wouldn’t be too sure. First, it seems we can imagine the process of zombification (as in Searle’s evocation) even though we can’t quite see our way to the end. So long as the subject’s awareness hasn’t shrunk to zero, there’s a first-person point of view that can be imaginatively taken. Perhaps, where imagination leaves off, we extrapolate (“and so on”) our way to the end. Furthermore, the incoherence of a first-person narrative minus the first person only arises where qualia are totally absent: for so-called “absent qualia” scenarios. “Inverted qualia” (I-see-red-where-you-see-green type) scenarios would still seem conceivable, and pose similar challenges to materialism. If either green experiences or red experiences might conceivably “supervene” (as philosophers say) on the very same state of the brain, then that state cannot be identified with either experience.

  Fortunately, while good, qualia-eating zombies do not provide full immunity, they do provide a measure of protection, like a flu vaccine: you still get zombies, but a milder case. It is only so far as the mental states in question depend on qualia for their existence, only so far as they are inconceivable without qualia, that bad, brain-eating zombies show that the mental states cannot be identical with brain states. Good zombies show that this is not that far for mental states involved in cognitive thought. For a good portion of our mental life (the whole cognitive part, it seems), qualia are inessential; so good zombies seem to show.

  As for sensations (itches, aches, afterimages, tastes, and the like), though prospects for materialistic identifications of these remain in jeopardy, even here, good zombies diminish the havoc brain-eaters would otherwise wreak. Computers equipped with sensors, for instance, are “sentient” in a sense. They get information about their surroundings from ambient light (as in vision), vibration (as in hearing), and chemistry (as in taste) even if not in the sense of having subjective visual, auditory, and taste experiences. They can still be said (I think unequivocally) to “see” things they visually detect, “hear” sounds they aurally discern, and “taste” flavors they chemically differentiate.

  What about consciousness? How can zombies behave intelligently if they’re unconscious? Good zombies are conscious in the sense of being cognizant of things (registering their presence) or being cognizant that certain things are or aren’t the case (representing them as so being) despite not being “phenomenally conscious” (possessed of qualia). Conceivably, your imagined zombie lover is in this manner aware of your presence: he or she registers it, and responds just as your real lover would. Conceivably NEXUS 9 astro-zombies are aware that Tempos are gas-powered: they represent that fact and respond accordingly. This much is cognitive. It’s the specifically phenomenal (subjectively felt), not the cognitive (rational representational) aspect of consciousness that’s supposed to be lacking.

  As for self-consciousness and subjectivity, good zombies, it seems, can even be “self-aware” in the sense of having access to their own internal states. Some computer programs, for instance, maintain state variables. Good zombies, it seems, can have points of view both literally (the loci of their visual sensors) and figuratively (in the form of unique overall representations of reality).

  As for the soul, being pure sensations or “raw feels,” qualia seem very far removed from that spiritual concept and from actions “guided by the will.” Even if they do lack raw feels (as we have been imagining), Replicants still realize they are all too mortal, fear death, and “want more life,” or so they say. And they say it with feeling. Or so it seems.

  Do Androids Dream of Electric Sheep?

  DECKARD [TO TYRELL]: How can it not know what it is?

  Much as philosophy zombies are supposed to copy human beings in every way except their feelings, Blade Runner Replicants were “designed to copy human beings in every way except their emotions.” However, “the designers reckoned that in a few years they might develop their own emotional responses. Hate, love, fear, anger, envy” (Bryant to Deckard); and in the film, it seems their designers were right. From hot-headed Leon (“let me tell you about my mother!”) Kowalski, to Blake-misquoting Roy (“tears in rain”) Batty, Blade Runner Replicants seem far from emotionless; indeed, they are almost overwrought. Perhaps they protest too much. The question Leon asks Deckard—“Painful living in fear, isn’t it?”—is repeated by Roy in the climactic scene. Is it just a canned response, simulating emotion? When Rachel says to Deckard, “I love you . . . I trust you,” is that a lie?

  In Blade Runner it takes Deckard more than a hundred questions to determine that Rachel is inhuman. Being inhuman, she is denied moral standing. She can be summarily “retired” without trial or justification. To be morally disenfranchised on the basis of barely discernible differences in involuntary emotional responses—“capillary dilation of the so-called blush response, fluctuation of the pupil, involuntary dilation of the iris”—seems like picky grading, unless these involuntary emotional response differences are indicative of some deeper lack. But how would we know that?
