The Consolations of Mortality

by Andrew Stark


  Shelley’s poem “Ozymandias” conjures up the image of an ancient, decapitated, and trunkless statue in the desert. The decaying pedestal reads, “My name is Ozymandias, King of Kings: Look on my works, ye mighty, and despair!” And yet no works, no marks, nothing but the “lone and level sands” remain, stretching endlessly in all directions.

  Imagine, though, that the marks Ozymandias made long ago had not crumbled into sand but had somehow remained with us, weathered but majestic in their desert space. Say that they took the form of a great city of marble, alabaster, and pink granite.13 Yes, visiting tourists would be able to answer the question “Who built this city?” It was Ozymandias. And yes, they could answer the question “Who was Ozymandias?” He was the builder of the city. But in knowing the name and the marks—which simply refer back to each other in an endless arid loop—would they have come any closer to remembering the man, the person, than if they had never heard of him or his marks?

  Suppose we are tourists beholding Ozymandias’s works, clutching our water bottles and snapping pictures with our phones. But now suppose, if I can adapt an old philosophical joke, that our tour guide rushes up to us with some late-breaking news. It’s just been discovered that all of these marks—the city’s buildings in all their splendor—weren’t built by Ozymandias after all. Instead, they were built by some other guy who also happened to be named Ozymandias. That would have no meaning for us. Nothing in our sense of Ozymandias would change because there is nothing to our sense of Ozymandias. The same name, the same marks—they could, for all we can bring to mind, have belonged to anybody.

  By contrast, a similar discovery about Churchill—that Winston Churchill did not deliver the “We Shall Never Surrender” speech; it was in fact given by some other man named Winston Churchill—would carry the shock of a thunderbolt. That’s because while Ozymandias’s name and marks don’t make us recall a particular person, Churchill’s do, even though we never met either one.

  For someone to place his imprint on the minds of future generations on into time—for someone to remain alive in memory—simply engraving his name and his marks in various spaces, physical and cyber, won’t suffice, even if those marks are great in number. Those gazing at them must have spent time with them. But how much time? Whatever time it takes for them to no longer feel that the marks could have been made by anyone—by any other person who happened to have been named Ozymandias—because those marks have solidified and jelled into one particular person in their mind.

  This is, of course, a psychological and not a logical process. The person we have in mind when we think of Churchill might turn out to bear only a partial resemblance to the real Churchill. But then we get our recollections of people terribly wrong all the time, profoundly misunderstanding them even if we actually have met them, even if we actually have spent time with them. Couple that with the likelihood that few of those who actually did meet Churchill in person spent nearly as much time in his presence as others, like his biographer Martin Gilbert, have spent thinking about, pondering, and steeping themselves in Churchill’s marks, his words or actions, even though they never encountered him. There’s no reason to think that our memory of a person we have met is necessarily more accurate than that of others who may never have met him, especially if they have spent much more time with his marks than we ever spent with him, the person.

  Saul Kripke devised his theory of reference in opposition to a rival theory (or set of theories) called descriptivism. Descriptivism, to simplify dramatically, says that a name attaches to a particular set of marks, even if in principle the person who made those marks could have been anyone. According to descriptivism, for example, we have come to use the name “Kurt Gödel” to refer to the person who discovered the Incompleteness Theorem. But suppose one day we learn that the person who discovered the Incompleteness Theorem was not in fact the man at Princeton we always thought was the one who accomplished that feat, but a woman at Duke. Then, according to descriptivism, she would be the person to whom we were always referring with the name “Kurt Gödel.” And, Kripke said, that just doesn’t make any sense.

  So descriptivism—which says that the name straightforwardly attaches to the marks, even though the person who made them could in theory be anyone—doesn’t seem to capture what it is to use a name to refer to someone. Descriptivism, though, does seem to capture a central truth about what it is to remember someone—and, in particular, someone we have never met. All we can ever know is that a name attaches to a given set of marks. Beyond that, as far as we are concerned in the vast majority of cases, that person could have been anyone. Perhaps descriptivism was always a better fit as a theory of remembrance than of reference.

  Likewise, what Kripke offered is a better theory of reference than remembrance. Kripke said that a name refers to a particular person, even though that person could have made any set of marks. So yes, when people on into the future speak the name “Nat Bailey” as they encounter it on a plaque, they will indeed be referring to the one particular person Nat Bailey—to him and no other—even if they have never heard of him before. But Nat Bailey would have been mistaken if he thought that those people would be remembering him: that he would be living on in their memory. While the name “Nat Bailey” certainly refers to the particular person Nat Bailey, it can’t make us remember the person Nat Bailey, call him in particular to mind. He could have been anyone. What Kripke offered was simply a theory of reference. It was not—as so many of us implicitly seem to think when we post our name and marks on buildings or benches—a theory of remembrance.14

  In his great book Naming and Necessity, Kripke himself says something quite revealing. While he can imagine Aristotle never actually doing “any of the things”—making any of the marks—“commonly attributed to him today,” such as composing the Ethics and the Politics, he can’t imagine Hitler ever having done anything other than evil. But then recognizing that such a statement sits uneasily with his theory that a name attaches to a person regardless of what marks he might have made, Kripke immediately backtracks. Hitler, Kripke acknowledges, “might have spent all his days [quietly] in Linz.” And had he done so, Kripke says, he still would have been Adolf Hitler, because “Adolf Hitler” is the name his parents gave to that particular person, not to a particular set of horrific marks left upon the world.15

  I think what Kripke was saying, with his little slip, was not that the person to whom the name “Hitler” refers couldn’t have spent a quiet life in Linz, leaving none of his evil marks on the world. It’s that the evil marks in question could have been made by no one but the person in Kripke’s mind, in all of our minds, whom we remember as Hitler. We have spent enough time with the name, and the marks, that the name is no longer just a phrase for whoever it was who made those marks. We know who that person is, even though we never met him. He—not just his name and deeds—will live in infamy.

  Remembrance of someone we have never met is based on an ongoing temporal process. It’s based on the repeated acts of thinking about, and dwelling on, the marks attached to a name. That makes sense. After all, the self, as we bundles of ego and anxiety think of it, is an entity that needs to move ever forward in time if it’s to continue living. And so, once it dies, it can continue to move forward in time—in memory, of course—only if at least some people continue spending enough time with the name and the associated marks that a distinct self emerges and continues to abide in their minds over the years, decades, and centuries. Otherwise, for them, that name and those marks could have belonged to anyone, “whoever she may be.”

  But for most of us mortals, our selves will not live on through our name and whatever marks we leave for the future, even if others, on into the ages, read them or see them. On into the future, those others will not spend the time necessary for a sense of our self to emerge in their minds and move forward with them. A name and a set of marks recorded on a spatial or cyberspatial object—a plaque or a website—might refer to us. But that won’t suffice to enable others to remember us, to make our self live on in their memory. They won’t be thinking of anyone in particular. The consolation for mortality that I have identified with Nat Bailey, on which our mere name and marks imprinted on the future can keep our mortal selves alive in a way that intimates immortality, won’t work. Not for almost all of us.

  *

  Can mortality, in any meaningful way, intimate immortality? Can I realize, within the confines of my mortal life, the various good things that immortality seems to promise?

  Yes, says Gordon Bell. All I have to do is record the moments of my life, the entire contents of my memory, in real-time audio, video, and textual files and then post them online. Those moments will then remain alive indefinitely as long as others, on into the unending eons, view them in my digitized life-log. And so I myself don’t have to live on to keep my cherished memories, my own precious trove of knowledge of the past, alive forever. Anyone else can do that for me.

  And yes, mortality might be as good as immortality if I can, in my twilight years, attain “closure.” Suppose, as death nears, that I am able to “close the door” or “turn the latch” on the moments of my life, leaving them pristine ever after as if they occupied some kind of sealed room or cell. Then I wouldn’t have to live on to fight with others over what my life meant, over the main lines of its narrative, over the significance of its events. My imprint on my own past, the meaning I gave my own life, would be the definitive one.

  And yes, mortality might be as good as immortality if it makes sense for me to equate my self with what the philosopher Mark Johnston calls a bare “quasi-spatial arena.” Then, even though that arena will disappear with my death, I could still access whatever precious knowledge others might amass—about the secrets of God or consciousness or the universe—on into the future. All I would need to do while alive is place all future humans, instead of Andrew Stark, at the center of my arena, leading a life dedicated to their interests. And then whatever knowledge they accumulated during their lives on into the millennia would belong to me no less than it would belong to those future humans. After all, my claim to their lives would be just as strong as theirs. That’s because all there is to making any given life one’s own is choosing to place it at the focus of one’s arena.

  And finally, yes, mortality might be as good as immortality if it makes sense for me to view my self as if it were nothing more than the mere referent of my name. Then, as long as people are reading that name on into the future, whether on a wall plaque or in a book dedication, they will be keeping me—the person I was—alive in memory. I myself don’t have to live on for my self to live on. My imprint on the future, the marks I made during my mortal life, will keep me alive. In the minds of others, if not in the world.

  Unfortunately none of these ingenious ideas speaks to our central psychological reality. Certainly not for us bundles of ego and anxiety who seek consolation for our mortality. Our self, as we see it, is something that must move relentlessly forward into the future if it is to survive. It’s hardly the mere referent of a name. Or a bare arena. For us, too, the moments of our lives must flow back ever further into the past as soon as they happen. They can never be permanently freeze-dried into mere files on a server. Or preserved as pristine items in a locked room. And so they are fated to become forever irrecoverable in the intimate ways in which we ourselves knew them, yet ever vulnerable to the foreign interpretations others on into the future will place on them.

  The self as the mere referent of a name. Or a bare arena. The moments of our lives as files on a server. Or items in a locked room. Referents, files, arenas, and rooms are dry, static husks. In order to believe that our mortal selves and our mortal lives could even begin to give us the good things that their immortal versions would, we have to pretend that those selves and lives are bare shadows of what they actually are. We have to pretend that they are already halfway dead.

  And so mortality cannot intimate, cannot give us, the good things that immortality would. We shall have to look elsewhere for consolation. Perhaps if we view matters in the right way, we will see that immortality—real immortality, not the intimated sort—would be terrible for us. I turn to this consolatory idea in the book’s next part.

  PART 3

  Immortality Would Be Malignant

  nine

  IS THIS ALL THERE IS?

  It’s evening on Lake Como. The young music professor Shawmut, in Saul Bellow’s story “Him with His Foot in His Mouth,” reads his conference paper to Kippenberg, an older and far more prominent musicologist, author of the definitive work on Rossini and bearer of “eyebrows like caterpillars from the Tree of Knowledge.” Worried that his prose is failing to impress the great man, Shawmut sheepishly remarks: “I’m afraid I’m putting you to sleep, Professor.” The master replies: “No, no—on the contrary, you’re keeping me awake.”1

  As if it were thumbing its nose at its own reputation for monotony and uniformity, boredom has gone out into the world and amassed for itself an impressive variety of classifications and categorizations. Stendhal distinguished between “still” and “bustling” boredoms.2 For the philosopher Sean Healy, what’s key is the dichotomy between the boredom of “restlessness” and the boredom of “torpor.”3 Heidegger distinguished among the limbo boredom of waiting for a train, the empty boredom of attending a cocktail party, and the deeper boredom that comes from personal inauthenticity—from leading a life that isn’t your own.

  But I like Bellow’s distinction the best. The exchange between Shawmut and Kippenberg captures two immediately recognizable strains of boredom. If you are experiencing the one, then however much you would prefer to be in a state of unconsciousness rather than continue listening to (say) Professor X’s tedious lecture on associative learning in sea slugs, you can’t nod off, because the rattling of his voice and the clanking of the air conditioner are keeping you awake. If you are undergoing the other, then however much you might like to listen to Professor Y’s informative lecture on consolations for mortality, you can’t, because his droning delivery and the murmur of the heating system are putting you to sleep.

  Now think of the two types of endless boredom that different writers, seeking to console us for our mortality, have predicted would be our sorry fate if we were immortal. Each simply extends one of these dual themes. In the first scenario, the problem with immortality would be that over enough time, we humans would experience everything that a person can possibly experience. But we would no more be able to seek relief in death from the world’s now-wearisome noise than you, sitting in Professor X’s class, can seek relief in sleep from his continued tedious natterings. Call this the boredom of exquisite ennui. In the other scenario, the problem with immortality is that, almost immediately, a deep inertia would descend upon immortals, preventing them from experiencing anything at all. No matter how worthwhile or tempting any experience might be, immortals would feel no more urgency about seeking it out—knowing that they could always get around to it later—than you would feel about staying awake during Professor Y’s informative lecture, if you knew that you could always view it tomorrow online or read his book. Call this the boredom of profound lethargy.4

  So: for those who believe that immortality would be cosmically boring, there are two scenarios. After enough time has passed, immortals would eventually feel that they have seen absolutely everything, and so suffer exquisite ennui. Or else, right from the outset, they would do absolutely nothing and so suffer profound lethargy. Yet in fact, these seemingly opposite possibilities are simply two sides of the same coin.

  Think of a trip to the Eiffel Tower. At the most abstract level, with all the specific details bleached out, one trip is like any other. At the most concrete level, by contrast, with all the details factored in—the precise angle of the sun, the haze in the air, the acidity of the rain the previous night, and the way the Tower accordingly glints and glistens—no one trip is like any other.

  Now: The more abstractly we view the events of our life absent any of the differentiating details—the more this year’s visit to the Tower, for example, seems just like last year’s—the sooner we will conclude that we have seen everything and thus begin to experience the boredom of ennui. But equally, the more abstractly we view the events of our life absent any of the differentiating details—the more we expect that a visit to the Tower next year will be just like one we might take this year, so why rush?—the more likely we also are to do nothing, put things off, and experience the boredom of lethargy.

  The two boredoms—the twin boredoms that those who would console us for our mortality foresee in immortality—can thus coexist, because they emerge from the same worldview. It’s a view of the world shorn of specificities, understood by immortals in terms of simple abstract universals. And the issue is not simply that one trip to the Eiffel Tower becomes like any other, but that one trip no matter where becomes just like any other trip no matter where. Then one activity becomes like any other. And finally, at an abstract enough level, life itself comes to be a single undifferentiated and unending event, an unwavering gray haze, “the humming of a single sound in the ear,” as the poet Anthony Hecht puts it. Or a “gnawingly hypnotic rotary hum so total it might have been silence itself,” as David Foster Wallace says.5 At this abstract level, “all things” become, in the words of Lucretius, “the same forever.”6 Once they have come to look at the world this way, bored immortals will conclude that they have seen everything and find that they have the motivation to do nothing. Ennui and lethargy converge.

  In this way our experience of immortality would strangely mirror God’s experience of eternity, at least as imagined by medieval church fathers.7 Since God exists outside of time, all of time is spread out before Him in a single vista. He can see everything that has ever happened and that ever will happen. And yet, theologians have argued, a timeless God would actually do nothing—at least, nothing that resembles human action. After all, actions can take place only in time, and He abides outside of it.8 Doing nothing, while having seen everything there is to see: not much suspense or excitement in God’s experience of eternity. No more, perhaps, than would result from human immortality, in which we too, sooner or later, would do nothing while having seen everything there is to see. As long as God can’t do much better with eternity than we could with immortality, maybe—so the boredom consolationists might suggest—mortality isn’t so bad after all.

 
