I imagine an ancient Egyptian woman, say, who catches a man looking tenderly into her eyes, up at the far extreme of her body near her useless, good-for-nothing brains, and chastises him, hand at her chest. Hey. I’m down here.
A Brief History of the Soul
The meaning and usage of the word “soul” in ancient Greece (written as “psyche”4) changes dramatically from century to century, and from philosopher to philosopher. It’s fairly difficult to sort it all out. Of course people don’t speak in twenty-first-century America the way they did in nineteenth-century America, but scholars of the next millennium will have a hard time becoming as sensitive to those differences as we are. Even differences of four hundred years are sometimes tricky to keep in mind: when Shakespeare writes of his beloved that “black wires grow on her head,” it’s easy to forget that electricity was still several centuries away. He’s not likening his lover’s hair to the shelves of RadioShack. And smaller and more nuanced distinctions are gnarlier by far. “Hah, that’s so ’80s,” we sometimes said to our friends’ jokes, as early as the ’90s … Can you imagine looking at a text from 460 B.C. and realizing that the author is talking ironically like someone from 470 B.C.?
Back to “soul”: the full story runs long, but a number of fascinating points are raised at various moments in history. In Plato’s Phaedo (360 B.C.), Socrates, facing his impending execution, argues that the soul is (in scholar Hendrik Lorenz’s words) “less subject to dissolution and destruction than the body, rather than, as the popular view has it, more so.” More so! This fascinated me to read. Socrates was arguing that the soul somehow transcended matter, whereas his countrymen, it would seem, tended to believe that the soul was made of a supremely gossamer, delicate, fine form of matter5—this was Heraclitus’s view6—and was therefore more vulnerable than the meatier, hardier tissues of the body. Though at first the notion of a fragile, material soul seems ludicrously out of line with everything we traditionally imagine about the soul, it makes more sense of, even if it offers less consolation for, things like head injury and Alzheimer’s. Likewise, part of the debate over abortion involves the question of when, exactly, a person becomes a person. The human body, Greeks of the fourth century B.C. believed, can both pre- and postdate the soul.
Along with questions of the composition and durability of the soul came questions of who and what had them. It’s not just the psychologists who have been invested in The Sentence: philosophers, too, seem oddly riveted on staking out just exactly what makes Homo sapiens different and unique. Though Homer only used the word “psyche” in the context of humans, many of the thinkers and writers that followed him began to apply it considerably more liberally. Empedocles, Anaxagoras, and Democritus referred to plants and animals with the same word; Empedocles believed he was a bush in a previous life; Thales of Miletus suspected that magnets, because they had the power to move other objects, might have souls.
Oddly, the word appears to have been used both more broadly and more narrowly than it tends to be used in our culture today. It’s used to describe a general kind of “life force” that animates everything from humans to grasses, but it’s also construed specifically quite intellectually. In the Phaedo, the earlier of Plato’s two major works on the soul, Socrates ascribes beliefs, pleasures, desires, and fears to the body, while the soul is in charge of regulating these and of “grasping truth.”
In Plato’s later work, The Republic, he describes the soul as having three distinct parts—“appetite,” “spirit,” and “reason”—with those first two “lower” parts taking those duties (hunger, fear, and the like) from the body.
Like Plato, Aristotle didn’t believe that people had a soul—he believed we had three. His three were somewhat different from Plato’s, but they match up fairly well. For Aristotle, all plants and animals have a “nutritive” soul, which arises from biological nourishment and growth, and all animals additionally have an “appetitive” soul, which arises from movement and action. But humans alone had a third, “rational” soul.
I say “arises from” as opposed to “governs” or something along those lines; Aristotle was quite interesting in this regard. For him the soul was the effect of behavior, not the cause. Questions like this continue to haunt the Turing test, which ascribes intelligence purely on the basis of behavior.
After Plato and Aristotle came a school of Greek philosophy called Stoicism. Stoics placed the mind at the heart, and appear to have taken a dramatic step of severing the notion of the “soul” from the notion of life in general: for them, unlike for Plato and Aristotle, plants did not have souls. Thus, as Stoicism ascended to popularity in Greece, the soul became no longer responsible for life function in general, but specifically for its mental and psychological aspects.7
No Dogs Go to Heaven
Stoicism appears to have been among the tributary philosophies that fed into Christianity, and which also led to the seminal philosophical theories of mind of René Descartes. For the monotheistic Descartes, presumably the (Platonic) notion of multiple souls crowding around was a bit unsavory (although who could deny the Christian appeal of the three-in-one-ness?), and so he looked to draw that us-and-them line using just a single soul, the soul. He went remarkably further than Aristotle, saying, in effect, that all animals besides humans don’t have any kind of soul at all.
Now, any kid who grows up going to Sunday school knows that this is a touchy point of Christian theology. All kids ask uncomfortable questions once their pets start to die, and tend to get relatively awkward or ad hoc answers. It comes up all over the place in mainstream culture too, from the deliberately provocative title of All Dogs Go to Heaven to the wonderful moment in Chocolat when the new priest, tongue-tied and flummoxed by a parishioner’s asking whether it was sinful for his (soulless) dog to enter a sweet shop during Lent, summarily prescribes some Hail Marys and Our Fathers and slams the confessional window. End of discussion.
Where some of the Greeks had imagined animals and even plants as “ensouled”—Empedocles thinking he’d lived as a bush in a past life—Descartes, in contrast, was firm and unapologetic. Even Aristotle’s idea of multiple souls, or Plato’s of partial souls, didn’t satisfy him. Our proprietary, uniquely human soul was the only one. No dogs go to heaven.
The End to End All Ends: Eudaimonia
Where is all this soul talk going, though? To describe our animating force is to describe our nature, and our place in the world, which is to describe how we ought to live.
Aristotle, in the fourth century B.C., tackled the issue in The Nicomachean Ethics. The main argument of The Nicomachean Ethics, one of his most famous works, goes a little something like this. In life there are means and ends: we do x so that y. But most “ends” are just, themselves, means to other ends. We gas up our car to go to the store, go to the store to buy printer paper, buy printer paper to send out our résumé, send out our résumé to get a job, get a job to make money, make money to buy food, buy food to stay alive, stay alive to … well, what, exactly, is the goal of living?
There’s one end, only one, Aristotle says, which doesn’t give way to some other end behind it. The name for this end, εὐδαιμονία in Greek—we write it “eudaimonia”—has various translations: “happiness” is the most common, and “success” and “flourishing” are others. Etymologically, it means something along the lines of “well-being of spirit.” I like “flourishing” best as a translation—it doesn’t allow for the superficially hedonistic or passive pleasures that can sometimes sneak in under the umbrella of “happiness” (eating Fritos often makes me “happy,” but it’s not clear that I “flourish” by doing so), nor the superficially competitive and potentially cutthroat aspects of “success” (I might “succeed” by beating my middle school classmate at paper football, or by getting away with massive investor fraud, or by killing a rival in a duel, but again, none of these seems to have much to do with “flourishing”). Like the botanical metaphor underneath it, “flourishing” suggests transience,
ephemerality, a kind of process-over-product emphasis, as well as the sense—which is crucial in Aristotle—of doing what one is meant to do, fulfilling one’s promise and potential.
Another critical strike against “happiness”—and a reason that it’s slightly closer to “success”—is that the Greeks don’t appear to care about what you actually feel. Eudaimonia is eudaimonia, whether you recognize and experience it or not. You can think you have it and be wrong; you can think you don’t have it and be wrong.8
Crucial to eudaimonia is ἀρετή—“arete”—translated as “excellence” and “fulfillment of purpose.” Arete applies equally to the organic and the inorganic: a blossoming tree in the spring has arete, and a sharp kitchen knife chopping a carrot has it.
To borrow from a radically different philosopher—Nietzsche—“There is nothing better than what is good! and that is: to have a certain kind of capacity and to use it.” In a gentler, slightly more botanical sense, this is Aristotle’s point too. And so the task he sets out for himself is to figure out the capacity of humans. Flowers are meant to bloom; knives are meant to cut; what are we meant to do?
Aristotle’s Sentence; Aristotle’s Sentence Fails
Aristotle took what I think is a pretty reasonable approach and decided to address the question of humans’ purpose by looking at what capacities they had that animals lacked. Plants could derive nourishment and thrive physically; animals seemed to have wills and desires, and could move and run and hunt and create basic social structures; but only humans, it seemed, could reason.
Thus, says Aristotle, the human arete lies in contemplation—“perfect happiness is a kind of contemplative activity,” he says, adding for good measure that “the activity of the gods … must be a form of contemplation.” We can only imagine how unbelievably convenient a conclusion this is for a professional philosopher to draw—and we may rightly suspect a conflict of interest. Then again, it’s hard to say whether his conclusions derived from his lifestyle or his lifestyle derived from his conclusions, and so we shouldn’t be so quick to judge. Plus, who among us wouldn’t have some self-interest in describing their notion of “the most human human”? Still, despite the grain of salt that “thinkers’ praise of thinking” should have been taken with, the emphasis they placed on reason seemed to stick.
The Cogito
The emphasis on reason has its backers in Greek thought, not just with Aristotle. The Stoics, as we saw, also shrank the soul’s domain to that of reason. But Aristotle’s view on reason is tempered by his belief that sensory impressions are the currency, or language, of thought. (The Epicureans, the rivals of the Stoics, believed sensory experience—what contemporary philosophers call qualia—rather than intellectual thought, to be the distinguishing feature of beings with souls.) But Plato seemed to want as little to do with the actual, raw experience of the world as possible, preferring the relative perfection and clarity of abstraction, and, before him, Socrates spoke of how a mind that focused too much on sense experience was “drunk,” “distracted,” and “blinded.”9
Descartes, in the seventeenth century, picks up these threads and leverages the mistrust of the senses toward a kind of radical skepticism: How do I know my hands are really in front of me? How do I know the world actually exists? How do I know that I exist?
His answer becomes the most famous sentence in all of philosophy. Cogito ergo sum. I think, therefore I am.
I think, therefore I am—not “I register the world” (as Epicurus might have put it), or “I experience,” or “I feel,” or “I desire,” or “I recognize,” or “I sense.” No. I think. The capacity furthest away from lived reality is that which assures us of lived reality—at least, so says Descartes.
This is one of the most interesting subplots, and ironies, in the story of AI, because it was deductive logic, a field that Aristotle helped invent, that was the very first domino to fall.
Logic Gates
It begins, you might say, in the nineteenth century, when the English mathematician and philosopher George Boole works out and publishes a system for describing logic in terms of conjunctions of three basic operations: AND, OR,10 and NOT. The idea is that you begin with any number of simple statements, and by passing them through a kind of flowchart of ANDs, ORs, and NOTs, you can build up and break down statements of essentially endless complexity. For the most part, Boole’s system is ignored, read only by academic logicians and considered of little practical use, until in the mid-1930s an undergraduate at the University of Michigan by the name of Claude Shannon runs into Boole’s ideas in a logic course, en route to a mathematics and electrical engineering dual degree. In 1937, when he is a twenty-one-year-old graduate student at MIT, something clicks in his mind; the two disciplines bridge and merge like a deck of cards. You can implement Boolean logic electrically, he realizes, and in what has been called “the most important master’s thesis of all time,” he explains how. Thus is born the electronic “logic gate”—and soon enough, the processor.
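Boole’s “flowchart” idea is easy to see in miniature. Here is a minimal sketch (the function names are my own, not Boole’s or Shannon’s notation) showing how the three primitives compose into something more complex—exclusive-or, true when exactly one of its inputs is true:

```python
# Boole's three primitive operations, modeled as plain functions.
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

# A small "flowchart" of gates: exclusive-or, built entirely from
# the three primitives above. True when exactly one input is true.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# Walk through every input combination, as a truth table would.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", XOR(a, b))
```

Shannon’s insight was that each of these functions can be realized as a physical circuit of switches, so the same composition works in relays and transistors as well as on paper.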
Shannon notes, also, that you might be able to think of numbers in terms of Boolean logic, namely, by thinking of each number as a series of true-or-false statements about the numbers that it contains—specifically, which powers of 2 (1, 2, 4, 8, 16 …) it contains, because every integer can be made from adding up at most one of each. For instance, 3 contains 1 and 2 but not 4, 8, 16, and so on; 5 contains 4 and 1 but not 2; and 15 contains 1, 2, 4, and 8. Thus a set of Boolean logic gates could treat them as bundles of logic, true and false, yeses and noes. This system of representing numbers is familiar to even those of us who have never heard of Shannon or Boole—it is, of course, binary.
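The decomposition Shannon had in mind can be sketched in a few lines. This illustrative function (my own, for demonstration) answers the series of true-or-false questions—“does this number contain this power of 2?”—and so recovers the examples above:

```python
# A number as a bundle of true/false statements: which powers of 2
# (1, 2, 4, 8, 16, ...) does it contain? This is exactly its binary
# representation, read as a list of the powers whose bit is set.
def powers_of_two(n):
    powers = []
    p = 1
    while p <= n:
        if n & p:          # bitwise AND: is this power's bit set?
            powers.append(p)
        p *= 2
    return powers

print(powers_of_two(3))    # 3 contains 1 and 2
print(powers_of_two(5))    # 5 contains 1 and 4
print(powers_of_two(15))   # 15 contains 1, 2, 4, and 8
```

Note that the check itself uses a bitwise AND—the very Boolean operation Shannon showed could be built from switches—so the representation and the machinery that manipulates it are made of the same stuff.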
Thus, in one fell swoop, the master’s thesis of twenty-one-year-old Claude Shannon will break the ground for the processor and for digital mathematics. And it will make his future wife’s profession—although he hasn’t met her yet—obsolete.
And it does more than that. It forms a major part of the recent history—from the mechanical logic gates of Charles Babbage through the integrated circuits in our computers today—that ends up amounting to a huge blow to humans’ unique claim to and dominance of the area of “reasoning.” Computers, lacking almost everything else that makes humans humans, have our unique piece in spades. They have more of it than we do. So what do we make of this? How has this affected and been affected by our sense of self? How should it?
First, let’s have a closer look at the philosophy surrounding, and migration of, the self in times a little closer to home: the twentieth century.
Death Goes to the Head
Like our reprimanded ogler, like philosophy between Aristotle and Descartes, the gaze (if you will) of the medical community and the legal community moves upward too, abandoning the cardiopulmonary region as the brain becomes the center not only of life but of death. For most of human history, breath and heartbeat were the factors considered relevant for determining if a person was “dead” or not. But in the twentieth century, the determination of death became less and less clear, and so did its definition, which seemed to have less and less to do with the heart and lungs. This shift was brought on both by the rapidly increasing medical understanding of the brain, and by the newfound ability to restart and/or sustain the cardiopulmonary system through CPR, defibrillators, respirators, and pacemakers. Along with these changes, the increasing viability of organ donation added an interesting pressure to the debate: to declare certain people with a breath and a pulse “dead,” and thus available for organ donation, could save the lives of others.11 The “President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research” presented Ronald Reagan in the summer of 1981 with a 177-page report called “Defining Death,” which proposed expanding the American legal definition of death, following the lead of a 1968 ad hoc committee of the Harvard Medical School, to include those with cardiopulmonary function (be it artificial or natural) who had suffered sufficiently severe and irreparable brain damage. The Uniform Determination of Death Act, passed in 1981,
specifies “irreversible cessation of all functions of the entire brain, including the brain stem.”
Our legal and medical definitions of death—like our sense of what it means to live—move to the brain. We look for death where we look for life.
The bulk of this definitional shift is by now long over, but certain nuances and more-than-nuances remain. For instance: Will damage to certain specific areas of the brain be enough to count? If so, which areas? The Uniform Determination of Death Act explicitly sidestepped questions of “neocortical death” and “persistent vegetative state”—questions that, remaining unanswered, have left huge medical, legal, and philosophical problems in their wake, as evidenced by the nearly decade-long legal controversy over Terri Schiavo (in a sense, over whether or not Terri Schiavo was legally “alive”).
It’s not my intention here to get into the whole legal and ethical and neurological scrum over death, per se—nor to get into the theological one about where exactly the soul-to-body downlink has been thought to take place. Nor to get into the metaphysical one about Cartesian “dualism”—the question of whether “mental events” and “physical events” are made up of one and the same, or two different, kinds of stuff. Those questions go deep, and they take us too far off our course. The question that interests me is how this anatomical shift affects and is affected by our sense of what it means to be alive and to be human.