The Most Human Human
Criticism as Compression
You can think of criticism as compression too: a work of literature must strain to survive and outlast its own marketing and its own reviews, which threaten, in a sense, to deliver a lossy compression of the book itself. Anything said about a piece of art enters into competition with the art itself.
People complain from time to time about folks who read the CliffsNotes to a book, or reviews or essays about a book, but don’t read the book itself. Hey, if the information density of Anna Karenina is low enough that a review 1 percent as long conveys 60 percent of the form and content “gist” of the book, then it’s Tolstoy’s fault. His readers are human beings with only twenty-eight thousand days or so separating birth and death. If they want to read the lossy gloss and move on, who can blame them?
Likewise for conceptual art: who needs to see a Duchamp toilet when you can hear about one so much faster and extract most of the experience from that? Conceptual art might be, for better or worse, (definable as) the art most susceptible to lossy compression.
Showing vs. Telling
“Show, don’t tell” is the maxim of many a creative writing workshop. Why is that? Well, for one, it’s information entropy. When we talk about a missing tooth, we can be led by that single image, in the right context, to imagine an entire bygone childhood era, an entire history of spousal abuse, or—as is the case in the chilling C. D. Wright poem “Tours”14—both at once. Whereas being told that a spouse has long been abused, or that a daughter is growing up, might not get us to imagine something as specific and vivid as the missing tooth.
But, as an argument for showing over telling, this line of thinking shouldn’t be allowed to become dogma; it’s an empirical question, ultimately. There are indeed times when the information entropy of telling exceeds that of showing. When we as writers or as speakers encounter them, we need to bend to the higher rule.
An author who has mastered this is Milan Kundera. When he needs to “say” something to the reader in one of his novels, he doesn’t construct an elaborate pantomime in which his characters, interacting with each other, subtly convey it: rather, he, Kundera, just steps in and says it. (“As I pointed out in Part One …”) How sublime! Imagine a street mime giving up on the exasperating charades and saying, simply, “I’m trapped in a box.”
Entropy and Genre
David Shields writes, “As soon as a book can be generically located, it seems to me for all intents and purposes dead … When I’m constrained within a form, my mind shuts down, goes on a sitdown strike, saying, ‘This is boring, so I refuse to try very hard.’ ” Generic might just be another term for low-entropy. In fact, low entropy may be what genre is—a kind of prototype or paradigm, a rutted wagon road through the Shannon Game. Roger Ebert observes that when an action hero comes under machine-gun fire, there is a drastically lower chance of him coming to harm than, say, if he’s attacked by knife. Most viewers subconsciously understand this. Indeed, any piece of art seems to invoke with its inaugural gestures a rather elaborate framework of expectations—by means of which its later gestures tend, on the whole, to be less and less surprising. The mind gradually sits down.
You might have noticed in my Shannon Game attempts that the beginnings of words tend to have higher entropy scores than the latter parts. Matt Mahoney’s research at the Florida Institute of Technology has shown that the best text-compression software appears to do better on the second half of a novel than the first. Does this suggest, I wonder, that entropy may be fractal? Do novels and films display the same spike-and-decline pattern that words do?
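Here, for the curious, is a rough sketch of that word-level claim in a few lines of Python; the bigram model, the add-one smoothing, and the corpus file are stand-ins of my own, not Mahoney's setup. Train on any text, then average the surprisal of each letter by its position in the word, and the downward-sloping ramp should appear.

```python
import math
from collections import Counter, defaultdict

# Any plain-text file will do; "corpus.txt" is a placeholder of my own.
text = " ".join(open("corpus.txt", encoding="utf-8").read().lower().split())
bigrams = Counter(zip(text, text[1:]))   # counts of adjacent character pairs
left = Counter(text[:-1])                # counts of each left-hand context

def surprisal(prev, ch):
    """-log2 P(ch | prev), with add-one smoothing so unseen pairs don't blow up."""
    return -math.log2((bigrams[prev, ch] + 1) / (left[prev] + len(left) + 1))

by_pos = defaultdict(list)
for word in text.split():
    prev = " "                           # the context before a word's first letter
    for pos, ch in enumerate(word):
        by_pos[pos].append(surprisal(prev, ch))
        prev = ch

# Average surprisal by position within the word: highest at position 0, then falling.
for pos in range(min(6, len(by_pos))):
    print(pos, round(sum(by_pos[pos]) / len(by_pos[pos]), 2))
```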
And for that matter—considering how comparatively bewildered infants are, how comparatively awestruck young children tend to be—does life?
Annie Dillard, in An American Childhood, explains her childhood thoughts about literature: “In fact, it was a plain truth that most books fell apart halfway through. They fell apart as their protagonists quit, without any apparent reluctance, like idiots diving voluntarily into buckets, the most interesting part of their lives, and entered upon decades of unrelieved tedium. I was forewarned, and would not so bobble my adult life; when things got dull, I would go to sea.”
I think our fairy tales prepare our children for this kind of existential panic about growing up. Nothing is more dispiriting than “And they all lived happily ever after,” which means, in information entropy terms, “And then nothing interesting or noteworthy ever happened to them again for the rest of their lives.” Or at the very least, “And then you can pretty much imagine what their forties, fifties, and sixties were like, blah, blah, blah, the end.” I don’t think it would be going too far to argue that these fairy tales sow the seeds of divorce. No one knows what to do after the wedding! Like an entrepreneur who assumed his company would have been bought by now, like an actor out of lines but aware that the cameras are still rolling … marriage, for people raised on Western fairy tales, has that same kind of eerie “Um … now what?” quality. “We just, ah, keep on being married, I guess?”
“No one ever asks, ‘How did you two stay together?’ Everyone always asks, ‘How did you two meet?’ ” a husband, Eric Hayot, laments on an episode of NPR’s This American Life. The answer to how they stayed together, Hayot has explained, “is the story of like struggle, and, pain, sort of passed through and fought through and overcome. And that’s—that’s a story you don’t tell in public.” Nor, it would seem, do you ask about it; even this very segment, ending on these very words, focuses on how he and his wife met. How will we learn?
As for art, the rare work that manages to keep up its entropy for its entire duration can be electrifying. Krzysztof Kieślowski’s Three Colors: White is a great example of a generically un-locatable film: it’s part comedy, part tragedy, part political movie, part detective story, part romance, part anti-romance. At no point do you sense the shape of what’s to come. This is the subtlest sort of radicalism—not to push or break the envelope, necessarily, but to force a sort of three-card monte where one never becomes sure which envelope one’s in.15
Douglas Hofstadter muses in Gödel, Escher, Bach, “Perhaps works of art are trying to convey their style more than anything else.” I think that when we’re reading a book or watching a film, we wonder maybe not so much “Will our hero be rescued?” as “Is this the kind of story where our hero will be rescued?” Perhaps we’re interested not so much in the future—what will happen, what letter comes next—as in the present (perfect progressive): what has been happening, what word have I been spelling.
Excerpt
Movie previews—I love watching movie previews. Highest entropy you’ll get in the whole night. Each clip gives you a whole world.
Just as “ragged claws” are synecdoche for a crustacean, so are anecdotes synecdoche for a life. Poetry reviewers never hesitate to include quotations, samples, but fiction reviewers seem to prefer plot synopsis as a way to give the reader a lossy “thumbnail” of what to expect from the book. Two different lossy compression strategies, each with its own compression artifacts. Try it yourself, as an experiment: try a week of saying to your friends, “Tell me what you did this week,” and then a week of saying, “Tell me a story of something that happened to you this week.” Experiment with which lossy methods work better.
Entropy isn’t all about such emotionally detached things as hard-drive space and bandwidth. Data transfer is communication. Surprisal is experience. In the near-paradoxical space between the size and capacity of a hard disk lies information entropy; in the space between the size and capacity of a lifetime lies your life.
The Entropy of Counsel
Entropy suggests that we gain the most insight on a question when we take it to the friend, colleague, or mentor of whose reaction and response we’re least certain.
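In Shannon's terms (a gloss of mine, not a formula from the text): the insight we can expect from an answer is just the entropy of our uncertainty about it, and that entropy is largest when every possible answer seems equally plausible.

```latex
% Expected information gained from hearing the answer = entropy of our prior over it.
% It is maximal when all n possible answers look equally likely to us.
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \;\le\; \log_2 n,
\qquad \text{with equality exactly when } p_i = \tfrac{1}{n} \text{ for all } i.
```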
The Entropy of the Interview
And it suggests, perhaps, reversing the equation, that if we want to gain the most insight into a person, we should ask the question of whose answer we’re least certain.
I remember watching Oprah on September 11, 2007; her guests were a group of children who had each lost a parent on September 11, 2001:
OPRAH: I’m really happy that you all could join us at this time of remembrance. Does it ever get easier—can I ask anybody? Does it ever—
She asks, but the question contains its own response. (Who would dare volunteer, “Yeah, maybe a little,” or, “Very, very gradually”?) The question itself creates a kind of moral norm, suggesting—despite evidence to the contrary, in fact16—that for a normal person the grief could not have diminished. The coin she’s flipping feels two-headed. I grew agitated as the interview went on:
OPRAH: Do you feel like children of 9/11? Do you feel like that? Do you feel like when somebody knows, Shalisha, that you lost a loved one, that you now have suddenly become a 9/11 kid?
SHALISHA: I do. I do believe that.
OPRAH: Well, you know, I said and I’ve said many times on my show over the years, there isn’t a day that goes by that I don’t, at some point, think about what happened that day, although I didn’t lose anybody that I knew. And opening this show, I said, you all live with it every day. It never goes away, does it?
AYLEEN: No.
What else can you possibly say to a question like that? First of all, I sincerely doubt that Oprah literally thought about the September 11 attacks every day for six years. Second, how do you expect someone to give an honest answer when you’ve prefaced the question like that? The guests are being told what they feel, not asked.
OPRAH: And isn’t it harder this time of the year?
KIRSTEN: It’s more difficult around this time, I think.
Disappointed, I clicked away. You could practically edit the children’s responses out and have the same interview.
Truth be told, I know from reading the transcripts that Oprah’s questions do become a little more flexible, and the children do start to open up (the transcripts alone choked me up), but it frustrated me, as a viewer, to see her setting up such rigid containers for their responses. I want to withhold judgment in this particular case: maybe it was a way to ease a group of young, grieving, nervous guests into the conversation—maybe that’s even the best tactic for that kind of interview. But on the other hand, or at least in another situation, it could come off as an unwillingness to really get to know a person—asking precisely that to which one is most confident of the answer. As a viewer, I felt as if my ability to understand these children was being held back by the questions—as was, I thought, Oprah’s. Did she even want to know what the kids really felt?
When we think interview, we think of a formalized situation, a kind of assessment or sizing up. But etymologically the word means reciprocal seeing. And isn’t that the aim of all meaningful conversation?
I remember registering a shock upon hitting the passage in Zen and the Art of Motorcycle Maintenance where Robert Pirsig says, “ ‘What’s new?’ is an interesting and broadening eternal question, but one which, if pursued exclusively, results only in an endless parade of trivia and fashion, the silt of tomorrow. I would like, instead, to be concerned with the question ‘What is best?,’ a question which cuts deeply rather than broadly, a question whose answers tend to move the silt downstream.” I realized: even the basic patterns of conversation can be interrogated. And they can be improved. Information entropy gives us one way in.
Just a few months ago I fell into this trap; recalling the Pirsig quotation got me out. I was detachedly roaming the Internet, but there was nothing interesting happening in the news, nothing interesting happening on Facebook … I grew despondent, depressed—the world used to seem so interesting … But all of a sudden it dawned on me, as if the thought had just occurred to me, that much of what is interesting and amazing about the world did not happen in the past twenty-four hours. How had this fact slipped away from me? (Goethe: “He who cannot draw on three thousand years is living hand to mouth.”) Somehow, I suspect, the Internet is causing an entire demographic to lose sight of this crucial point. Anyway, I read some Thoreau and some Keats and was much happier.
Ditto for the personal sphere. Don’t make the mistake of thinking that when “So, what else is new?” runs out of steam you’re fully “caught up” with someone. Most of what you don’t know about them has little if anything to do with the period between this conversation and your previous one.
Whether in speed dating, political debate, a phone call home, or dinner table conversation, I think information entropy applies. Questions as wide open and flat as the uniform distribution. We learn someone through little surprises. We can learn to talk in a way that elicits them.
Pleasantries are low entropy, biased so far that they stop being an earnest inquiry and become ritual. Ritual has its virtues, of course, and I don’t quibble with them in the slightest. But if we really want to start fathoming someone, we need to get them speaking in sentences we can’t finish.17
Lempel-Ziv; the Infant Brain; Redefining “Word”
In many compression procedures—most famously, one called the Lempel-Ziv algorithm—bits that occur together frequently get chunked together into single units, which are called words. There might be more to that label than it would seem.
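To make the chunking concrete, here is a minimal sketch in the spirit of LZ78, one member of the Lempel-Ziv family (the variant and the example string are my choices, an illustration rather than the exact procedure): whenever a stretch of text outruns the dictionary, the slightly longer stretch is learned as a new “word.”

```python
def lz78_chunks(text):
    """Greedily carve text into the longest chunk already seen, plus one new letter."""
    dictionary = {"": 0}            # phrase -> index of when it was learned
    chunks, current = [], ""
    for ch in text:
        if current + ch in dictionary:
            current += ch           # keep extending a phrase we already know
        else:
            dictionary[current + ch] = len(dictionary)   # learn a new, longer "word"
            chunks.append(current + ch)
            current = ""
    if current:                     # flush whatever is left at the end
        chunks.append(current)
    return chunks

print(lz78_chunks("abababababab"))  # ['a', 'b', 'ab', 'aba', 'ba', 'bab']
```

Notice how the chunks get longer as the repetition piles up; frequent co-occurrence is what earns a sequence its place in the dictionary.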
It’s widely held by contemporary cognitive scientists that infants learn the words of their native language by intuiting which sounds tend, statistically, to occur together most often. I mentioned earlier that Shannon Game values tend to be highest at the starts of words, and lower at the ends: meaning that intra-word letter or syllable pairs have significantly lower entropy than inter-word pairs. This pattern may be infants’ first toehold on English, what enables them to start chunking their parents’ sound streams into discrete segments—words—that can be manipulated independently. Infants are hip to information entropy before they’re hip to their own names. In fact, it’s the very thing that gets them there. Remember that oral speech has no pauses or gaps in it—looking at a sound-pressure diagram of speech for the first time, I was shocked to see no inter-word silences—and for much of human history neither did writing. (The space was apparently introduced in the seventh century for the benefit of medieval Irish monks not quite up to snuff on their Latin.) This Shannon entropy spike-and-decay pattern (incidentally, this is also what a musical note looks like on a spectrograph), this downward sloping ramp, may be closer to the root of what a word is than anything having to do with the spacebar.18
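A toy version of that statistical toehold, with a made-up corpus and a cutoff of my own choosing: learn only the letter-to-letter transitions inside known words, then break an unsegmented stream wherever the next letter becomes hard to predict.

```python
from collections import Counter

# A made-up "parental" corpus; only transitions inside words are counted,
# mimicking the idea that within-word sounds co-occur more reliably.
corpus = "the dog saw the cat and the cat saw the dog".split()
pair_counts = Counter()
left_counts = Counter()
for word in corpus:
    for a, b in zip(word, word[1:]):
        pair_counts[a, b] += 1
        left_counts[a] += 1

def segment(stream, cutoff=0.3):
    """Insert a break wherever P(next letter | this letter) drops below the cutoff."""
    out = [stream[0]]
    for a, b in zip(stream, stream[1:]):
        p = pair_counts[a, b] / left_counts[a] if left_counts[a] else 0.0
        out.append(" " if p < cutoff else "")
        out.append(b)
    return "".join(out)

print(segment("thedogsawthecat"))   # -> "the dog saw the cat"
```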
And we see the Lempel-Ziv chunking process not just in language acquisition but in language evolution as well. From “bullpen” to “breadbox” to “spacebar” to “motherfucker,” pairings that occur frequently enough fuse into single words.19 (“In general, permanent compounds begin as temporary compounds that become used so frequently they become established as permanent compounds. Likewise many solid compounds begin as separate words, evolve into hyphenated compounds, and later become solid compounds.”20) And even when the fusion isn’t powerful enough to close the spacebar gap between the two words, or even to solder it with a hyphen, it can often be powerful enough to render the phrase impervious to the twists of grammar. Certain phrases imported into English from the Norman French, for example, have stuck so closely together that their inverted syntax never ironed out: “attorney general,” “body politic,” “court martial.” It would seem that these phrases, owing to the frequency of their use, simply came to be taken, subliminally, as atomic, as—internal space be damned!—single words.
So, language learning works like Lempel-Ziv; language evolution works like Lempel-Ziv—what to make of this strange analogue? I put the question to Brown University cognitive scientist Eugene Charniak. His answer: “Oh, it’s much stronger than just an analogue. It’s probably what’s actually going on.”
The Shannon Game vs. Your Thumbs: The Hegemony of T9
I’m guessing that if you’ve ever used a phone to write words—and that is ever closer to being all of us now21—you’ve run up against information entropy. Note how the phone keeps trying to predict what you’re saying, what you’ll say next. Sound familiar? It’s the Shannon Game.
So we have an empirical measure, if we wanted one, of entropy (and maybe, by extension, “literary” value): how often you disappoint your phone. How long it takes you to write. The longer, arguably, and the more frustrating, the more interesting the message might be.
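A back-of-the-envelope version of that measure, with a deliberately crude stand-in for the phone (the training phrases and function names here are my own invention): predict each next word as whatever most often followed the previous word, then count how often the actual message disappoints the guess.

```python
from collections import Counter, defaultdict

def train(training_text):
    """Toy predictor: for each word, remember the word that most often followed it."""
    words = training_text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return {a: counts.most_common(1)[0][0] for a, counts in follows.items()}

def disappointment_rate(predictor, message):
    """Fraction of word-to-word steps the 'phone' would have guessed wrong."""
    words = message.lower().split()
    misses = sum(1 for a, b in zip(words, words[1:]) if predictor.get(a) != b)
    return misses / max(len(words) - 1, 1)

phone = train("i am on my way home . i am on my way to work .")
print(disappointment_rate(phone, "i am on my way home"))        # 0.0, a stock phrase
print(disappointment_rate(phone, "i am beset by ragged claws")) # 0.8, more "literary"
```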
As much as I rely on predictive text capabilities—sending an average of fifty iPhone texts a month, and now even taking down writing ideas on it22—I also see them as dangerous: information entropy turned hegemonic. Why hegemonic? Because every time you type a word that isn’t the predicted word, you have to (at least on the iPhone) explicitly reject its suggestion or else it’s (automatically) substituted. Most of the time this happens, I’m grateful: it smooths out typos made by mis-hitting the keyboard, which allows for incredibly rapid, reckless texting. But there’s the sinister underbelly—and this was just as true on my previous phone, a standard numerical keypad phone with the T9 prediction algorithm on it. You’re gently and sometimes less-than-gently pushed, nudged, bumped into using the language the way the original test group did. (This is particularly true when the algorithm doesn’t adapt to your behavior, and many of them, especially the older ones, don’t.) As a result, you start unconsciously changing your lexicon to match the words closest to hand. Like the surreal word market in Norton Juster’s Phantom Tollbooth, certain words become too dear, too pricey, too scarce. That’s crazy. That’s no way to treat a language. When I type on my laptop keyboard into my word processor, no such text prediction takes place, so my typos don’t fix themselves, and I have to type the whole word to say what I intend, not just the start. But I can write what I want. Perhaps I have to type more keystrokes on average than if I were using text prediction, but there’s no disincentive standing between me and the language’s more uncommon possibilities. It’s worth it.