The End of Absence: Reclaiming What We've Lost in a World of Constant Connection
As the Web grew, my browsers began to bloat with bookmarked Web sites. And as search engines matured, I stopped bothering even with bookmarks; I soon relied on AltaVista, HotBot, and then Google to help me find—and recall—ideas. My meta-memories, my pointers to ideas, started being replaced by meta-meta-memories, by pointers to pointers to data. Each day, my brain fills with these quasi-memories, with pointers, and with pointers to pointers to pointers, each one a dusty IOU sitting where a fact or idea should reside.
As for me, I’ve grown tired of using a brain that’s full of signposts only, a head full of bookmarks and tags and arrows that direct me to external sources of information but never to the information itself. I’d like to know for myself when La Bohème was composed and what Jung actually said about dreams and where exactly Uzbekistan may be. I want a brain that can think on its own, produce its own connections from a personalized assortment of facts. Because it seems that the largest database in the world—stuffed with catalog upon catalog of information—still lacks the honed narrative impulse of a single human mind.
• • • • •
In ancient Rome and Athens, individuals began employing the highly personalized “method of loci” to draw more external information inside their heads and keep it ordered there. Essentially an elaborate mnemonic device, the method involves the construction in one’s mind of a detailed building—sometimes called a “memory palace”—inside which memories can then be “placed.” If you have a thorough memory of the house you grew up in, for example, you can place a series of memories on the doorstep, along the staircase, and inside your bedroom. As you mentally travel through the fixed image of your childhood home, pieces of furniture or other details will then trigger the stored memory. Even in antiquity they knew the value of pointers. Memory champions today still swear by the method.
The attempt to collect a lifetime’s worth of information into an organized and manageable interior space appeared again in a Renaissance endeavor—the cabinet of curiosities. Ferdinand II, Archduke of Austria, kept an elaborate collection of painted portraits depicting people with bizarre physical deformities in his Wunderkammer; the cabinet of Russian czar Peter the Great housed deformed human and animal fetuses and other biological rarities. Cabinets of curiosities were for the corporeal world what memory palaces were for the mind.
What’s interesting to me about cabinets of curiosities and the method of loci is that they are both attempts—devised when the idea of memory existing in “the cloud” would have seemed preposterous—to pull a world’s worth of material into a small, navigable space, one that is privately owned. One can imagine the necessary memory palaces growing larger and larger with each generation, wings and turrets getting stapled onto the sides as we attempt to hold ever more preposterous loads of information. Similarly, the cabinets of curiosities buckle beneath the weight of our discoveries. Both endeavors, though, are very different from the dematerialized and unholdable “cloud” memories championed by Wikipedia and Google. To remember, goes the earlier assumption, you must first digest the outside world and carry it around with you.
This assumption pervaded our thinking until very recently. Consider the case of Sherlock Holmes, who described his own prodigious (and pre-Internet) memory in his debut appearance, an 1887 novel called A Study in Scarlet.
I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it. Now the skillful workman is very careful indeed as to what he takes into his brain-attic.
Holmes may have taken this approach too far in his own life. (We learn in the first few chapters that the genius sleuth remains willfully ignorant of fine literature and even the Copernican revelation that the earth goes around the sun—he feels that both subjects would “clutter” his mind.) But the point is that Sherlock Holmes curates his memory. He’s describing something very like the method of loci here. In both cases, memory is seen as a physical, aesthetically defined space. And in both cases, it is assumed that our job is to choose, to select what is worthy of placement in the palace of our memory. Human minds (Sherlock Holmes excepted) may be messy places and full of error, but it’s the honing, the selection of what’s worth remembering, that makes a mind great. Our sense of self is derived in part from all the material we carve away. The limits of the human body, and the human mind, too, are the borders that define us.
• • • • •
Today, the urge to outdo human memory is expressed in our abiding love of computer records.
“Lifeloggers,” who account for their comings and goings in online reports, now find that they can enjoy “total recall” thanks to programs like Timehop, an app that mines information from one year prior to today’s date and tells users where they went, what they were listening to, and how they were doing on this, the anniversary of “anytime.” Data is culled from users’ Facebook accounts, Twitter accounts, and Instagram accounts, among others, to create a digital reminiscence that rough and fuzzy human memory simply can’t compare with. It is, in the company’s own (vaguely morbid) words, “a time capsule of you.”
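At its core, the retrieval Timehop describes is simple date arithmetic: take today’s calendar date, step back exactly one year, and surface whatever the user posted then. The sketch below is only my own illustration of that idea in Python; the record format and the notion of a pre-fetched list of posts are assumptions for the example, not Timehop’s actual API.

```python
from datetime import date

def one_year_ago(today: date) -> date:
    """This calendar date, one year back (Feb 29 falls back to Feb 28)."""
    try:
        return today.replace(year=today.year - 1)
    except ValueError:
        return today.replace(year=today.year - 1, day=28)

def build_capsule(posts: list[dict], today: date | None = None) -> list[dict]:
    """Collect every post whose timestamp falls on this date last year.

    Each post is assumed to be a dict with a datetime under 'created_at',
    already pulled from services like Facebook, Twitter, and Instagram.
    """
    today = today or date.today()
    target = one_year_ago(today)
    return [p for p in posts if p["created_at"].date() == target]
```

Run daily over a user’s accumulated posts, a loop like this would produce a fresh “time capsule” each morning, which is the experience the app is selling.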
I spoke with the start-up’s young founder and CEO, Jonathan Wegener, a Columbia grad (double major in sociology and neuroscience); he lives in Brooklyn, where Timehop’s HQ is located. Wegener made short work of my skepticism. But he did so by defining memory in terms of maximal recall potential: “If we could remember everything, we wouldn’t have books. Technology is always about helping us out.” And, quickly, the miracle of such enormous computer recall becomes a miracle of computer organization. As Wegener put it, “If I’ve got thirty thousand digital photos that I’ve taken, there’s no way I’m going to sort through them without some help.”
There have been negative responses to the way his invention marshals human memory. Search Twitter for “Timehop” and you’ll find people asserting that “Timehop makes me hate myself” or “Timehop made me cry” because users’ pasts are constantly thrown up at them with a glaring level of fidelity that human memory might have softened. Oh God, moans the unsuspecting user, I really wore that? I said that? It’s common for Wegener to receive requests for an algorithm that would weed out negative content from Timehop’s capsules, but thus far he hasn’t gotten around to it.
The cringe effect was most pronounced in Timehop’s (since abolished) text message feature. For a brief stint, the app would regurgitate year-old text messages for users, in addition to the photos and tweets. These proved too personal, however. Only 2 to 3 percent of users made use of the text message software, and those who did, says Wegener, often hadn’t thought about whom they were texting one year ago: horrible ex-boyfriends and horrible ex-girlfriends. “People just weren’t comfortable with it,” he told me. “They’d contact us in a hurry and want the feature disconnected.”
Wegener himself is deeply committed to lifelogging and feels that “at a deeper level it makes us feel we’re getting more out of life. We’re fighting mortality. If we write everything down, it’ll stay fresh, you know? I mean, we’re being pulled through time against our will toward death. But this can make us feel like we lived.”
He also sees his creation as a potential bonding agent for friends and families. “There’s a subtlety to Timehop that a lot of people don’t pick up on,” he told me. “All our hundreds of thousands of users are reliving the same day at the same time. It’s a movie theater experience—a very collective experience where the record is playing and you can’t stop it. So if your family had a barbecue a year ago, you’re all going to relive that experience at the same time.” But only, of course, if one’s entire family is signed into Timehop. As a terminally forgetful person, I accept that part of me desires the assured and algorithmic narrative such software promises. How much of my hole-filled, personally generated narrative would I do away with if I could replace it with such a happily agreed-upon history?
It would be a wonderful thing if our minds could source such information for us and synchronize our histories so effortlessly. But without our gadgets, the vast majority of our lives actually slips away, never to be heard from again. Sometimes this is a deeply frustrating fact of life. Think of how much we live and how much we lose.
An app like Timehop, meanwhile, doesn’t just remind us what we’ve done, it encourages us to step out of the present and devote more time and energy toward the recording process. Wegener’s team wanted to see what happened to social media activity after users signed up for Timehop, so they monitored usage of Foursquare, an app that lets people “check in” to physical locations around town (a Starbucks, a department store, a restaurant), creating a record of one’s whereabouts over time. The behavior of twenty-two thousand Foursquare users was mapped out—incorporating three months of activity before signing onto Timehop and three months of activity afterward. Fourteen percent of users began checking in twice as often; 39 percent more users began adding comments and photos to their check-ins; check-ins overall bumped up 9 percent. The company’s conclusion: “Timehop makes users better.” When users understood that they were creating not just abstract records but fodder for future reminiscences that would be automatically retrieved in a year’s time, they became more involved and invested in the lifelogging process. Wegener had tapped into a major social media truth: We do it because we’re thinking of our own future as a bundle of anticipated memories. When he and I spoke about his own usage of Timehop, Wegener managed to boil things down to a simple core: “It reaffirms me.”
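To make the arithmetic behind those figures concrete, here is a minimal sketch, in Python, of the kind of before-and-after comparison the team describes. The per-user check-in counts and the sets of annotating users are invented placeholders for illustration, not Timehop’s or Foursquare’s actual data.

```python
def before_after_summary(before: dict[str, int], after: dict[str, int],
                         annotators_before: set[str],
                         annotators_after: set[str]) -> dict[str, float]:
    """Compare three months of check-ins before and after Timehop signup.

    `before` and `after` map user IDs to check-in counts; the annotator sets
    hold users who added comments or photos to check-ins in each window.
    """
    users = before.keys() & after.keys()
    doubled = sum(1 for u in users if before[u] > 0 and after[u] >= 2 * before[u])
    total_before = sum(before[u] for u in users)
    total_after = sum(after[u] for u in users)
    return {
        "pct_users_checking_in_twice_as_often": 100 * doubled / len(users),
        "pct_change_in_annotating_users":
            100 * (len(annotators_after) - len(annotators_before)) / len(annotators_before),
        "pct_change_in_total_checkins": 100 * (total_after - total_before) / total_before,
    }
```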
Is there a nobler reason to reminisce? When I consider the state of my brain’s dusty mechanisms, by contrast, my supposedly miraculous neurons feel like a broken machine, incapable of “reaffirming me” the way Wegener’s app can. I’m a wimp on the beach and my own phone is the jock kicking sand in my face. Is there value, still, in a human memory when a computer’s can surpass it so effortlessly?
How much abler, how much more proficient, seems the miracle of computerized recall. What a relief to rely on the unchanging memory of our machines.
Albert Einstein said we should never memorize anything that we could look up. That’s practical and seemingly good advice. When I off-load my memory to a computer system, I am freed up, I cast off a certain mental drudgery. But what would Einstein have said if he knew how much of our lives, how much of everything, can be looked up now? Should we ever bother to memorize poetry, or names, or historical facts? What utility would there be in the hazy results? Fifty years from now, if you have an expansive, old-fashioned memory—if you can recite The Epic of Gilgamesh, say—are you a wizard or a dinosaur?
• • • • •
The more we learn about human memory, the less it looks like a computer’s. As Henry Molaison’s experience first showed us, our memories are not straightforward “recall” systems, but strange and morphing webs.17 Take, for example, your memory of this word:
Inglenook
An “inglenook” is a cozy corner by the fire. It calls up a pleasant scenario, and the word itself is one of the more beautiful words in the English language, which is why I’ve selected it as something we may want to have stored in our heads. How might the brain accomplish this?
Assuming you are looking at this text (and not listening to it), the first step will be light bouncing off the paper or tablet that you’re holding and striking your retinas; from there, signals travel along your optic nerves, out the back of your eyeballs, and onto the primary visual cortex at the rear of your head. There, individual neurons will fire (like dots of color in a pointillist painting or perhaps a Lite-Brite toy) to correspond with the specific look of the word: “Inglenook.” Many neurons, firing together, create a composite image of the word. This sensory information (the composite image) then travels through a series of cortical regions toward the frontal part of the brain and from there to the hippocampus, which integrates that image and various other sensory inputs into a single idea: “Inglenook.” The original firing of neurons associated with “Inglenook” may result in a moment of fluttering understanding in your consciousness, an idea that, in itself, lives for only a matter of seconds—this is the now of thought, the working memory that does our active thinking. But that brief firing seems to leave behind a chemical change, which has been termed “long-term potentiation.” The neurons that have recently fired remain primed to fire again for a matter of minutes. So if a writer decides to fire your “Inglenook” neurons six times in a row (as I now have), the neurons your brain has associated with that word will have become more and more likely to produce real synaptic growth—in other words, you might remember it for longer than it takes you to read this page. (Literal repetition isn’t necessary for memories to be formed, of course; if a singular event is important enough, you’ll rehearse it to yourself several times and burn it into your mind that way.)
If your hippocampus, along with other parts of your frontal cortex, decides that “Inglenook” is worth holding on to (and I hope it will), then the word and its meaning will become part of your long-term memory. But the various components of “Inglenook” (its sound, its look, and all the associations you have already made with the experience of reading about “Inglenook”) will be stored in a complex series of systems around your brain, not in a single folder.
Next week you may find yourself with an accidental time snack, waiting for the kettle to boil, and in that moment, perhaps the word Inglenook will float back into your consciousness (because you’ll be thinking about a cozy place by the fire in which to enjoy that tea). But when it does so, the sound—“Inglenook”—will come from one part of your brain, while the look of the word—“Inglenook”—will float in from another; the way you feel about this book will be recalled from some other region; and so on. These various scraps of information will be reassembled—by what means, we know not—to create the complete idea of “Inglenook.” And (with so many moving parts, it’s inevitable) each time you reconstruct “Inglenook,” its meaning will have altered slightly; something will be added, something taken away.18 Our memories, as the psychologist Charles Fernyhough recently wrote in Time magazine, “are created in the present, rather than being faithful records of the past.” Or as one of the world’s leading memory experts, Eric Kandel, has put it: “Every time you recall a memory, it becomes sensitive to disruption. Often that is used to incorporate new information into it.”
The same notion came up again when I had a conversation with Nelson Cowan, Curators’ Professor of Psychology at the University of Missouri, and a specialist in memory, who quoted Jorge Luis Borges for me:
Memory changes things. Every time we remember something, after the first time, we’re not remembering the event, but the first memory of the event. Then the experience of the second memory and so on.
“He got that basically right,” Cowan told me. “There’s a process called ‘reconsolidation,’ whereby every retrieval of memory involves thinking about it in a new way. We edit the past in light of what we know now. But we remain utterly unaware that we’ve changed it.”
Memory is a lived, morphing experience, then, not some static file from which we withdraw the same data time and again. Static memories are the domain of computers and phone books, which, says Cowan, “really bear no similarity to the kind of memory that humans have.” He seemed provoked by the comparison, in fact, and this struck me because so many other academics I’d spoken with had happily called their computers “my off-loaded memory,” without considering in the moment how very different the two systems are. Perhaps we’re keen to associate ourselves with computer memories because the computer’s genius is so evident to us, while the genius of our own brain’s construction remains so shrouded. I complained to Cowan that current descriptions of human memory—all those electrical impulses traveling about, “creating” impressions—hardly explain what’s actually happening in my head. And he said only, “You’d be surprised how little we know.”
What we do know is that human memory appears to be a deeply inventive act. Every time you encounter the word Inglenook from now on, you may think that you recall this moment. But you will not.
• • • • •
Charlie Kaufman’s film Eternal Sunshine of the Spotless Mind—a fantasy in which heartbroken lovers erase each other from their memories—was based on very real research by McGill University’s star neuroscientist Karim Nader, whose work on the nature of “reconsolidation” has shown us how dramatically vulnerable our memories become each time we call them up. As far back as 2000 (four years before the Sunshine film came out), Nader was able to show that reactivated fear-based memories (i.e., memories we’re actively thinking about) can be altered and even blocked from being “re-stored” in our memory banks with the introduction of certain protein synthesis inhibitors, which disrupt the process of memory consolidation. In other words, it’s the content that’s pulled into our working memory (the material we actively are ruminating on) that’s dynamic and changeable. Today, this understanding grounds our treatment of post-traumatic stress victims (rape survivors, war veterans); like the lovers in Sunshine, victims of trauma have the chance to rewire their brains.
I wonder if such measures may become more and more appealing as the high fidelity of computers keeps us from forgetting that which our minds might have otherwise dropped into the abyss. Steve Whittaker, a psychology professor at the University of California, Santa Cruz, has written on the problem of forgetting in a digital age. Interviews with Sunshine-esque lonely hearts convinced him that the omnipresent digital residue of today’s relationships—a forgotten e-mail from three years ago, a tagged Facebook photo on someone else’s wall—could make the standard “putting her out of your mind” quite impossible. In a 2013 paper (coauthored with Corina Sas of Lancaster University), he proposes a piece of “Pandora’s box” software that would automatically scoop up all digital records of a relationship and wipe them from the tablet of human and computer memories both.19 Again, we find ourselves so enmeshed that we must lean on more technology to aid us through a technologically derived problem.
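Whittaker and Sas describe that software only in outline. As a purely hypothetical illustration of the idea, a crude local version might sweep a store of records and discard anything involving a given person, as in the Python sketch below; every field name here is invented for the example, not taken from their paper.

```python
def purge_person(records: list[dict], person: str) -> list[dict]:
    """Return only the records that do not involve `person`.

    A record is assumed to be a dict with optional 'participants', 'tags',
    and 'text' fields (e-mails, photos, and messages collapsed into one shape).
    """
    def involves(rec: dict) -> bool:
        return (person in rec.get("participants", [])
                or person in rec.get("tags", [])
                or person.lower() in rec.get("text", "").lower())
    return [rec for rec in records if not involves(rec)]
```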