The real crime here is not simply the indignity suffered by our rejected talk partner, but the way we so easily allow the sanctity of our moment to be undermined. This reflex assumes that kairos has no value—that if there is a moment of opportunity to be seized, that moment will break into our flow from the outside, like a pop-up ad on the Web. We lose the ability to imagine opportunities emerging and excitement arising from pursuing whatever we are currently doing, as we compulsively anticipate the next decision point.
Clay Shirky correctly distinguishes this problem from the overused term “information overload,” preferring instead to call it “filter failure.” In a scarce mediaspace dominated by books, printing a text meant taking a financial risk. The amount of information out there was limited by the amount of money publishers and advertisers were willing to spend. Now that information can be generated and distributed essentially for free, the onus is upon the receiver to filter out or even stop the incoming traffic. Even though each incoming message seems to offer more choices and opportunities, its very presence demands a response, ultimately reducing our autonomy.
Once we make the leap toward valuing the experience of the now and the possibilities of kairos, we can begin to apply some simple filters and mechanisms to defend it. We can set up any cell phone to ring or interrupt calls only for family members; we can configure our computers to alert us to only the incoming emails of particular people; and we can turn off all the extraneous alerts from everything we subscribe to. Unless we want our entire day guided by the remote possibility that a plane may crash into our office building, we need to trust that we can safely proceed on the assumption that it won’t.
While there is tremendous value in group thinking, shared platforms, and networked collaboration, there is also value in a single mind contemplating a problem. We can defend our access to our personal kairos by letting the digital care for the chronos. Email lives outside time and can sit in the inbox until we are ready to read it. This should not be guilt-provoking. The sender of the email is the one who relegated this missive to the timeless universe. Once sent, it is no longer part of our living, breathing, cycling world of kairos but of the sequential realm of chronos. Email will form a stack, just like the stacks of processes in a computer program, and wait until we open it.
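The stacking metaphor translates almost literally into code. Here is a minimal, hypothetical sketch in Python (the inbox, function names, and sample messages are invented for illustration, not anything the book specifies) of letting messages pile up outside our attention and handling the whole pile in one deliberate sitting:

```python
# Illustrative only: messages accumulate without interrupting anyone,
# and are processed in a single batch at a time of our choosing.

inbox = []  # the waiting pile of chronos, outside lived time


def receive(message):
    """Arriving mail just joins the pile; it interrupts no one."""
    inbox.append(message)


def process_batch():
    """At a moment we choose, work through the whole pile at once."""
    while inbox:
        message = inbox.pop()  # last in, first out, like a stack of papers
        print(f"Handling: {message}")


# Messages arrive throughout the day...
receive("Order #101: three lavender candles")
receive("Question about shipping times")

# ...and are handled in one sitting, on our schedule.
process_batch()
```

Whether the pile behaves as a stack or a queue hardly matters; the point is that nothing in it gets to interrupt the present moment.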
When I visit companies looking to improve their digital practices, I often suggest office workers attempt not to answer or check on email for an entire hour—or even two hours in a row. They usually reel in shock. Why is this so hard for so many of us? It’s not because we need the email for our productivity, but because we are addicted to the possibility that there’s a great tidbit in there somewhere. Like compulsive gamblers at a slot machine rewarded with a few quarters every dozen tries, we are trained to keep opening emails in the hope of a little shot of dopamine—a pleasant ping from the world of chronos. We must retrain ourselves instead to see the reward in the amount of time we get to spend in the reverie of solo contemplation or live engagement with another human being. Whatever is vibrating on the iPhone just isn’t as valuable as the eye contact you are making right now.
A friend of mine makes her living selling homemade candles through the craft site etsy.com. As her business got more successful, more orders would come through. By habit, she stopped whatever she was doing and checked her email every time a new message dinged through. If it was an order, she opened it, printed it out, and filled it before returning to her work melting wax, mixing scents, and dipping her wicks. Her routine became broken up and entirely un-fun, until it occurred to her to let the emails stack up all day, and then process them all at once. She used an automatic reply to all incoming orders, giving customers the instant feedback they have come to expect, but kept her flesh-and-blood candle maker self insulated from the staccato flow of orders. At 3 p.m. each day, her reward was to go to the computer and see how many orders had come in since morning. She had enough time to pack them all at once while listening to music, and then take them to the post office before closing. She achieved greater efficiency while also granting herself greater flow.
The digital can be stacked; the human gets to live in real time. This experience is what makes us creative, intelligent, and capable of learning. As science and innovation writer Steven Johnson has shown, great ideas don’t really arrive in sudden eureka moments; they emerge after long, steady slogs through problems.31 They are slow, iterative processes. Great ideas, as Johnson explained it to a TED audience, “fade into view over long periods of time.” For instance, Charles Darwin described his discovery of evolution as a eureka moment that occurred while he was reading Malthus on a particular night in October of 1838. But Darwin’s notebooks reveal that he had the entire theory of evolution long before this moment; he simply hadn’t fully articulated it yet.
As Johnson puts it, “If you go back and look at the historical record, it turns out that a lot of important ideas have very long incubation periods. I call this the ‘slow hunch.’ We’ve heard a lot recently about hunch and instinct and blink-like sudden moments of clarity, but in fact, a lot of great ideas linger on, sometimes for decades, in the back of people’s minds. They have a feeling that there’s an interesting problem, but they don’t quite have the tools yet to discover them.” Solving the problem means being in the right place at the right time—available to the propitious moment, the kairos. Perhaps counterintuitively, protecting what is left of this flow from the pressing obligation of new choices gives us a leg up on innovation.
Extracting digital processes from our organic flow not only creates the space for opportune timing to occur, it also helps prevent us from making inopportune gaffes. Like the famous television sketch where Lucy frantically boxes chocolates on the assembly line, we attempt to answer messages and perform tasks at the rate they come at us. And like Lucy, we end up jamming the wrong things in the wrong places. Gmail gives us a few seconds to recall messages we may have sent in error, but this doesn’t stop us from replying hastily to messages that deserve our time, and creating more noise than signal.
Comments sections are filled with responses from people who type faster than they think and who post something simply because they know they will probably never have time to find the discussion again. The possibility that someone might actually link to or retweet a comment leads people to make still more, turning posting into just as much a compulsion as opening email messages. As our respected leaders post their most inane random thoughts to our Twitter streams, we begin to wonder why they have nothing better to do with their time, or with ours. When an actor with the pop culture simplicity of Ashton Kutcher begins to realize that his unthinking posts can reflect negatively on his public image,32 it should give the rest of us pause. But we would need to inhabit kairos for at least a moment or two in order to do that.
The result is a mess in which we race to make our world conform to the forced yes-or-no choices of the digiphrenic.
I once received an email from a college student in Tennessee who had been recruited by a political group to protest against leftist professors. Since I was scheduled to speak at the university in a month, she had studied my website for over ten minutes, trying to figure out if I was a leftist. After perusing several of my articles, she was still unable to determine exactly which side of the political spectrum I was on. Could I just make this easier for her and tell her whether or not I was a leftist, so that she knew whether to protest my upcoming speech? I pointed her to some of my writing on economics and explained that the Left/Right categorization may be a bit overdetermined in my case. She thanked me but asked me to please just give her a yes-or-no answer.
“Yes and no,” I replied, breaking the binary conventions of digital choice making. I didn’t hear back.
DO DRONE PILOTS DREAM OF ELECTRIC KILLS?
I was working on a story about Predator drones, the unmanned aircraft that the US Air Force flies over war zones in the Middle East and Central Asia collecting reconnaissance, taking out targets, and occasionally launching a few Hellfire missiles.
The operators—pilots, as they’re called—sit behind computer terminals outfitted with joysticks, monitors, interactive maps, and other cockpit gear, remotely controlling drones on the other side of the world.
I was most interested in the ethical challenges posed by remote control warfare. Air Force commanders told me that within just a few years a majority if not all American fighter planes would be flown remotely by pilots who were thousands of miles away from the war zone. The simulation of flight, the resolution of the cameras, the instantaneousness of feedback, and the accuracy of the controls have rendered the pilot working through virtual reality just as effective as if he or she were in the cockpit. So why risk human troops along with the hardware? There’s no good reason, other than the nagging sense that on some level it’s not fair. What does it mean to fight a war where only one side’s troops are in jeopardy, and the other side may as well be playing a video game? Will our troops and our public become even more disconnected from the human consequences and collateral damage of our actions?
To my great surprise, I found out that the levels of clinical distress in drone crews were as high as, and sometimes even higher than, those of crews flying in real planes.33 These were not desensitized video-game players wantonly dropping ordnance on digital-screen pixels that may as well have been bugs. They were soul-searching, confused, guilt-ridden young men, painfully aware of the lives they were taking. Thirty-four percent experienced burnout, and over 25 percent exhibited clinical levels of distress. These responses occurred in spite of the Air Force’s efforts to select the most well-adjusted pilots for the job.
Air Force researchers blamed the high stress levels on the fact that the drone pilots all had combat experience and that the drone missions probably caused them to re-experience the stress of their real-world missions. After observing the way these pilots work and live, however, I’m not so sure it’s their prior combat experience that makes drone missions so emotionally taxing so much as these young men’s concurrent life experience. Combat is extraordinarily stressful, but at least battlefield combat pilots have one another when they get out of their planes. They are far from home, living the war 24/7 from their military base or aircraft carrier. By contrast, after the drone pilot finishes his mission, he gets in his car and drives home to his family in suburban Las Vegas. He passes the mashed potatoes to his wife and tries to talk about the challenges of elementary school with his second grader, while the video images of the Afghan targets he neutralized that afternoon still dance on his retinas.
In one respect, this is good news. It means that we can remain emotionally connected to the effects of our virtual activities. We can achieve a sort of sync with things that are very far away. In fact, the stress experienced by drone pilots was highest when they either killed or witnessed the killing of people they had observed over long periods of time. Even if they were blowing up a notorious terrorist, simply having spied on the person going through his daily routine, day after day, generated a kind of sympathetic response—a version of sync.
The stress, depression, and anxiety experienced by these soldiers, however, came from living two lives at once: the life of a soldier making kills by day and that of a daddy hugging his toddler at night. Technology allows for this dual life, this ability to live in two different places—as two very different people—at the same time. The inability to reconcile these two identities and activities results in digiphrenia.
Drone pilots offer a stark example of the same present shock most of us experience to a lesser degree as we try to negotiate the contrast between the multiple identities and activities our digital technologies demand of us. Our computers have no problem functioning like this. When a computer gets a problem, or a series of problems, it allocates a portion of its memory to each part. More accurately, it breaks down all the tasks it needs to accomplish into buckets of similar tasks, and then allocates a portion of its memory and processing time to each bucket. The different portions then report back with their answers, and the chip puts them back together or outputs them or does whatever it needs to next.
People do not work like this. Yes, we do line up our tasks in a similar fashion. A great waiter may scan the dining room in order to strategize the most efficient way to serve everyone. So, instead of walking from the kitchen to the floor four separate times, he will take an order from one table, check on another’s meal while removing a plate, and then clear the dessert from a third table and return to the kitchen with the order and the dirty dishes. The human waiter strategizes a linear sequence.
The computer chip would break down the tasks differently, lifting the plates from both tables simultaneously with one part of its memory, taking the order from each person with another part (broken down into as many simultaneous order-taking sections as there are people), and checking on the meal with another. The human figures out the best order to do things one after the other, while the chip divides itself into separate waiters ideally suited for each separate task. The mistake so many of us make with digital technology is to imitate rather than simply exploit its multitasking capabilities. We try to maximize our efficiency by distributing our resources instead of lining them up. Because we can’t actually be more than one person at the same time, we experience digiphrenia instead of sync.
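For readers who think in code, the contrast can be sketched directly. In the hypothetical Python fragment below (the task names, timings, and worker pool are invented for illustration, not a description of any real waiter or chip), the first loop handles the work the way the human does, one task after another, while the thread pool splits the same work into simultaneous workers the way the chip does:

```python
# Illustrative only: sequential versus parallel handling of the same tasks.

from concurrent.futures import ThreadPoolExecutor
import time


def do_task(name):
    """Stand-in for any single job: take an order, clear a plate, check a meal."""
    time.sleep(0.1)  # pretend the task takes a moment
    return f"done: {name}"


tasks = ["take order, table 1", "clear plate, table 2", "check meal, table 3"]

# The human strategy: one mind, one linear sequence.
for task in tasks:
    print(do_task(task))

# The chip strategy: split into as many "waiters" as there are tasks,
# run them at the same time, then gather the answers back together.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    for result in pool.map(do_task, tasks):
        print(result)
```

The pool really does run its workers at once; a person attempting the same split merely switches attention back and forth among the tasks, which is the gap this chapter calls digiphrenia.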
The first place we feel this is in our capacity to think and perform effectively. There have been more than enough studies done and books written about distraction and multitasking for us to accept—however begrudgingly—the basic fact that human beings cannot do more than one thing at a time.34 As Stanford cognitive scientist Clifford Nass has shown pretty conclusively, even the smartest university students who believe they are terrific at multitasking actually perform much worse than when they do one thing at a time. Their subjective experience is that they got more done even when they accomplished much less, and with much less accuracy. Other studies show that multitasking and interruptions hurt our ability to remember.
We do have the ability to do more than one thing at a time. For instance, we each have parts of our brain that deal with automatic functions like breathing and the beating of our heart while our conscious attention focuses on a task like reading or writing. But the kind of plate spinning we associate with multitasking doesn’t really happen. We can’t be on the phone while watching TV; rather, we can hold the phone to our ear while our eyes look at the TV set and then switch our awareness back and forth between the two activities. This allows us to enjoy the many multifaceted, multisensory pleasures of life—from listening to a baseball game while washing the car to sitting in the tub while enjoying a book. In either case, though, we stop focusing on washing the car in order to hear about the grand slam and pause reading in order to decide whether to make the bath water hotter.
It’s much more difficult, and counterproductive, to attempt to engage in two active tasks at once. We cannot write a letter while reconciling the checkbook or—as the rising accident toll indicates—drive while sending text messages. Yet the more we use the Internet to conduct our work and lives, the more compelled we are to adopt its processors’ underlying strategy. The more choices are on offer, the more windows remain open, and the more options lie waiting. Each open program is another mouth for our attention to feed.
This competition for our attention is fierce. Back in the mid-1990s, Wired magazine announced to the world that although digital real estate was infinite, human attention was finite; there are only so many “eyeball hours” in a day per human. This meant that the new market—the new scarcity—would be over human attention itself. Sticky websites were designed to keep eyeballs glued to particular spots on the Internet, while compelling sounds were composed to draw us to check on incoming-message traffic. In a world where attention is the new commodity, it is no surprise that diagnoses of the formerly obscure attention deficit disorder are now so commonplace as to be conferred by school guidance counselors. Since that Wired cover in 1997, Ritalin prescriptions have gone up tenfold.
Kids aren’t the only ones drugging up. College students and younger professionals now use Ritalin and another form of speed, Adderall, as “cognitive enhancers.”35 Just as professional athletes may use steroids to boost their performance, stockbrokers and finals takers can gain an edge over their competitors and move higher up on the curve. More than just keeping a person awake, these drugs are cognitive accelerators; in a sense, they speed up the pace at which a person can move between the windows. They push on the gas pedal of the mind, creating momentum to carry a person through from task to task, barreling over what may seem like gaps or discontinuities at a slower pace.
The deliberate style of cognition we normally associate with reading or contemplation gives way to the more superficial, rapid-fire, and compulsive activities of the net. If we get good enough at this, we may even become what James G. March calls a “fast learner,” capable of grasping the gist of ideas and processes almost instantaneously. The downside is that “fast learners tend to track noisy signals too closely and to confuse themselves by making changes before the effects of previous actions are clear.”36 It’s an approach that works better for bouncing from a Twitter stream to a blog comments field in order to parse the latest comments from a celebrity on his way to rehab than it does for solving a problem with a genetic algorithm.
But it’s also the quiz-show approach now favored by public schools, whose classrooms reward the first hand up. Intelligence is equated with speed, and accomplishment with the volume of work finished. The elementary school in the town where I live puts a leaf on the wall for every book a child reads, yet has no way of measuring or rewarding the depth of understanding or thought that took place—or didn’t. More, faster, is better. Kids compete with the clock when they take their tests, as if preparing for a workplace in which their boss will tell them “pencils down.” The test results, in turn, are used to determine school funding and teacher salaries. All children left behind.