To Ann
CONTENTS
INTRODUCTION
ALERT FOR OPERATORS
CHAPTER ONE
PASSENGERS
CHAPTER TWO
THE ROBOT AT THE GATE
CHAPTER THREE
ON AUTOPILOT
CHAPTER FOUR
THE DEGENERATION EFFECT
INTERLUDE, WITH DANCING MICE
CHAPTER FIVE
WHITE-COLLAR COMPUTER
CHAPTER SIX
WORLD AND SCREEN
CHAPTER SEVEN
AUTOMATION FOR THE PEOPLE
INTERLUDE, WITH GRAVE ROBBER
CHAPTER EIGHT
YOUR INNER DRONE
CHAPTER NINE
THE LOVE THAT LAYS THE SWALE IN ROWS
NOTES
ACKNOWLEDGMENTS
INDEX
No one
to witness
and adjust, no one to drive the car
—William Carlos Williams
THE GLASS CAGE
INTRODUCTION
ALERT FOR OPERATORS
ON JANUARY 4, 2013, the first Friday of a new year, a dead day newswise, the Federal Aviation Administration released a one-page notice. It had no title. It was identified only as a “safety alert for operators,” or SAFO. Its wording was terse and cryptic. In addition to being posted on the FAA’s website, it was sent to all U.S. airlines and other commercial air carriers. “This SAFO,” the document read, “encourages operators to promote manual flight operations when appropriate.” The FAA had collected evidence, from crash investigations, incident reports, and cockpit studies, indicating that pilots had become too dependent on autopilots and other computerized systems. Overuse of flight automation, the agency warned, could “lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state.” It could, in blunter terms, put a plane and its passengers in jeopardy. The alert concluded with a recommendation that airlines, as a matter of operational policy, instruct pilots to spend less time flying on autopilot and more time flying by hand.1
This is a book about automation, about the use of computers and software to do things we used to do ourselves. It’s not about the technology or the economics of automation, nor is it about the future of robots and cyborgs and gadgetry, though all those things enter into the story. It’s about automation’s human consequences. Pilots have been out in front of a wave that is now engulfing us. We’re looking to computers to shoulder more of our work, on the job and off, and to guide us through more of our everyday routines. When we need to get something done today, more often than not we sit down in front of a monitor, or open a laptop, or pull out a smartphone, or strap a net-connected accessory to our forehead or wrist. We run apps. We consult screens. We take advice from digitally simulated voices. We defer to the wisdom of algorithms.
Computer automation makes our lives easier, our chores less burdensome. We’re often able to accomplish more in less time—or to do things we simply couldn’t do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.
CHAPTER ONE
PASSENGERS
AMONG THE HUMILIATIONS OF MY TEENAGE YEARS WAS ONE that might be termed psycho-mechanical: my very public struggle to master a manual transmission. I got my driver’s license early in 1975, not long after I turned sixteen. The previous fall, I had taken a driver’s ed course with a group of my high-school classmates. The instructor’s Oldsmobile, which we used for our on-the-road lessons and then for our driving tests at the dread Department of Motor Vehicles, was an automatic. You pressed the gas pedal, you turned the wheel, you hit the brakes. There were a few tricky maneuvers—making a three-point turn, backing up in a straight line, parallel parking—but with a little practice among pylons in the school parking lot, even they became routine.
License in hand, I was ready to roll. There was just one last roadblock. The only car available to me at home was a Subaru sedan with a stick shift. My dad, not the most hands-on of parents, granted me a single lesson. He led me out to the garage one Saturday morning, plopped himself down behind the wheel, and had me climb into the passenger seat beside him. He placed my left palm over the shift knob and guided my hand through the gears: “That’s first.” Brief pause. “Second.” Brief pause. “Third.” Brief pause. “Fourth.” Brief pause. “Down over here”—a pain shot through my wrist as it twisted into an unnatural position—“is Reverse.” He glanced at me to confirm I had it all down. I nodded helplessly. “And that”—wiggling my hand back and forth—“that’s Neutral.” He gave me a few tips about the speed ranges of the four forward gears. Then he pointed to the clutch pedal he had pinned beneath his loafer. “Make sure you push that in while you shift.”
I proceeded to make a spectacle of myself on the roads of the small New England town where we lived. The car would buck as I tried to find the correct gear, then lurch forward as I mistimed the release of the clutch. I’d stall at every red light, then stall again halfway out into the intersection. Hills were a horror. I’d let the clutch out too quickly, or too slowly, and the car would roll backward until it came to rest against the bumper of the vehicle behind me. Horns were honked, curses cursed, birds flipped. What made the experience all the more excruciating was the Subaru’s yellow paint job—the kind of yellow you get with a kid’s rain slicker or a randy male goldfinch. The car was an eye magnet, my flailing impossible to miss.
From my putative friends, I received no sympathy. They found my struggles a source of endless, uproarious amusement. “Grind me a pound!” one of them would yell with glee from the backseat whenever I’d muff a shift and set off a metallic gnashing of gear teeth. “Smooth move,” another would snigger as the engine rattled to a stall. The word “spaz”—this was well before anyone had heard of political correctness—was frequently lobbed my way. I had a suspicion that my incompetence with the stick was something my buddies laughed about behind my back. The metaphorical implications were not lost on me. My manhood, such as it was at sixteen, felt deflated.
But I persisted—what choice did I have?—and after a week or two I began to get the hang of it. The gearbox loosened up and became more forgiving. My arms and legs stopped working at cross-purposes and started cooperating. Soon, I was shifting without thinking about it. It just happened. The car no longer stalled or bucked or lurched. I no longer had to sweat the hills or the intersections. The transmission and I had become a team. We meshed. I took a quiet pride in my accomplishment.
Still, I coveted an automatic. Although stick shifts were fairly common back then, at least in the econoboxes and junkers that kids drove, they had already taken on a behind-the-times, hand-me-down quality. They seemed fusty, a little yesterday. Who wanted to be “manual” when you could be “automatic”? It was like the difference between scrubbing dishes by hand and sticking them in a dishwasher. As it turned out, I didn’t have to wait long for my wish to be granted. Two years after I got my license, I managed to total the Subaru during a late-night misadventure, and not long afterward I took stewardship of a used, cream-colored, two-door Ford Pinto. The car was a piece of crap—some now see the Pinto as marking the nadir of American manufacturing in the twentieth century—but to me it was redeemed by its automatic transmission.
I was a new man. My left foot, freed from the demands of the clutch, became an appendage of leisure. As I tooled around town, it would sometimes tap along jauntily to the thwacks of Charlie Watts or the thuds of John Bonham—the Pinto also had a built-in eight-track deck, another touch of modernity—but more often than not it just stretched out in its little nook under the left side of the dash and napped. My right hand became a beverage holder. I not only felt renewed and up-to-date. I felt liberated.
It didn’t last. The pleasures of having less to do were real, but they faded. A new emotion set in: boredom. I didn’t admit it to anyone, hardly to myself even, but I began to miss the gear stick and the clutch pedal. I missed the sense of control and involvement they had given me—the ability to rev the engine as high as I wanted, the feel of the clutch releasing and the gears grabbing, the tiny thrill that came with a downshift at speed. The automatic made me feel a little less like a driver and a little more like a passenger. I came to resent it.
MOTOR AHEAD thirty-five years, to the morning of October 9, 2010. One of Google’s in-house inventors, the German-born roboticist Sebastian Thrun, makes an extraordinary announcement in a blog post. The company has developed “cars that can drive themselves.” These aren’t some gawky, gearhead prototypes puttering around the Googleplex’s parking lot. These are honest-to-goodness street-legal vehicles—Priuses, to be precise—and, Thrun reveals, they’ve already logged more than a hundred thousand miles on roads and highways in California and Nevada. They’ve cruised down Hollywood Boulevard and the Pacific Coast Highway, gone back and forth over the Golden Gate Bridge, circled Lake Tahoe. They’ve merged into freeway traffic, crossed busy intersections, and inched through rush-hour gridlock. They’ve swerved to avoid collisions. They’ve done all this by themselves. Without human help. “We think this is a first in robotics research,” Thrun writes, with sly humility.1
Building a car that can drive itself is no big deal. Engineers and tinkerers have been constructing robotic and remote-controlled automobiles since at least the 1980s. But most of them were crude jalopies. Their use was restricted to test-drives on closed tracks or to races and rallies in deserts and other remote areas, far away from pedestrians and police. The Googlemobile, Thrun’s announcement made clear, is different. What makes it such a breakthrough, in the history of both transport and automation, is its ability to navigate the real world in all its chaotic, turbulent complexity. Outfitted with laser range-finders, radar and sonar transmitters, motion detectors, video cameras, and GPS receivers, the car can sense its surroundings in minute detail. It can see where it’s going. And by processing all the streams of incoming information instantaneously—in “real time”—its onboard computers are able to work the accelerator, the steering wheel, and the brakes with the speed and sensitivity required to drive on actual roads and respond fluidly to the unexpected events that drivers always encounter. Google’s fleet of self-driving cars has now racked up close to a million miles, and the vehicles have caused just one serious accident. That was a five-car pileup near the company’s Silicon Valley headquarters in 2011, and it doesn’t really count. It happened, as Google was quick to announce, “while a person was manually driving the car.”2
Autonomous automobiles have a ways to go before they start chauffeuring us to work or ferrying our kids to soccer games. Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.
Progress will sprint forward nonetheless. Much of the Google test cars’ hardware and software will come to be incorporated into future generations of cars and trucks. Since the company went public with its autonomous vehicle program, most of the world’s major carmakers have let it be known that they have similar efforts under way. The goal, for the time being, is not so much to create an immaculate robot-on-wheels as to continue to invent and refine automated features that enhance safety and convenience in ways that get people to buy new cars. Since I first turned the key in my Subaru’s ignition, the automation of driving has already come a long way. Today’s automobiles are stuffed with electronic gadgetry. Microchips and sensors govern the workings of the cruise control, the antilock brakes, the traction and stability mechanisms, and, in higher-end models, the variable-speed transmission, parking-assist system, collision-avoidance system, adaptive headlights, and dashboard displays. Software already provides a buffer between us and the road. We’re not so much controlling our cars as sending electronic inputs to the computers that control them.
In coming years, we’ll see responsibility for many more aspects of driving shift from people to software. Luxury-car makers like Infiniti, Mercedes, and Volvo are rolling out models that combine radar-assisted adaptive cruise control, which works even in stop-and-go traffic, with computerized steering systems that keep a car centered in its lane and brakes that slam themselves on in emergencies. Other manufacturers are rushing to introduce even more advanced controls. Tesla Motors, the electric car pioneer, is developing an automotive autopilot that “should be able to [handle] 90 percent of miles driven,” according to the company’s ambitious chief executive, Elon Musk.3
The arrival of Google’s self-driving car shakes up more than our conception of driving. It forces us to change our thinking about what computers and robots can and can’t do. Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.4
In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe. When you make a turn through a busy intersection in your car, neurological studies show, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs.5 But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system, outside the ambit of your conscious mind. The mental processing goes on without your awareness.
Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.
Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.
Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.