Whether it’s Wiley Post in a cockpit, Serena Williams on a tennis court, or Magnus Carlsen at a chessboard, the otherworldly talent of the virtuoso springs from automaticity. What looks like instinct is hard-won skill. Those changes in the brain don’t happen through passive observation. They’re generated through repeated confrontations with the unexpected. They require what the philosopher of mind Hubert Dreyfus terms “experience in a variety of situations, all seen from the same perspective but requiring different tactical decisions.”28 Without lots of practice, lots of repetition and rehearsal of a skill in different circumstances, you and your brain will never get really good at anything, at least not anything complicated. And without continuing practice, any talent you do achieve will get rusty.
It’s popular now to suggest that practice is all you need. Work at a skill for ten thousand hours or so, and you’ll be blessed with expertise—you’ll become the next great pastry chef or power forward. That, unhappily, is an exaggeration. Genetic traits, both physical and intellectual, do play an important role in the development of talent, particularly at the highest levels of achievement. Nature matters. Even our desire and aptitude for practice has, as Marcus points out, a genetic component: “How we respond to experience, and even what type of experience we seek, are themselves in part functions of the genes we are born with.”29 But if genes establish, at least roughly, the upper bounds of individual talent, it’s only through practice that a person will ever reach those limits and fulfill his or her potential. While innate abilities make a big difference, write psychology professors David Hambrick and Elizabeth Meinz, “research has left no doubt that one of the largest sources of individual differences in performance on complex tasks is simply what and how much people know: declarative, procedural, and strategic knowledge acquired through years of training and practice in a domain.”30
Automaticity, as its name makes clear, can be thought of as a kind of internalized automation. It’s the body’s way of making difficult but repetitive work routine. Physical movements and procedures get programmed into muscle memory; interpretations and judgments are made through the instant recognition of environmental patterns apprehended by the senses. The conscious mind, scientists discovered long ago, is surprisingly cramped, its capacity for taking in and processing information limited. Without automaticity, our consciousness would be perpetually overloaded. Even very simple acts, such as reading a sentence in a book or cutting a piece of steak with a knife and fork, would strain our cognitive capabilities. Automaticity gives us more headroom. It increases, to put a different spin on Alfred North Whitehead’s observation, “the number of important operations which we can perform without thinking about them.”
Tools and other technologies, at their best, do something similar, as Whitehead appreciated. The brain’s capacity for automaticity has limits of its own. Our unconscious mind can perform a lot of functions quickly and efficiently, but it can’t do everything. You might be able to memorize the times table up to twelve or even twenty, but you would probably have trouble memorizing it much beyond that. Even if your brain didn’t run out of memory, it would probably run out of patience. With a simple pocket calculator, though, you can automate even very complicated mathematical procedures, ones that would tax your unaided brain, and free up your conscious mind to consider what all that math adds up to. But that only works if you’ve already mastered basic arithmetic through study and practice. If you use the calculator to bypass learning, to carry out procedures that you haven’t learned and don’t understand, the tool will not open up new horizons. It won’t help you gain new mathematical knowledge and skills. It will simply be a black box, a mysterious number-producing mechanism. It will be a barrier to higher thought rather than a spur to it.
That’s what computer automation often does today, and it’s why Whitehead’s observation has become misleading as a guide to technology’s consequences. Rather than extending the brain’s innate capacity for automaticity, automation too often becomes an impediment to automatization. In relieving us of repetitive mental exercise, it also relieves us of deep learning. Both complacency and bias are symptoms of a mind that is not being challenged, that is not fully engaged in the kind of real-world practice that generates knowledge, enriches memory, and builds skill. The problem is compounded by the way computer systems distance us from direct and immediate feedback about our actions. As the psychologist K. Anders Ericsson, an expert on talent development, points out, regular feedback is essential to skill building. It’s what lets us learn from our mistakes and our successes. “In the absence of adequate feedback,” Ericsson explains, “efficient learning is impossible and improvement only minimal even for highly motivated subjects.”31
Automaticity, generation, flow: these mental phenomena are diverse, they’re complicated, and their biological underpinnings are understood only fuzzily. But they are all related, and they tell us something important about ourselves. The kinds of effort that give rise to talent—characterized by challenging tasks, clear goals, and direct feedback—are very similar to those that provide us with a sense of flow. They’re immersive experiences. They also describe the kinds of work that force us to actively generate knowledge rather than passively take in information. Honing our skills, enlarging our understanding, and achieving personal satisfaction and fulfillment are all of a piece. And they all require tight connections, physical and mental, between the individual and the world. They all require, to quote the American philosopher Robert Talisse, “getting your hands dirty with the world and letting the world kick back in a certain way.”32 Automaticity is the inscription the world leaves on the active mind and the active self. Know-how is the evidence of the richness of that inscription.
From rock climbers to surgeons to pianists, Mihaly Csikszentmihalyi explains, people who “routinely find deep enjoyment in an activity illustrate how an organized set of challenges and a corresponding set of skills result in optimal experience.” The jobs or hobbies they engage in “afford rich opportunities for action,” while the skills they develop allow them to make the most of those opportunities. The ability to act with aplomb in the world turns all of us into artists. “The effortless absorption experienced by the practiced artist at work on a difficult project always is premised upon earlier mastery of a complex body of skills.”33 When automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.
Interlude, with Dancing Mice
“SINCE 1903 I HAVE HAD UNDER OBSERVATION CONSTANTLY from two to one hundred dancing mice.” So confessed the Harvard psychologist Robert M. Yerkes in the opening chapter of his 1907 book The Dancing Mouse, a 290-page paean to a rodent. But not just any rodent. The dancing mouse, Yerkes predicted, would prove as important to the behavioralist as the frog was to the anatomist.
When a local Cambridge doctor presented a pair of Japanese dancing mice to the Harvard Psychological Laboratory as a gift, Yerkes was underwhelmed. It seemed “an unimportant incident in the course of my scientific work.” But in short order he became infatuated with the tiny creatures and their habit of “whirling around on the same spot with incredible rapidity.” He bred scores of them, assigning each a number and keeping a meticulous log of its markings, gender, birth date, and ancestry. A “really admirable animal,” the dancing mouse was, he wrote, smaller and weaker than the average mouse—it was barely able to hold itself upright or “cling to an object”—but it proved “an ideal subject for the experimental study of many of the problems of animal behavior.” The breed was “easily cared for, readily tamed, harmless, incessantly active, and it lends itself satisfactorily to a large number of experimental situations.”1
At the time, psychological research using animals was still new. Ivan Pavlov had only begun his experiments on salivating dogs in the 1890s, and it wasn’t until 1900 that an American graduate student named Willard Small dropped a rat into a maze and watched it scurry about. With his dancing mice, Yerkes greatly expanded the scope of animal studies. As he catalogued in The Dancing Mouse, he used the rodents as test subjects in the exploration of, among other things, balance and equilibrium, vision and perception, learning and memory, and the inheritance of behavioral traits. The mice were “experiment-impelling,” he reported. “The longer I observed and experimented with them, the more numerous became the problems which the dancers presented to me for solution.”2
Early in 1906, Yerkes began what would turn out to be his most important and influential experiments on the dancers. Working with his student John Dillingham Dodson, he put, one by one, forty of the mice into a wooden box. At the far end of the box were two passageways, one painted white, the other black. If a mouse tried to enter the black passageway, it received, as Yerkes and Dodson later wrote, “a disagreeable electric shock.” The intensity of the jolt varied. Some mice were given a weak shock, others were given a strong one, and still others were given a moderate one. The researchers wanted to see if the strength of the stimulus would influence the speed with which the mice learned to avoid the black passage and go into the white one. What they discovered surprised them. The mice receiving the weak shock were relatively slow to distinguish the white and the black passageways, as might be expected. But the mice receiving the strong shock exhibited equally slow learning. The rodents quickest to understand their situation and modify their behavior were the ones given a moderate shock. “Contrary to our expectations,” the scientists reported, “this set of experiments did not prove that the rate of habit-formation increases with increase in the strength of the electric stimulus up to the point at which the shock becomes positively injurious. Instead an intermediate range of intensity of stimulation proved to be most favorable to the acquisition of a habit.”3
A subsequent series of tests brought another surprise. The scientists put a new group of mice through the same drill, but this time they increased the brightness of the light in the white passageway and dimmed the light in the black one, strengthening the visual contrast between the two. Under this condition, the mice receiving the strongest shock were the quickest to avoid the black doorway. Learning didn’t fall off as it had in the first go-round. Yerkes and Dodson traced the difference in the rodents’ behavior to the fact that the setup of the second experiment had made things easier for the animals. Thanks to the greater visual contrast, the mice didn’t have to think as hard in distinguishing the passageways and associating the shock with the dark corridor. “The relation of the strength of electrical stimulus to rapidity of learning or habit-formation depends upon the difficultness of the habit,” they explained.4 As a task becomes harder, the optimum amount of stimulation decreases. In other words, when the mice faced a really tough challenge, both an unusually weak stimulus and an unusually strong stimulus impeded their learning. In something of a Goldilocks effect, a moderate stimulus inspired the best performance.
Since its publication in 1908, the paper that Yerkes and Dodson wrote about their experiments, “The Relation of Strength of Stimulus to Rapidity of Habit-Formation,” has come to be recognized as a landmark in the history of psychology. The phenomenon they discovered, known as the Yerkes-Dodson law, has been observed, in various forms, far beyond the world of dancing mice and differently colored doorways. It affects people as well as rodents. In its human manifestation, the law is usually depicted as a bell curve that plots the relation of a person’s performance at a difficult task to the level of mental stimulation, or arousal, the person is experiencing.
At very low levels of stimulation, the person is so disengaged and uninspired as to be moribund; performance flat-lines. As stimulation picks up, performance strengthens, rising steadily along the left side of the bell curve until it reaches a peak. Then, as stimulation continues to intensify, performance drops off, descending steadily down the right side of the bell. When stimulation reaches its most intense level, the person essentially becomes paralyzed with stress; performance again flat-lines. Like dancing mice, we humans learn and perform best when we’re at the peak of the Yerkes-Dodson curve, where we’re challenged but not overwhelmed. At the top of the bell is where we enter the state of flow.
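To make the shape of the law concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that performance traces a Gaussian-style inverted U over arousal, with the peak shifting toward lower arousal as the task grows harder, echoing Yerkes and Dodson’s finding that the optimum stimulus weakens as difficulty increases. The function, its constants, and the name `performance` are hypothetical, not taken from the original data.

```python
import numpy as np

def performance(arousal, difficulty):
    """Toy inverted-U model of the Yerkes-Dodson relation.

    Performance peaks at a moderate level of arousal, and the peak
    (the optimal arousal) shifts lower as the task gets harder --
    mirroring the mice, which learned the difficult discrimination
    best under a moderate shock. All constants are illustrative only.
    """
    optimal = 1.0 / difficulty    # harder task -> lower optimal arousal
    width = 0.5 / difficulty      # harder task -> narrower tolerance band
    return np.exp(-((arousal - optimal) ** 2) / (2 * width ** 2))

arousal_levels = np.linspace(0, 2, 9)
for difficulty in (1.0, 2.0):     # an easy versus a hard discrimination
    scores = performance(arousal_levels, difficulty)
    best = arousal_levels[np.argmax(scores)]
    print(f"difficulty {difficulty}: best arousal is about {best:.2f}")
```

Run as written, the sketch reports that the easier task is performed best at a higher arousal level than the harder one, which is the inverted-U pattern the curve is meant to convey.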
The Yerkes-Dodson law has turned out to have particular pertinence to the study of automation. It helps explain many of the unexpected consequences of introducing computers into workplaces and processes. In automation’s early days, it was thought that software, by handling routine chores, would reduce people’s workload and enhance their performance. The assumption was that workload and performance were inversely correlated. Ease a person’s mental strain, and she’ll be smarter and sharper on the job. The reality has turned out to be more complicated. Sometimes, computers succeed in moderating workload in a way that allows a person to excel at her work, devoting her full attention to the most pressing tasks. In other cases, automation ends up reducing workload too much. The worker’s performance suffers as she drifts to the left side of the Yerkes-Dodson curve.
We all know about the ill effects of information overload. It turns out that information underload can be equally debilitating. However well intentioned, making things easy for people can backfire. Human-factors scholars Mark Young and Neville Stanton have found evidence that a person’s “attentional capacity” actually “shrinks to accommodate reductions in mental workload.” In the operation of automated systems, they argue, “underload is possibly of greater concern [than overload], as it is more difficult to detect.”5 Researchers worry that the lassitude produced by information underload is going to be a particular danger with coming generations of automotive automation. As software takes over more steering and braking chores, the person behind the wheel won’t have enough to do and will tune out. Making matters worse, the driver will likely have received little or no training in the use and risks of automation. Some routine accidents may be avoided, but we’re going to end up with even more bad drivers on the road.
In the worst cases, automation actually places added and unexpected demands on people, burdening them with extra work and pushing them to the right side of the Yerkes-Dodson curve. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a human-factors expert at Virginia’s Old Dominion University, explains, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”6 If, for example, the operator of a highly automated chemical plant is suddenly plunged into a fast-moving crisis, he may be overwhelmed by the need to monitor information displays and manipulate various computer controls while also following checklists, responding to alerts and alarms, and taking other emergency measures. Instead of relieving him of distractions and stress, computerization forces him to deal with all sorts of additional tasks and stimuli. Similar problems crop up during cockpit emergencies, when pilots are required to input data into their flight computers and scan information displays even as they’re struggling to take manual control of the plane. Anyone who’s gone off course while following directions from a mapping app knows firsthand how computer automation can cause sudden spikes in workload. It’s not easy to fiddle with a smartphone while driving a car.
What we’ve learned is that automation has a sometimes-tragic tendency to increase the complexity of a job at the worst possible moment—when workers already have too much to handle. The computer, introduced as an aid to reduce the chances of human error, ends up making it more likely that people, like shocked mice, will make the wrong move.
CHAPTER FIVE
WHITE-COLLAR COMPUTER
LATE IN THE SUMMER OF 2005, researchers at the venerable RAND Corporation in California made a stirring prediction about the future of American medicine. Having completed what they called “the most detailed analysis ever conducted of the potential benefits of electronic medical records,” they declared that the U.S. health-care system “could save more than $81 billion annually and improve the quality of care” if hospitals and physicians automated their record keeping. The savings and other benefits, which RAND had estimated “using computer simulation models,” made it clear, one of the think tank’s top scientists said, “that it is time for the government and others who pay for health care to aggressively promote health information technology.”1 The last sentence in a subsequent report detailing the research underscored the sense of urgency: “The time to act is now.”2
When the RAND study appeared, excitement about the computerization of medicine was already running high. Early in 2004, George W. Bush had issued a presidential order establishing the Health Information Technology Adoption Initiative with the goal of digitizing most U.S. medical records within ten years. By the end of 2004, the federal government was handing out millions of dollars in grants to encourage the purchase of automated systems by doctors and hospitals. In June of 2005, the Department of Health and Human Services established a task force of government officials and industry executives, the American Health Information Community, to help spur the adoption of electronic medical records. The RAND research, by putting the anticipated benefits of electronic records into hard and seemingly reliable numbers, stoked both the excitement and the spending. As the New York Times would later report, the study “helped drive explosive growth in the electronic records industry and encouraged the federal government to give billions of dollars in financial incentives to hospitals and doctors that put the systems in place.”3 Shortly after being sworn in as president in 2009, Barack Obama cited the RAND numbers when he announced a program to dole out an additional $30 billion in government funds to subsidize purchases of electronic medical record (EMR) systems. A frenzy of investment ensued, as some three hundred thousand doctors and four thousand hospitals availed themselves of Washington’s largesse.4