
The Design of Future Things


by Don Norman


  The technologists will try to reassure us that all technologies start off as weak and underpowered, that eventually their deficits are overcome and they become safe and trustworthy. At one level they are correct. Steam engines and steamships used to explode; they seldom do anymore. Early aircraft crashed frequently. Today, they hardly ever do. Remember Jim’s problem with the cruise control that regained speed in an inappropriate location? I am certain that this particular situation can be avoided in future designs by coupling the speed control with the navigation system, or perhaps by developing systems in which the roads themselves transmit the allowable speeds to the cars (hence, no more ability to exceed speed limits), or better yet, by having the car itself determine safe speeds given the road, its curvature, slipperiness, and the presence of other traffic or people.

  I am a technologist. I believe in making lives richer and more rewarding through the use of science and technology. But that is not where our present path is taking us. Today we are confronting a new breed of machines with intelligence and autonomy, machines that can indeed take over for us in many situations. In many cases, they will make our lives more effective, more fun, and safer. In others, however, they will frustrate us, get in our way, and even increase danger. For the first time, we have machines that are attempting to interact with us socially.

  The problems that we face with technology are fundamental. They cannot be overcome by following old pathways. We need a calmer, more reliable, more humane approach. We need augmentation, not automation.

  CHAPTER TWO

  The Psychology of People & Machines

  Three scenarios are possible now:

  • “Pull up! Pull up!” cries the airplane to the pilots when it decides that the airplane is too low for safety.

  • “Beep, beep,” signals the automobile, trying to get the driver’s attention, while tightening the seat belts, straightening the seat backs, and pretensioning the brakes. It is watching the driver with its video camera, and because the driver is not paying attention to the road, it applies the brakes.

  • “Bing, bing,” goes the dishwasher, signaling that the dishes are clean, even if it is 3 a.m. and the message serves no purpose except to wake you up.

  Three scenarios likely to be possible in the future:

  • “No,” says the refrigerator. “Not eggs again. Not until your weight comes down, and your cholesterol levels are lower. The scale tells me you still have to lose about five pounds, and the clinic keeps pinging me about your cholesterol. This is for your own good, you know.”

  • “I just checked your appointments diary in your smart phone,” says the automobile as you get into the car after a day’s work. “You have free time, so I’ve programmed that scenic route with those curves you like so much instead of the highway—I know you’ll enjoy driving it. Oh, and I’ve picked your favorite music to go with it.”

  • “Hey,” says your house one morning as you prepare to leave. “What’s the rush? I took out the garbage. Won’t you even say thank you? And can we talk about that nice new controller I’ve been showing you pictures of? It would make me much more efficient, and you know, the Joneses’ house already has one.”

  Some machines are obstinate. Others are temperamental. Some are delicate, some rugged. We commonly apply human attributes to our machines, and often these terms are fittingly descriptive, even though we use them as metaphors or similes. The new kinds of intelligent machines, however, are autonomous or semiautonomous: they create their own assessments, make their own decisions. They no longer need people to authorize their actions. As a result, these descriptions no longer are metaphors—they have become legitimate characterizations.

  The first three scenarios I’ve depicted are already real. Airplane warning systems do indeed cry out, “Pull up!” (usually with a female voice). At least one automobile company has announced a system that monitors the driver with its video camera. If the driver does not appear to be watching the road when its forward-looking radar system senses a potential collision, it sounds an alarm—not with a voice (at least, not yet), but with buzzers and vibration. If the driver still does not respond, the system automatically applies the brakes and prepares the car for a crash. And I have already been awakened in the middle of the night by my dishwasher’s beeps, anxious to tell me that the dishes have been cleaned.

  Much is known about the design of automated systems. Slightly less is known about the interaction between people and these systems, although this too has been a topic of deep study for the past several decades. But these studies have dealt with industrial and military settings, where people were using the machines as part of their jobs. What about everyday people who might have no training, who might only use any particular machine occasionally? We know almost nothing of this situation, but this is what concerns me: untrained, everyday people, you and me, using our household appliances, our entertainment systems, and our automobiles.

  How do everyday people learn how to use the new generation of intelligent devices? Hah! In bits and pieces, by trial and error, with endless feelings of frustration. The designers seem to believe that these devices are so intelligent, so perfect in their operation, that no learning is required. Just tell them what to do and get out of the way. Yes, the devices always come with instruction manuals, often big, thick, heavy ones, but these manuals are neither explanatory nor intelligible. Most do not even attempt to explain how the devices work. Instead, they give magical, mystical names to the mechanisms, oftentimes using nonsensical marketing terms, stringing the words together as in “SmartHomeSensor,” as if naming something explains it.

  The scientific community calls this approach “automagical”: automatic plus magical. The manufacturer wants us to believe in—and trust—the magic. Even when things work well, it is somewhat discomforting to have no idea of how or why. The real problems begin when things go wrong, for then we have no idea how to respond. We are in the horrors of the in-between world. On the one hand, we are far from the science fiction, movieland world populated by autonomous, intelligent robots that always work perfectly. On the other hand, we are moving rapidly away from the world of manual control, one with no automation, where people operate equipment and get the task done.

  “We are just making your life easier,” the companies tell me, “healthier, safer, and more enjoyable. All those good things.” Yes, if the intelligent, automatic devices worked perfectly, we would be happy. If they really were completely reliable, we wouldn’t have to know how they work: automagic would then be just fine. If we had manual control over a task, with manual devices that we understood, we would also be happy. When, however, we get stuck in the in-between world of automatic devices that we don’t understand, that don’t work as expected, and that fail to do the task we want done, then our lives are not made easier, and certainly not more enjoyable.

  A Brief Introduction to the Psychology of People and Machines

  The history of intelligent machines starts with early attempts to develop mechanical automatons, including clockworks and chess-playing machines. The most successful early chess-playing automaton was Wolfgang von Kempelen’s “Turk,” introduced with much fanfare and publicity to the royalty of Europe in 1769. In reality, it was a hoax, with an expert chess player cleverly concealed inside the mechanism, but the fact that the hoax succeeded so well indicates people’s willingness to believe that mechanical devices could indeed be intelligent. The real growth in the development of smart machines didn’t start until the mid-1900s with the development of control theory, servomechanisms and feedback, cybernetics, and information and automata theory. This occurred along with the rapid development of electronic circuits and computers, whose power has doubled roughly every two years. Because we’ve been doing this for more than forty years, today’s circuits are one million times more powerful than those first, early “giant brains.” Think of what will happen in twenty years, when machines are a thousand times more powerful than they are today—or in forty years, when they will be a million times more powerful.
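  The arithmetic behind those multipliers is simple compounding. A minimal sketch in Python (assuming an idealized two-year doubling period, which real hardware only roughly follows):

```python
# Back-of-the-envelope check of the doubling claims above.
# Assumes an idealized two-year doubling period.
def power_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """How many times more powerful circuits become after `years`."""
    return 2 ** (years / doubling_period)

for years in (20, 40):
    print(f"After {years} years: ~{power_multiplier(years):,.0f}x")
# After 20 years: ~1,024x        (the "thousand times" figure)
# After 40 years: ~1,048,576x    (the "million times" figure)
```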

  The first attempts to develop a science of artificial intelligence (AI) also began in the mid-1900s. AI researchers moved the development of intelligent devices from the world of cold, hard, mathematical logic and decision making into the world of soft, ill-defined, human-centered reasoning that uses commonsense reasoning, fuzzy logic, probabilities, qualitative reasoning, and heuristics (“rules of thumb”) rather than precise algorithms. As a result, today’s AI systems can see and recognize objects, understand some spoken and written language, speak, move about the environment, and do complex reasoning.

  Perhaps the most successful use of AI today for everyday activities is in computer games, developing intelligent characters who play against people, creating those intelligent, exasperating personalities in simulation games that seem to enjoy doing things to frustrate their creator, the game player. AI is also used successfully to catch bank and credit card fraud and other suspicious activities. Automobiles use AI for braking, stability control, lane keeping, automatic parking, and other features. In the home, simple AI controls washing machines and dryers, sensing the type of clothing and how dirty the load is, and adjusting the cycle appropriately. In the microwave oven, AI can sense when food is cooked. Simple circuits in digital cameras and camcorders help control focus and exposure, including detecting faces, the better to track them even as they move and to adjust the exposure and focus appropriately. With time, the power and reliability of these AI circuits will increase, while their cost will decrease, so they will show up in a wide variety of devices, not just the most expensive ones. Remember, computer power increases a thousandfold every twenty years, a millionfold every forty.

  Machine hardware is, of course, very different from that of animals. Machines are mostly made of parts with lots of straight lines, right angles, and arcs. There are motors and displays, control linkages and wires. Biology prefers flexibility: tissue, ligaments, and muscles. The brain works through massively parallel computational mechanisms, probably both chemical and electrical, and by settling into stable states. Machine brains, or, more accurately, machine information processing, operate much more quickly than biological neurons but are far less parallel in operation. Human brains are robust, reliable, and creative, marvelously adept at recognizing patterns. We humans tend to be creative, imaginative, and very adaptable to changing circumstances. We find similarities among events, and we use metaphorical expansion of concepts to develop whole new realms of knowledge. Furthermore, human memory, although imprecise, finds relationships and similarities among items that machines would not consider similar at all. And, finally, human common sense is fast and powerful, whereas machine common sense does not exist.

  The evolution of technology is very different from the natural evolution of animals. With mechanical systems, the evolution is entirely up to the designers, who analyze existing systems and make modifications. Machines have evolved over the centuries, in part because our understanding of technology and our ability to invent and develop it have continually improved, in part because the sciences of the artificial have developed, and in part because human needs, and the environment itself, have changed.

  There is, however, one interesting parallel between the evolution of humans and that of intelligent, autonomous machines. Both must function effectively, reliably, and safely in the real world. The world itself, therefore, imposes the same demands and requirements upon all creatures: animal, human, and artificial. Animals and people have evolved complex systems of perception and action, emotion and cognition. Machines need analogous systems. They need to perceive the world and act upon it. They need to think and make decisions, to solve problems and reason. And yes, they need something akin to the emotional processes of people. No, not the same emotions that people have but the machine equivalents—the better to survive the hazards and dangers of the world, take advantage of opportunities, anticipate the consequences of their actions, and reflect upon what has happened and what is yet to come, thereby learning and improving performance. This is true for all autonomous, intelligent systems, animal, human, and machine.

  The Rise of a New Organism—a Hybrid of Machine+Person

  FIGURE 2.1

  Car+driver: a new hybrid organism. Rrrun, a sculpture by Marta Thoma. Photographed by the author from the Palo Alto, California, art collection at Bowden Park.

  For years, researchers have shown that a three-level description of the brain is useful for many purposes, even if it is a radical simplification of its evolution, biology, and operation. These three-level descriptions all build upon the early, pioneering description of the “triune” brain by Paul MacLean, in which the three levels move up from lower structures of the brain (the brainstem) to higher ones (the cortex and frontal cortex), tracing both the evolutionary history and the power and sophistication of brain processing. In my book Emotional Design, I further simplified that analysis for use by designers and engineers. Think of the brain as having three levels of processing:

  • Visceral: The most basic level. Processing here is automatic and subconscious, determined by our biological heritage.

  • Behavioral: This is the home of learned skills, but still mostly subconscious. This processing level initiates and controls much of our behavior. One important contribution is to manage expectations of the results of our actions.

  • Reflective: This is the conscious, self-aware part of the brain, the home of the self and one’s self-image, where we analyze our past and entertain prospective fantasies that we hope—or fear—might happen.

  Were we to build these emotional states into machines, they would provide the same benefits to machines as their states provide us: rapid responses to avoid danger and accident, safety features for both the machines and any people who might be near, and powerful learning cues to improve expectations and enhance performance. Some of this is already happening. Elevators quickly jerk back their doors when they detect an obstacle (usually a recalcitrant human) in their path. Robotic vacuum cleaners avoid sharp drop-offs: fear of falling is built into their circuitry. These are visceral responses: the automatic fear responses prewired into humans through biology and prewired into machines by their designers. The reflective level of emotions places credit or blame upon our experiences. Machines are not yet up to this level of processing, but some day they will be, which will add even more power to their ability to learn and to predict.
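  To make the layered control concrete, here is a toy sketch, in Python, of a visceral-level reflex overriding a behavioral routine in a robotic vacuum cleaner. It is not any vendor’s actual firmware; the sensor reading and threshold are invented purely for illustration.

```python
# A hypothetical illustration of the three levels in a robotic vacuum:
# a hard-wired visceral reflex preempts the learned behavioral routine.
CLIFF_THRESHOLD_MM = 40  # invented drop-off depth that triggers "fear"

def visceral_reflex(cliff_depth_mm: float):
    """Prewired, automatic response: back away from a sharp drop-off."""
    if cliff_depth_mm > CLIFF_THRESHOLD_MM:
        return "stop_and_reverse"
    return None  # no danger sensed; defer to higher levels

def behavioral_routine() -> str:
    """Learned/programmed skill: the normal cleaning pattern."""
    return "continue_spiral_sweep"

def control_step(cliff_depth_mm: float) -> str:
    # The visceral level always gets first say, just as biological
    # fear responses preempt deliberate action.
    return visceral_reflex(cliff_depth_mm) or behavioral_routine()

print(control_step(cliff_depth_mm=10))   # continue_spiral_sweep
print(control_step(cliff_depth_mm=120))  # stop_and_reverse
```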

  The future of everyday things lies in products with knowledge, with intelligence, products that know where they are located and who their owners are and that can communicate with other products and the environment. The future of products is all about the capabilities of machines that are mobile, that can physically manipulate the environment, that are aware of both the other machines and the people around them and can communicate with them all.

  By far the most exciting of our future technologies are those that enter into a symbiotic relationship with us: machine+person. Is the car+driver a symbiosis of human and machine in much the same way as the horse+rider might be? After all, the car+driver splits the processing levels, with the car taking over the visceral level and the driver the reflective level, both sharing the behavioral level in analogous fashion to the horse+rider.

  Just as the horse is intelligent enough to take care of the visceral aspects of riding (avoiding dangerous terrain, adjusting its pace to the quality of the terrain, avoiding obstacles), so too is the modern automobile able to sense danger, controlling the car’s stability, braking, and speed. Similarly, horses learn behaviorally complex routines for navigating difficult terrain or jumping obstacles, for changing canter when required and maintaining appropriate distance and coordination with other horses or people. So, too, does the modern car behaviorally modify its speed, keep to its own lane, brake when it senses danger, and control other aspects of the driving experience.

  FIGURE 2.2

  Horse+rider and car+driver as symbiotic systems. A horse+rider can be treated as a symbiotic system, with the horse providing visceral-level guidance and the rider the reflective level, with both overlapping at the behavioral level. So, too, can a car+driver be thought of as a symbiotic system, with the car increasingly taking over the visceral level, the driver the reflective level. And, once again, with a lot of overlap at the behavioral level. Note that in both cases, the horse or the intelligent car also tries to exert control at the reflective level.

  Reflection is mostly left to the rider or driver, but not always, as when the horse decides to slow down or go home, or, not liking the interaction with its rider, decides to throw the rider off or simply to ignore him or her. It is not difficult to imagine some future day when the car will decide which route to take and steer its way there, or pull off the road when it thinks it is time to purchase gasoline or for its driver to eat a meal or take a break—or, perhaps, when it has been enticed to do so by messages sent to it by the roadway and commercial establishments along the path.

  Car+driver is a conscious, emotional, intelligent system. When automobiles were first available at the very start of the twentieth century, the human driver provided all processing levels: visceral, behavioral, and reflective. As the technology improved, more and more visceral elements were added, so that the car took care of internal engine and fuel adjustments and shifting. With antiskid braking, stability controls, cruise control, and now lane-keeping functionality, the car has taken on more and more of the behavioral side of driving. So, with most modern cars, the car provides the visceral part, and the driver the reflective part, with both active at the behavioral level.

 
