  The Design of Future Things


  Donald A. Norman

  A MEMBER OF THE PERSEUS BOOKS GROUP NEW YORK

  Copyright © 2007 by Donald A. Norman

  Hardcover first published in 2007 by Basic Books,

  A Member of the Perseus Books Group

  Paperback first published in 2009 by Basic Books

  All rights reserved. Printed in the United States of America. No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, address Basic Books, 387 Park Avenue South, New York, NY 10016-8810.

  Books published by Basic Books are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 255-1514, or e-mail [email protected].

  Designed by Timm Bryson

  Set in 11.5 point Minion

  Library of Congress Cataloging-in-Publication Data

  Norman, Donald A.

  The design of future things / Donald A. Norman.

  p. cm.

  Includes bibliographical references and index.

  ISBN-13: 978-0-465-00227-6

  ISBN-10: 0-465-00227-7

  1. Design, Industrial—Psychological aspects. 2. Human engineering. I. Title.

  TS171.4.N668 2007

  745.2—dc22

  2007035377

  Paperback ISBN: 978-0-465-00228-3

  10 9 8 7 6 5 4 3 2 1

  BOOKS BY DONALD A. NORMAN

  Textbooks

  Memory and Attention: An Introduction to Human Information Processing. (First edition, 1969; second edition, 1976.)

  Human Information Processing. (With Peter Lindsay: First edition, 1972; second edition, 1977.)

  Scientific Monographs

  Models of Human Memory. (Edited, 1970.)

  Explorations in Cognition. (With David E. Rumelhart and the LNR Research Group, 1975.)

  Perspectives on Cognitive Science. (Edited, 1981.)

  User Centered System Design: New Perspectives on Human-Computer Interaction. (Edited with Steve Draper, 1986.)

  Trade Books

  Learning and Memory, 1982.

  The Psychology of Everyday Things, 1988.

  The Design of Everyday Things, 1990 and 2002. (Paperback version of The Psychology of Everyday Things.)

  Turn Signals Are the Facial Expressions of Automobiles, 1992.

  Things That Make Us Smart, 1993.

  The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution, 1998.

  Emotional Design: Why We Love (or Hate) Everyday Things, 2004.

  CD-ROM

  First Person: Donald A. Norman. Defending Human Attributes in the Age of the Machine, 1994.

  Contents

  1 Cautious Cars and Cantankerous Kitchens: How Machines Take Control

  2 The Psychology of People & Machines

  3 Natural Interaction

  4 Servants of Our Machines

  5 The Role of Automation

  6 Communicating with Our Machines

  7 The Future of Everyday Things

  Afterword: The Machine’s Point of View

  Summary of the Design Rules

  Recommended Readings

  Acknowledgments

  Notes

  References

  Index

  CHAPTER ONE

  Cautious Cars and Cantankerous Kitchens

  How Machines Take Control

  I’m driving my car through the winding mountain roads between my home and the Pacific Ocean. Sharp curves and steep drop-offs wind amidst the towering redwood trees, with vistas of the San Francisco Bay on one side and the Pacific Ocean on the other. It’s a wonderful drive, the car responding effortlessly to the challenge, negotiating sharp turns with grace. At least, that’s how I am feeling. But then I notice that my wife is tense: she’s scared. Her feet are braced against the floor, her shoulders hunched, her arms against the dashboard. “What’s the matter?” I ask. “Calm down, I know what I’m doing.”

  Now imagine another scenario. I’m driving on the same winding, mountain road, and I notice that my car is tense: it’s scared. The seats straighten, the seat belts tighten, and the dashboard starts beeping at me. I notice the brakes are being applied automatically. “Oops,” I think, “I’d better slow down.”

  Do you think the idea of a frightened automobile fanciful? Let me assure you, it is not. This behavior already exists on some luxury automobiles—and more is being planned. Stray out of your lane, and some cars balk: beeping, perhaps vibrating the wheel or the seat or flashing lights in the side mirrors. Automobile companies are experimenting with partial correction, helping the driver steer the car back into its own lane. Turn signals were designed to tell other drivers that you are going to turn or switch lanes, but now they are how you tell your own car that you really do wish to turn or change lanes: “Hey, don’t try to stop me,” they say to your car. “I’m doing this on purpose.”

  I was once a member of a panel of consultants advising a major automobile manufacturer. I described how I would respond differently to my wife than to my car. “How come?” asked fellow panelist Sherry Turkle, an MIT professor and an authority on the relationship between people and technology. “How come you listen to your car more than your wife?”

  How come, indeed. Sure, I can make up rational explanations, but they will miss the point. As we start giving the objects around us more initiative, more intelligence, and more emotion and personality, we now have to worry about how we interact with our machines.

  Why do I appear to pay more attention to my car than to my wife? The answer is complex, but in the end, it comes down to communication. When my wife complains, I can ask her why, then either agree with her or try to reassure her. I can also modify my driving so that she is not so disturbed by it. But I can’t have a conversation with my car: all the communication is one way.

  “Do you like your new car?” I asked Tom, who was driving me to the airport after a lengthy meeting. “How do you like the navigation system?”

  “I love the car,” said Tom, “but I never use the navigation system. I don’t like it: I like to decide what route I will take. It doesn’t give me any say.”

  Machines have less power than humans, so they have more authority. Contradictory? Yes, but, oh, so true. Consider who has more power in a business negotiation. If you want to make the strongest possible deal, who should you send to the bargaining table, the CEO or someone at a lower level? The answer is counterintuitive: quite often, the lower-level employee can make the better deal. Why? Because no matter how powerful the opposing arguments, the weak representative cannot close the deal. Even in the face of persuasive arguments, he or she can only say, “I’m sorry, but I can’t give you an answer until I consult with my boss,” only to come back the next day and say, “I’m sorry, but I couldn’t convince my boss.” A powerful negotiator, on the other hand, might be convinced and accept the offer, even if later, there was regret.

  Successful negotiators understand this bargaining ploy and won’t let their opponents get away with it. When I discussed this with a friend, a successful lawyer, she laughed at me. “Hey,” she said, “if the other side tried that on me, I’d call them on it. I won’t let them play that game with me.” Machines do play this game on us, and we don’t have any way of refusing. When the machine intervenes, we have no alternative except to let it take over: “It’s this or nothing,” they are saying, where “nothing” is not an option.

  Consider Tom’s predicament. He asks his car’s navigation system for directions, and it provides them. Sounds simple. Human-machine interaction: a nice dialogue. But notice Tom’s lament: “It doesn’t give me any say.” Designers of advanced technology are proud of the “communication capabilities” they have built into their systems. But closer analysis shows this to be a misnomer: there is no communication, none of the back-and-forth discussion that characterizes true dialogue. Instead, we have two monologues. We issue commands to the machine, and it, in turn, commands us. Two monologues do not make a dialogue.

  In this particular case, Tom does have a choice. The car still functions with the navigation system turned off, so because the system doesn’t give him enough say over the route, he simply doesn’t use it. But other systems do not provide this option: the only way to avoid them is not to use the car. The problem is that these systems can be of great value. Flawed though they may be, they can save lives. The question, then, is how we can change the way we interact with our machines to take better advantage of their strengths and virtues, while at the same time eliminating their annoying and sometimes dangerous actions.

  As our technology becomes more powerful, its failure in terms of collaboration and communication becomes ever more critical. Collaboration means synchronizing one’s activities, as well as explaining and giving reasons. It means having trust, which can only be formed through experience and understanding. With automatic, so-called intelligent devices, trust is sometimes conferred undeservedly—or withheld, equally undeservedly. Tom decided not to trust his navigation system’s instructions, but in some instances, rejecting technology can cause harm. For example, what if Tom turned off his car’s antiskid brakes or the stability control? Many drivers believe they can control the car better than these automatic controls. But antiskid and stability systems actually perform far better than all but the most expert professional drivers. They have saved many lives. But how does the driver know which systems can be trusted?

  Designers tend to focus on the technology, attempting to automate whatever is possible for safety and convenience. Their goal is complete automation, except where this is not yet possible because of technical limitations or cost concerns. These limitations, however, mean that tasks can only be partially automated, so the person must always monitor the action and take over whenever the machine can no longer perform properly. Whenever a task is only partially automated, it is essential that each party, human and machine, know what the other is doing and what is intended.

  Two Monologues Do Not Make a Dialogue

  SOCRATES: You know, Phaedrus, that’s the strange thing about writing. . . . they seem to talk to you as if they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you just the same thing forever.

  —Plato: Collected Dialogues, 1961.

  Two thousand years ago, Socrates argued that the book would destroy people’s ability to reason. He believed in dialogue, in conversation and debate. But with a book, there is no debate: the written word cannot answer back. Today, the book is such a symbol of learning and knowledge that we laugh at this argument. But take it seriously for a moment. Despite Socrates’s claims, writing does instruct because we do not need to debate its content with the author. Instead, we debate and discuss with one another, in the classroom, with discussion groups, and if the work is important enough, through all the media at our disposal. Nonetheless, Socrates’s point is valid: a technology that gives no opportunity for discussion, explanation, or debate is a poor technology.

  As a business executive and as a chair of university departments, I learned that the process of making a decision is often more important than the decision itself. When a person makes decisions without explanation or consultation, people neither trust nor like the result, even if it is the identical course of action they would have taken after discussion and debate. Many business leaders ask, “Why waste time with meetings when the end result will be the same?” But the end result is not the same, for although the decision itself is identical, the way it will be carried out and executed and, perhaps most importantly, the way it will be handled if things do not go as planned will be very different with a collaborating, understanding team than with one that is just following orders.

  Tom dislikes his navigation system, even though he agrees that at times it would be useful. But he has no way to interact with the system to tailor it to his needs. Even if he can make some high-level choices—“fastest,” “shortest,” “most scenic,” or “avoid toll road”—he can’t discuss with the system why a particular route is chosen. He can’t know why the system thinks route A is better than route B. Does it take into account the long traffic signals and the large number of stop signs? And what if two routes barely differ, perhaps by just a minute out of an hour’s journey? He isn’t given alternatives that he might well prefer despite a slight cost in time. The system’s methods remain hidden, so that even if Tom were tempted to trust it, the silence and secrecy promote distrust, just as top-down business decisions made without collaboration are distrusted.
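  To make that hiddenness concrete, here is a hypothetical sketch of how a navigation system might rank routes internally. The factors and weights are invented for illustration, not drawn from any real product; the point is that trade-offs like these live buried in the code, so the driver can never learn why route A beat route B, or that the difference came down to a single minute.

      # Hypothetical route-ranking rule; the factors and weights are
      # invented for illustration, not any real system's algorithm.

      ROUTE_WEIGHTS = {"minutes": 1.0, "toll_dollars": 5.0, "stop_signs": 0.5}

      def route_cost(route):
          """Lower is better. The driver never sees these trade-offs."""
          return sum(ROUTE_WEIGHTS[factor] * value
                     for factor, value in route.items())

      route_a = {"minutes": 60, "toll_dollars": 0, "stop_signs": 12}
      route_b = {"minutes": 59, "toll_dollars": 0, "stop_signs": 13}

      print(route_cost(route_a), route_cost(route_b))  # 66.0 65.5
      # Route B "wins" by half a point, driven by one minute of driving
      # time. The near-tie, and the reasoning behind it, never surface.

  A system like this is not wrong, exactly; it is mute. Every choice it makes is defensible, but none of it is open to inspection or discussion.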

  What if navigation systems were able to discuss the route with the driver? What if they presented alternative routes, displaying them both as paths on a map and as a table showing the distance, estimated driving time, and cost, allowing the driver to choose? Some navigation systems do this, so that the drive from a city in California’s Napa Valley to Palo Alto might be presented like this:

  FROM ST. HELENA, CA TO PALO ALTO, CA

  [Table: three alternative routes, each shown with its distance, estimated driving time, and cost.]

  This is a clear improvement, but it still isn’t a conversation. The system says, “Here are three choices: select one.” I can’t ask for details or seek some modification. I am familiar with all these routes, so I happen to know that the fastest, shortest, cheapest route is also the least scenic, and the most scenic route is not even offered. But what about the driver who is not so knowledgeable? We would never settle for such limited engagement with a human driver. The fact that navigation systems offering drivers even this limited choice of routes are considered a huge improvement over existing systems demonstrates how bad the others are, how far we still have to go.

  If my car decides an accident is imminent and straightens the seat or applies the brakes, I am not asked or consulted; nor am I even told why. Is the car necessarily more accurate because, after all, it is a mechanical, electronic technology that does precise arithmetic without error? No, actually it’s not. The arithmetic may be correct, but before doing the computation, it must make assumptions about the road, the other traffic, and the capabilities of the driver. Professional drivers will sometimes turn off automatic equipment because they know the automation will not allow them to deploy their skills. That is, they will turn off whatever they are permitted to turn off: many modern cars are so authoritarian that they do not even allow this choice.

  Don’t think that these behaviors are restricted to the automobile. The devices of the future will present the same issues in a wide variety of settings. Automatic banking systems already exist that determine whether you are eligible for a loan. Automated medical systems determine whether you should receive a particular treatment or medication. Future systems will monitor your eating, your reading, your music and television preferences. Some systems will watch where you drive, alerting the insurance company, the rental car agency, or even the police if they decide that you have violated their rules. Other systems monitor for copyright violations, making decisions about what should be permitted. In all these cases, actions are apt to be taken arbitrarily, with the systems making gross assumptions about your intentions from a limited sample of your behavior.

  So-called intelligent systems have become too smug. They think they know what is best for us. Their intelligence, however, is limited. And this limitation is fundamental: there is no way a machine can have sufficient knowledge of all the factors that go into human decision making. But this doesn’t mean we should reject the assistance of intelligent machines. As machines start to take over more and more, they need to be socialized; they need to improve the way they communicate and interact and to recognize their limitations. Only then can they become truly useful. This is a major theme of this book.

  When I started writing this book, I thought that the key to socializing machines was to develop better systems for dialogue. But I was wrong. Successful dialogue requires shared knowledge and experiences. It requires appreciation of the environment and context, of the history leading up to the moment, and of the many differing goals and motives of the people involved. I now believe this to be a fundamental limitation of today’s technology, one that prevents machines from full, humanlike interaction. It is hard enough to establish this shared, common understanding with people, so how do we expect to be able to develop it with machines?

  In order to cooperate usefully with our machines, we need to regard the interaction somewhat as we do interaction with animals. Although both humans and animals are intelligent, we are different species, with different understandings and different capabilities. Similarly, even the most intelligent machine is a different species, with its own set of strengths and weaknesses, its own set of understandings and capabilities. Sometimes we need to obey the animals or machines; sometimes they need to obey us.

  Where Are We Going? Who Is in Charge?

  “My car almost got me into an accident,” Jim told me.

  “Your car? How could that be?” I asked.

  “I was driving down the highway using the adaptive cruise control. You know, the control that keeps my car at a constant speed unless there is a car in front, and then it slows up to keep a safe distance. Well, after a while, the road got crowded, so my car slowed. Eventually, I came to my exit, so I maneuvered into the right lane and then turned off the highway. By then, I had been using the cruise control for so long, but going so slowly, that I had forgotten about it. But not the car. I guess it said to itself, ‘Hurrah! Finally, there’s no one in front of me,’ and it started to accelerate to full highway speed, even though this was the off-ramp that requires a slow speed. Good thing I was alert and stepped on the brakes in time. Who knows what might have happened.”
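  Jim’s story is easier to see in code. Below is a minimal sketch of the kind of decision rule an adaptive cruise controller follows; the function, thresholds, and numbers are hypothetical simplifications, not any manufacturer’s actual algorithm. What it shows is that the controller knows only “lead vehicle or no lead vehicle”; it has no concept of an off-ramp.

      # Hypothetical, simplified adaptive-cruise-control rule (Python).
      # Illustration only, not any real manufacturer's algorithm.

      SAFE_GAP_SECONDS = 2.0  # assumed following gap the controller keeps

      def target_speed(set_speed, lead_speed=None, gap_seconds=None):
          """Speed (mph) the controller steers toward. lead_speed and
          gap_seconds are None when no car is detected ahead."""
          if lead_speed is None:
              # No car ahead: resume the driver's set speed. An empty
              # off-ramp looks exactly like an open highway from here.
              return set_speed
          if gap_seconds < SAFE_GAP_SECONDS:
              # Too close: drop below the lead car's speed to open the gap.
              return min(set_speed, lead_speed * 0.9)
          # Comfortable gap: match the lead car, never exceeding set speed.
          return min(set_speed, lead_speed)

      # Crowded highway: set speed 65, traffic crawling at 25, short gap.
      print(target_speed(65, lead_speed=25, gap_seconds=1.5))  # 22.5
      # Jim exits; the lane ahead clears, and the controller resumes 65,
      # even though he is now on a slow off-ramp.
      print(target_speed(65))  # 65

  The rule behaves consistently; what it lacks is the shared context a human passenger would bring. Nothing in its world distinguishes “the road ahead is clear” from “the driver has left the highway.”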

 
