The Design of Future Things


by Don Norman


  The Sound of Boiling Water: Natural, Powerful, and Useful

  The sound of water boiling in a kettle provides a good example of a natural, informative signal. This sound is produced by pockets of heated water moving about, creating sounds that change naturally until, at last, a rapid, “rolling” boil is reached, at which time the teakettle settles down to a continuous, pleasant sound. These sounds allow a person to tell roughly how close the water is to boiling. Now, add a whistle to signal when boiling has taken place, not through some artificial electronic tone but by enclosing the airspace in the spout, letting a small amount of steam escape through the opening. The result is a naturally produced whistle, one that starts slowly, at first weak and unsteady, then progresses to a loud, continuous sound. Does it take some learning to predict how much time is available at each stage of the process? Sure, but the learning is done without effort. After listening to the sounds of boiling water a few times, you get it. No fancy, expensive electronics. Simple, natural sound. Let this be a model for other systems: always try to find some naturally occurring component of the system that can serve as an informative cue about the state of things. Maybe it is a vibration, maybe sound, maybe the way light changes.

  In the automobile, it is possible to isolate the passenger compartment from most of the vibration and sounds. Although this might be a good idea for the passengers, it is a bad idea for the driver. Designers have had to work hard to reintroduce the outside environment in the form of “road feel” to the driver through sound and vibration of the steering wheel. If you use an electric drill, you know how important the sound of the motor and the feel of the drill are to accurate, precise drilling. Many cooks prefer gas ranges because they can more rapidly judge the degree of heat by the appearance of the flame than by the more abstract dials and indicators of the newer types of cooktops.

  So far, all my examples of natural signals come from existing appliances and devices, but what about the world of future things, where autonomous intelligence increasingly takes control? Actually, if anything, these completely automatic devices provide even richer opportunities. The sounds of the little cleaning robot scurrying about the floor remind us that it is busy and let us subtly monitor its progress. Just as the pitch of the vacuum cleaner’s motor naturally rises when items get stuck in its hose, the pitch of the robot’s motors tells us how easy or hard the robot is finding its task. The problems with automation occur when something breaks down, turning the task over to people, often without warning. Well, with naturalistic, continual feedback, there will be warning.

  Implicit Signals and Communication

  Whenever I walk into a research laboratory, I look to see how neat or messy it is. When everything is orderly, everything in its place, I suspect this is a laboratory where not much work is being done. I like to see disorder: that means active, engaged people. Disorder is a natural, implicit sign of activity.

  We leave traces of our activities: footprints in the sand, litter in the trash, books on desks, counters, and even the floor. In the academic field of semiotics, these are called signs or signals. To the reader of detective novels, they are called clues, and ever since the perceptive eye of Sherlock Holmes entered the world of detectives, they have provided the evidence of people’s activities. These nonpurposeful clues are what the Italian cognitive scientist Cristiano Castelfranchi calls “implicit communication.” Castelfranchi defines behaviorally implicit communication as natural side effects that can be interpreted by others. It “does not require a specific learning or training, or transmission,” says Castelfranchi. “It simply exploits perceptual patterns of usual behavior and their recognition.” Implicit communication is an important component of the design of intelligent things because it informs without interruption, annoyance, or even the need for conscious attention.

  Footprints, disorderly research laboratories, underlining and sticky notes on reading matter, the sounds of elevators or of a home’s appliances: all are natural, implicit signals that allow us to infer what is happening, to remain aware of the activities in the environment, to know when it is time to step in and take action and when it is possible to ignore them and continue with whatever we are doing.

  A good example comes from the world of the old-fashioned telephone. In the old days, when making an international phone call, clicks and hisses and noises let you know that progress was being made, and through the differing sounds, you could even learn how well things were progressing. As equipment and technology got better, the circuits became quieter, until they became noise free. Oops, all the implicit clues were gone. People waiting on the line heard silence, which they sometimes interpreted to mean the call had failed, so they hung up. It was necessary to reintroduce sounds into the circuit so people would know that the call was still being processed. “Comfort noise” is what the engineers called it, their condescending way of responding to the needs of their customers. The sounds are far more than “comfort.” They are implicit communication, confirming that the circuit is still active, informing the telephone caller that the system is still in the process of making the connection. And, yes, that implicit confirmation is reassuring, comforting.
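The logic behind comfort noise is simple enough to sketch. The fragment below is a hypothetical illustration, not any real telephony implementation: when no voice frame arrives from the far end, it substitutes a frame of very quiet noise so the listener never hears dead silence and concludes the call has failed. Real systems shape the noise to resemble the caller's actual background; plain white noise is used here only for brevity, and all names and values are invented.

```python
import random

COMFORT_LEVEL = 0.02  # very low amplitude: audible reassurance, not intrusion


def next_output_frame(voice_frame, frame_size=160):
    """Return the audio frame to play to the listener.

    If the far end sent a voice frame, pass it through unchanged.
    If it sent nothing (silence suppression), substitute a frame of
    low-level 'comfort noise' so the line never sounds dead.
    """
    if voice_frame is not None:
        return voice_frame
    return [random.uniform(-COMFORT_LEVEL, COMFORT_LEVEL)
            for _ in range(frame_size)]
```

The design point survives even in this toy form: the substituted noise is implicit communication, confirming that the circuit is alive without demanding attention.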

  Although sound is important for providing informative feedback, there is a downside. Sounds are often annoying. We have eyelids that permit us to shut out scenes we do not wish to watch: there are no earlids. Psychologists have even devised scales of annoyance for rating noise and other sounds. Unwanted sound can disrupt conversations, make it difficult to concentrate, and disturb tranquil moments. As a result, much effort has gone into the development of quieter devices in the office, factory, and home. The automobile has become so quiet that many years ago Rolls-Royce used to brag that “at 60 mph the loudest noise in this new Rolls-Royce comes from the electric clock.”

  Although quiet can be good, it can also be dangerous. Without noise from the environment, the automobile driver can’t be aware of the sirens of emergency vehicles, or the honking of horns, or even the weather. If all roads feel equally smooth, regardless of their actual condition, regardless of how fast the car is traveling, how can the driver know what speed is safe? Sounds and vibrations provide natural indicators, implicit signals of important conditions. In electrically driven vehicles, the engine can be so silent that even the driver might be unaware that it is operating. Pedestrians subconsciously rely upon the implicit sounds of automobiles to keep them informed of nearby vehicles; as a result, they have on occasion been taken unawares by the silent, electrically propelled ones (or by any quiet vehicle, a bicycle, for example). It has become necessary to add a signal inside the automobile to remind the driver that the engine is running (alas, one manufacturer does this most unnaturally by using a beeping sound). It is even more important to add some naturalistic sounds outside the vehicle. The National Federation of the Blind, an organization whose members have already been affected by the silence of these vehicles, has suggested adding something in the car’s wheel well or on the axle that would make a sound when the car was moving. If done properly, this could produce a natural-sounding cue that would vary with the speed of the vehicle, a desirable attribute.
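The suggestion of a sound that varies with speed can be sketched as a simple mapping. The function below is a hypothetical illustration, not any vehicle's actual specification: pitch rises with speed, mimicking the natural cue pedestrians rely on, and the synthetic sound switches off above a speed at which tire and wind noise are audible on their own. Every parameter value here is invented for illustration.

```python
def exterior_sound_pitch_hz(speed_kmh, base_hz=150.0,
                            hz_per_kmh=6.0, cutoff_kmh=30.0):
    """Pitch of a synthetic exterior sound for a quiet vehicle.

    Returns None when no synthetic sound is needed: the car is
    stationary, or it is moving fast enough that natural road
    noise takes over as the implicit signal.
    """
    if speed_kmh <= 0:
        return None  # not moving: nothing to signal
    if speed_kmh > cutoff_kmh:
        return None  # tire and wind noise suffice
    return base_hz + hz_per_kmh * speed_kmh
```

Because the pitch is a continuous function of speed, the cue behaves like the natural one it replaces: listeners can judge not just presence but approach.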

  Because sound can be both informative and annoying, this raises the difficult design problem of understanding how to enhance its value while minimizing its annoyance. In some cases, this can be done by trying to minimize distasteful sounds, lowering their intensity, minimizing the use of rapid transients, and trying to create a pleasant ambience. Subtle variations in this background ambience might yield effective communication. One designer, Richard Sapper, created a kettle whose whistle produced a pleasant musical chord: the musical notes E and B. Note that even annoyance has its virtues: emergency signals, such as those of ambulances, fire trucks, and alarms for fire, smoke, or other potential disasters, are deliberately loud and annoying, the better to attract attention.

  Sound should still be used where it appears to be a natural outgrowth of the interaction, but arbitrary, meaningless sounds are almost always annoying. Because sound, even when cleverly used, can be so irritating, in many cases its use should be avoided. Sound is not the only option: sight and touch provide alternative modalities.

  Mechanical knobs can contain tactile cues, a kind of implicit communication, for their preferred settings. For example, in some rotating tone controls you can feel a little “blip” as you rotate it past the preferred, neutral position. The controls in some showers will not go above a preset temperature unless the user manipulates a button that enables higher temperatures. The “blip” in the tone control allows someone to set it to the neutral position rapidly and efficiently. The stop in the shower serves as a warning that higher temperatures might be uncomfortable, or even dangerous, for some people. Some commercial airplanes use a similar stop on their throttles: when the throttles are pushed forward, they stop at the point where higher throttle setting might damage the engines. In an emergency, however, if the pilot believes it is necessary to go beyond in order to avoid a crash, the pilot can force the throttle beyond the first stopping point. In such a case, damage to the engine is clearly of secondary importance.
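The throttle stop described above has a simple logical core: a soft limit that holds unless the operator deliberately pushes through it. The sketch below is a hypothetical illustration of that idea, with invented names and values; it is not drawn from any real flight-control system.

```python
def apply_stop(requested, soft_limit=0.85, override=False, hard_limit=1.0):
    """Clamp a throttle-style control (0.0 to 1.0) at a soft stop.

    Normally the setting cannot exceed soft_limit, warning the
    operator that going further risks damage. With an explicit
    override (the pilot forcing the throttle past the stop), the
    setting may go up to the true physical limit instead.
    """
    limit = hard_limit if override else soft_limit
    return max(0.0, min(requested, limit))
```

The stop is communication, not prohibition: the resistance itself says "beyond here lies damage," while still leaving the final judgment to the human.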

  Physical marks provide another possible direction. When we read paper books and magazines, we may leave marks of our progress, whether through normal wear and tear or by deliberate folding of pages, insertion of sticky notes, highlighting, underlining, and margin notes. In electronic documents, these cues need not be lost. After all, the computer knows what has been read, what pages have been scrolled to, which sections have been read. Why not make wear marks on the software, letting the reader discover which sections have been edited, commented upon, or read the most? The research team of Will Hill, Jim Hollan, Dave Wroblewski, and Tim McCandless has done just that, adding marks on electronic documents so that viewers can find which sections have been looked at the most. Dirt and wear have their virtues as natural indicators of use, relevance, and importance. Electronic documents can borrow these virtues without the deficits of dirt, squalor, and damage to the material. Implicit interaction is an interesting way to develop intelligent systems. No language, no forcing: simple clues in both directions indicate recommended courses of action.
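The mechanism behind such wear marks is small enough to sketch. The class below is a hypothetical illustration in the spirit of that research, not the team's actual system: it counts how often each section is viewed and maps the counts to a wear intensity that a reader could render as darker marks in a scrollbar or margin. The names are invented for illustration.

```python
from collections import Counter


class ReadWear:
    """Accumulate 'wear' per document section from reading activity."""

    def __init__(self):
        self.views = Counter()

    def record_view(self, section):
        """Note that the reader has visited a section once more."""
        self.views[section] += 1

    def wear_level(self, section):
        """Wear from 0.0 (untouched) to 1.0 (the most-read section),
        normalized against the heaviest wear in the document."""
        if not self.views:
            return 0.0
        return self.views[section] / max(self.views.values())
```

Like dog-eared pages, the marks accumulate as a side effect of ordinary reading: no one has to annotate anything for the signal to appear.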

  Implicit communication can be a powerful tool for informing without annoying. Another important direction is to exploit the power of affordances, the subject of the next section.

  Affordances as Communication

  It started off with an e-mail: Clarisse de Souza, a professor of informatics in Rio de Janeiro, wrote to disagree with my definition of “affordance.” “Affordance,” she told me, “is really communication between the designer and the user of a product.” “No,” I wrote back. “An affordance is simply a relationship that exists in the world: it is simply there. Nothing to do with communication.”

  I was wrong. She was not only right, but she got me to spend a delightful week in Brazil, convincing me, then went on to expand upon her idea in an important book, Semiotic Engineering. I ended up a believer: “Once designs are thought of as shared communication and technologies as media, the entire design philosophy changes radically, but in a positive and constructive way,” is what I wrote about the book for its back cover.

  To understand this discussion, let me back up a bit and explain the original concept of an affordance and how it became part of the vocabulary of design. Let me start with a simple question: how do we function in the world? As I was writing The Design of Everyday Things, I pondered this question: when we encounter something new, most of the time we use it just fine, not even noticing that it is a unique experience. How do we do this? We encounter tens of thousands of different objects throughout our lives, yet in most cases, we know just what to do with them, without instruction, without any hesitation. When faced with a need, we are often capable of designing quite novel solutions; “hacks” they are sometimes called: folded paper under a table leg to stabilize the table, newspapers pasted over a window to block the sun. Years ago, as I pondered this question, I realized that the answer had to do with a form of implicit communication, a form of communication that today we call “affordances.”

  The term affordance was invented by the great perceptual psychologist J. J. Gibson to explain our perceptions of the world. Gibson defined affordances as the range of activities that an animal or person can perform upon an object in the world. Thus, a chair affords sitting and supporting for an adult human, but not for a young child, an ant, or an elephant. Affordances are not fixed properties: they are relationships that hold between objects and agents. Moreover, to Gibson, affordances existed whether they were obvious or not, visible or not, or even whether or not anyone had ever discovered them. Whether or not you knew about them was irrelevant.

  I took Gibson’s term and showed how it can be applied to the practical problems of design. Although Gibson didn’t think they needed to be visible, to me, the critical thing was their visibility. If you didn’t know that an affordance existed, I argued, then it was worthless, at least in the moment. In other words, the ability of a person to discover and make use of affordances is one of the important ways that people function so well, even in novel situations when encountering novel objects.

  Providing effective, perceivable affordances is important in the design of today’s things, whether they be coffee cups, toasters, or websites, but these attributes are even more important for the design of future things. When devices are automatic, autonomous, and intelligent, we need perceivable affordances to show us how we might interact with them and, equally importantly, how they might interact with the world. We need affordances that communicate: hence the importance of de Souza’s discussion with me and of her semiotic approach to affordances.

  The power of visual, perceivable affordances is that they guide behavior, and in the best of cases, they do so without the person’s awareness of the guidance—it just feels natural. This is how we can interact so well with most of the objects around us. They are passive and responsive: they sit there quietly, awaiting our activity. In the case of appliances, such as a television set, we push a button, and the television set changes channels. We walk, turn, push, press, lift, and pull, and something happens. In all these cases, the design challenge is to let us know beforehand what range of operations is possible, what operation we need to perform, and how we go about doing it. During the carrying out of the action, we want to know how it is progressing. Afterward, we want to know what change took place.

  This description pretty much describes all the designed objects with which we interact today, from household appliances to office tools, from computers to older automobiles, from websites and computer applications to complex mechanical devices. The design challenges are large and not always carried out successfully, hence our frustrations with so many everyday objects.

  Communicating with Autonomous, Intelligent Devices

  The objects of the future will pose problems that cannot be solved simply by making the affordances visible. Autonomous, intelligent machines pose particular challenges, in part because the communication has to go both ways, from person to machine and from machine to person. How do we communicate back and forth with these machines? To answer this question, let’s look at the wide range of machine+person coupling—an automobile, bicycle, or even a horse—and examine how that machine+person entity communicates with another machine+person entity.

  In chapter 1, I described my discovery that my description of the symbiotic coupling of horse and rider was a topic of active research by scientists at the National Aeronautics and Space Administration’s (NASA) Langley Research Center in Virginia and the Institut für Verkehrsführung und Fahr in Braunschweig, Germany. Their goal, like mine, is to enhance human-machine interaction.

  When I visited Braunschweig to learn about their research, I also learned more about how to ride a horse. A critically important aspect of both horseback riding and of a driver’s controlling a horse and carriage, Frank Flemisch, the director of the German group, explained to me, is the distinction between “loose-rein” and “tight-rein” control. Under tight reins, the rider controls the horse directly, with the tightness communicating this intention to the horse. In loose-rein riding, the horse has more autonomy, allowing the rider to perform other activities or even to sleep. Loose and tight are the extremes on a continuum of control, with various intermediate stages. Moreover, even in tight-rein control, where the rider is in control, the horse can balk or otherwise resist the commands. Similarly, in loose-rein control, the person can still provide some oversight using the reins, verbal commands, pressure from the thighs and legs, and heel kicks.

  An even closer analog of the interaction between horse and driver is that of a wagon or carriage, as in Figure 3.2. Here, the driver is not as tightly coupled to the horse as the rider who sits on its back, so this is more similar to the average, nonprofessional driver and a modern automobile. The coupling between horse and driver on the wagon, or driver and automobile, is restricted. Even here, though, the continuum between “tight-rein” and “loose-rein” control still applies. Note how the degree of animal autonomy or of human control is communicated by exploiting the implicit communication made possible through the affordances of the reins. Combining implicit communication with affordances is a powerful, very natural concept. This aspect of working with a horse is the critical component that can be borrowed in the design of machine+human systems—in designing the system so that the amount of independence and interaction can vary in a natural manner, capitalizing upon the affordances of the controller and the communicative capabilities it provides.
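The continuum from tight to loose rein can be sketched as a single blending parameter. The function below is a hypothetical illustration of that control-sharing idea, not any research group's actual controller: a "rein tension" between 0 and 1 determines how much of the human's command versus the automation's command reaches the vehicle. The names and the linear blend are my own simplification.

```python
def blended_command(human_cmd, machine_cmd, rein_tension):
    """Share control authority between human and automation.

    rein_tension = 1.0: tight rein, the human's command dominates.
    rein_tension = 0.0: loose rein, the automation's command dominates.
    Intermediate values mix the two, giving the continuum of
    intermediate stages between tight- and loose-rein control.
    """
    t = max(0.0, min(1.0, rein_tension))
    return t * human_cmd + (1.0 - t) * machine_cmd
```

Because the blend is continuous, handing over or taking back control is a gradual negotiation, like tightening or slackening the reins, rather than an abrupt switch between "manual" and "automatic."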

 
