The Design of Future Things

by Don Norman


  FIGURE 3.2

  Loose-rein guidance of a horse and carriage. With an intelligent horse providing the power and guidance, the driver can relax and not even pay much attention to the driving. This is loose-rein control, where the horse has taken over.

  Photograph by the author in Brugge, Belgium.

  When I drove the automobile simulator at Braunschweig, the difference between “loose-” and “tight-rein” control was apparent. Under tight-rein conditions, I did most of the work, determining the force on the accelerator, brake, and steering wheel, but the car nudged me this way or that, trying to keep me on a steady course within the highway’s lane boundaries. If I got too close to the car ahead of me, the steering wheel pushed back, indicating that I should back off. Similarly, if I lagged too far behind, the steering wheel moved forward, urging me to speed up a bit. Under loose-rein conditions, the car was more aggressive in its actions, so much so that I hardly had to do anything at all. I had the impression that I could close my eyes and simply let the car guide me through the driving. Unfortunately, during the limited time available for my visit, I wasn’t able to try everything I now realize I should have. The one thing missing from the demonstration was a way for the driver to select how much control to give to the system. This transition in the amount of control is important, for when an emergency arises, it may be necessary to transfer control very rapidly, without detracting from the attention required to deal with the situation.

  The horse+rider conceptualization provides a powerful metaphor for the development of machine+human interfaces, but the metaphor alone is not enough. We need to learn more about these interfaces, and it is reassuring to see that research has already begun, with scientists studying how a person’s intentions might best be communicated to the system, and vice versa.

  One way for the system to communicate its goals and intentions to a person is through an explicit presentation of the strategy that is being followed. One research group, Christopher Miller and his colleagues, proposes that systems share a “playbook” with everyone involved. The group describes their work as “based on a shared model of the tasks in the domain. This model provides a means of human-automation communication about plans, goals, methods and resource usage—a process akin to referencing plays in a sports team’s playbook. The Playbook enables human operators to interact with subordinate systems with the same flexibility as with well-trained human subordinates, thus allowing for adaptive automation.” The idea is that the person can convey intentions by selecting a particular playbook for the automatic systems to follow, or, if the automation is in control, it shows the playbook it has selected. These researchers are concerned with the control of airplanes, so the playbook might specify how the system will control takeoff and the climb to cruising altitude. Whenever the machine is working autonomously, controlling what is happening, it always displays the play that it is following, letting the human understand how the immediate actions fit into the overall scheme and, if necessary, change the choice of plays. A critical component here is the form in which the play is shown. A written description or a list of planned actions is not likely to be acceptable, requiring too much effort to process. For the playbook approach to be effective, especially for everyday people who do not wish to undergo training to accommodate the intelligent objects in their homes, a simple means of displaying the plays is essential.

  I’ve seen similar concepts at work on the displays of large commercial copiers, where the display clearly shows the “playbook” being followed: perhaps 50 copies, duplex (two-sided) copying, stapled, and sorted. I have seen nice graphical depictions: the image of a piece of paper turns over to show printing on both sides; the way each printed page is combined with the others makes it easy to tell whether the job has been set up properly, with the page flipped along the short edge or the long one; and the final stapled documents are shown stacked neatly in a pile, with the height of the pile indicating how far the job has progressed.
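  The copier display amounts to a tiny data model with two views: the selected “play” (the job options) and the progress through it. A minimal sketch of that idea in Python — all names here (`CopyJob`, `playbook`, `progress`) are hypothetical illustrations, not drawn from any real copier’s interface:

```python
from dataclasses import dataclass

@dataclass
class CopyJob:
    """The 'play' the machine is following, plus how far along it is."""
    copies: int
    duplex: bool
    stapled: bool
    sorted_output: bool
    completed: int = 0  # copies finished so far

    def playbook(self) -> str:
        # Render the selected play as a compact, human-readable summary,
        # the textual analogue of the copier's graphical depiction.
        options = [name for name, on in [("duplex", self.duplex),
                                         ("stapled", self.stapled),
                                         ("sorted", self.sorted_output)] if on]
        return f"{self.copies} copies ({', '.join(options)})"

    def progress(self) -> float:
        # Fraction complete: the analogue of the growing pile of pages.
        return self.completed / self.copies

job = CopyJob(copies=50, duplex=True, stapled=True, sorted_output=True,
              completed=20)
print(job.playbook())  # -> 50 copies (duplex, stapled, sorted)
print(job.progress())  # -> 0.4
```

  The point of the sketch is only that the machine continuously exposes both its strategy and its position within it, rather than a raw list of pending actions.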

  When automation is operating relatively autonomously under loose-rein conditions, display schemes similar to the playbook are especially relevant to allow people to determine just what strategy the machine is following and how far along it is in its actions.

  The Bicycles of Delft

  Delft is a charming small town near the North Sea coast of the Netherlands, home of the Technische Universiteit Delft, or in English, the Delft University of Technology. The streets are narrow, with several major canals encircling the business district. The walk from the hotel section to the university is picturesque, meandering past and over canals, through the narrow winding streets. The danger comes not from automobiles but from the swarms of bicycles, weaving their way at great speeds in all directions and, to my eyes, appearing out of nowhere. In Holland, bicycles have their own roadways, separate from the roads and pedestrian paths. But not in the central square of Delft. There, bicyclists and pedestrians mix.

  FIGURE 3.3

  Holland is the land of multiple bicycles, which, although environmentally friendly, present a traffic hazard to people trying to walk across the square. The rule is: Be predictable. Don’t try to help the bicyclists. If you stop or swerve, they will run into you.

  (Photograph by the author.)

  “It’s perfectly safe,” my hosts kept reassuring me, “as long as you don’t try to help out. Don’t try to avoid the bikes. Don’t stop or swerve. Be predictable.” In other words, maintain a steady pace and a steady direction. The bicyclists have carefully calculated their course so as to miss one another and all the pedestrians under the assumption of predictability. If pedestrians try to outmaneuver the bicyclists, the results will be disastrous.

  The bicyclists of Delft provide a model for how we might interact with intelligent machines. After all, here we have a person, the walker, interacting with an intelligent machine, a bicycle. In this case, the machine is actually the couplet of bicycle+person, with the person providing both the motive power and the intelligence. Both the person walking and the bicycle+person have the full power of the human mind controlling them; yet, these two cannot coordinate successfully. The combination bicycle+person doesn’t lack intelligence: it lacks communication. There are many bicycles, each traveling quite a bit faster than the pace of the walker. It isn’t possible to talk to the bicyclists because, by the time they are close enough for conversation, it is too late to negotiate. In the absence of effective communication, the way to interact is for the person walking to be predictable, so that no coordination is required: only one of the participants, the bicycle+person, has to do the planning; only one has to act.

  This story provides a good lesson for design. If a person cannot coordinate activities with an intelligent, human-driven machine, the bicycle+person, why would we ever think the situation would be any easier when the coordination must take place with an intelligent machine? The moral of this story is that we shouldn’t even try. Smart machines of the future should not try to read the minds of the people with whom they interact, either to infer their motives or to predict their next actions. The problem with doing this is twofold: first, they probably will be wrong; second, doing this makes the machine’s actions unpredictable. The person is trying to predict what the machine is going to do while, at the same time, the machine is trying to guess the actions of the person—a sure guarantee of confusion. Remember the bicycles of Delft. They illustrate an important rule for design: be predictable.

  Now comes the next dilemma: which should be the predictable element, the person or the intelligent device? If the two elements were of equal capability and equal intelligence, it wouldn’t matter. This is the case with the bicyclists and pedestrians. The intelligence of both comes from human beings, so it really doesn’t matter whether it is the bicyclists who are careful to act predictably or the pedestrians. As long as everyone agrees who takes which role, things will probably work out okay. In most situations, however, the two components are not equal. The intelligence and general world knowledge of people far exceed the intelligence and world knowledge of machines. People and bicyclists share a certain amount of common knowledge, or common ground: their only difficulty is that there is not sufficient time for adequate communication and coordination. With a person and a machine, the requisite common ground does not exist, so it is far better for the machine to behave predictably and let the person respond appropriately. Here is where the playbook idea could be effective, by helping people understand just what rules the machine is following.

  Machines that try to infer the motives of people, that try to second-guess their actions, are apt to be unsettling at best, and in the worst case, dangerous.

  Natural Safety

  The second example illustrates how the accident rate can be reduced by changing people’s perception of safety. Call this “natural” safety, for it relies upon the behavior of people, not on safety warnings, signals, or equipment.

  Which airport has fewer accidents: an “easy” one that is flat, with good visibility and weather conditions (e.g., Tucson, in the Arizona desert) or a “dangerous” one with hills, winds, and a difficult approach (e.g., San Diego, California, or Hong Kong)? Answer—the dangerous ones. Why? Because the pilots are alert, focused, and careful. One of the pilots of an airplane that had a near crash while attempting to land at Tucson told NASA’s voluntary accident reporting system that “the clear, smooth conditions had made them complacent.” (Fortunately, the terrain avoidance system alerted the pilots in time to prevent an accident. Remember the first example that opened chapter 2, where the plane said, “Pull up, Pull up,” to the pilots? That’s what saved them.) The same principle about perceived versus real safety holds with automobile traffic safety. The subtitle of a magazine article about the Dutch traffic engineer Hans Monderman makes the point: “Making driving seem more dangerous could make it safer.”

  People’s behavior is dramatically impacted by their perception of the risk they are undergoing. Many people are afraid of flying but not of driving in an automobile or, for that matter, being struck by lightning. Well, driving in a car, whether as driver or passenger, is far riskier than flying as a passenger in a commercial airline. As for lightning, well, in 2006 there were three deaths in U.S. commercial aviation but around fifty deaths by lightning. Flying is safer than being out in a thunderstorm. Psychologists who study perceived risk have discovered that when an activity is made safer, quite often the accident rate does not change. This peculiar result has led to the hypothesis of “risk compensation”: when an activity is changed so that it is perceived to be safer, people take more risks, thereby keeping the accident rate constant.

  Thus, adding seat belts to cars, or helmets to motorcyclists, or protective padding to football uniforms, or higher, better fitting boots for skiers, or antiskid brakes and stability controls to automobiles leads people to change their behavior to keep risk the same. The same principle even applies to insurance: If they have insurance against theft, people aren’t as careful with their belongings. Forest rangers and mountaineers have discovered that providing trained rescue squads has the tendency to increase the number of people who risk their lives because they now believe that if they get into trouble, they will be rescued.

  Risk homeostasis is the term given to this phenomenon in the literature on safety. Homeostasis is the scientific term for systems that tend to maintain a state of equilibrium, in this case, a constant sense of safety. Make the environment appear safer, goes this hypothesis, and drivers will engage in riskier behavior, keeping the actual level of safety constant. This topic has been controversial since it was first introduced in the 1980s by the Dutch psychologist Gerald Wilde. The controversy surrounds the reasons for the effect and its size, but there is no doubt that the phenomenon itself is real. So, why not put this phenomenon to use in reverse? Why not make things safer by making them look more dangerous than they actually are?

  Suppose that we got rid of traffic safety features: no more traffic lights, stop signs, pedestrian crossings, wider streets, or special bike paths. Instead, we might add roundabouts (traffic circles) and make streets narrower. The idea seems completely crazy; it reverses common sense. Yet, it is precisely what the Dutch traffic engineer Hans Monderman advocates for cities. Proponents of this method use the name “Shared Space” to describe their work with several successful applications across Europe: Ejby in Denmark, Ipswich in England, Ostende in Belgium, Makkinga and Drachten in the Netherlands. This philosophy does not change the need for signals and regulations on high-speed highways, but in small towns and even in restricted districts within large cities, the concept is appropriate. The group reports that in London, England, “Shared Space principles were used for the redesigning of the busy shopping street Kensington High Street. Because of the positive results (a 40% reduction in road accidents) the city council are going to apply Shared Space in Exhibition Road, the central artery in London’s most important museum district.” Here is how they describe their philosophy:

  Shared Space. That is the name of a new approach to public space design that is receiving ever-wider attention. The striking feature is the absence of conventional traffic management measures, such as signs, road marking, humps and barriers, and the mixing of all traffic flows. “Shared Space gives people their own responsibility for what ‘their’ public space will look like and how they are going to behave in it,” says Mr. Hans Monderman, head of the Shared Space Expert Team.

  “The traffic is no longer regulated by traffic signs, people do the regulating themselves. And precisely that is the whole idea. Road users should take each other into account and return to their everyday good manners. Experience shows that the additional advantage is that the number of road accidents decreases in the process.”

  This concept of reverse risk compensation is a difficult policy to follow, and it takes a courageous city administration. Even though it might reduce accidents and fatalities overall, it can’t prevent all accidents, and as soon as there is one fatality, anxious residents will argue for warning signs, traffic lights, special pedestrian paths, and widening of the streets. It is very difficult to sustain the argument that if it looks dangerous, it may actually be safer.

  Why does making something look more dangerous actually make it safer? Several people have taken up the challenge of explaining this result. In particular, the British researchers Elliott, McColl, and Kennedy propose that the following cognitive mechanisms are involved:

  • More complex environments tend to be associated with slower driving speeds, the likely mechanisms being increases in cognitive load and perceived risk.

  • Natural traffic calming, such as a humpback bridge or a winding road, can be very effective in reducing speeds, as well as being more acceptable to drivers. Carefully designed schemes, using the properties of natural traffic calming, have the potential to achieve a similar effect.

  • Emphasizing changes of environment (e.g., highway or village boundaries) can increase awareness, reduce speed, or both.

  • Enclosing a distant view or breaking up linearity can reduce speeds.

  • Creating uncertainty can reduce speeds.

  • Combining measures tends to be more effective than implementing individual ones but can be visually intrusive and may be costly.

  • Roadside activity (e.g., parked vehicles, pedestrians, or a cycle lane) can reduce speeds.

  The leading causes of accidental injuries and death in the home include falls and poisoning. Why not apply the same counterintuitive concept of reverse risk compensation? What if we made dangerous activities look more dangerous? Suppose we simultaneously made bathtubs and showers look more slippery (while actually making them less so). Suppose we designed stairways to look more dangerous than they really are. We might make some ingestible items look more forbidding, especially poisons. Would amplifying the appearance of danger reduce the occurrence of accidents? Probably.

  How might the principles of reverse risk compensation apply to the automobile? Today, the driver is bathed in comfort, acoustically isolated from road noise, physically isolated from vibration, warm, comfortable, listening to music, and interacting with passengers, or perhaps talking on the telephone. (In fact, studies show that talking on a mobile phone while driving, even a hands-free telephone, is just as dangerous as driving while drunk.) There is a distancing from the events, a loss of situation awareness. And with the development of automatic devices that take over stability, braking, and lane keeping, there is an even greater distance from reality.

  Suppose, however, that the driver could be removed from that comfortable position and placed outside, much like the stagecoach driver of an earlier era, exposed to the weather, to the rushing air, to the sights, sounds, and vibrations of the road. Obviously, drivers would not permit us to do this to them, but how can we restore situation awareness without subjecting the driver to the harsh outside environment? Today, through the car’s computers, motors, and advanced mechanical systems, we can control not only how a car behaves but also how it feels to the driver. As a result, we could do a better job of coupling the driver to the situation in a natural manner, without requiring signals that need to be interpreted, deciphered, and acted upon.

  Just imagine how you would feel if, while driving your car, the steering wheel suddenly felt loose, so that it became more difficult to control the car. Wouldn’t you quickly become more alert, more concerned with maintaining a safe passage? What if we deliberately introduced this feeling? Wouldn’t drivers become more cautious? This behavior is certainly possible in the design of some future car. More and more, automobiles are transitioning toward what is called “drive by wire,” where the controls are no longer mechanically connected to anything other than a computer. This is how modern airplanes are controlled, and in many vehicles, the throttle and brakes already work in this way, passing signals to the automobile’s many microprocessors. Someday, steering will be “by wire,” with electric motors or hydraulic mechanisms providing feedback to the driver so that it will feel as if the driver is turning the wheels and feeling the road through the wheel’s vibrations. When we reach this point, then it will be possible to mimic the feel of skidding, or heavy vibration, or even a loose, wobbly steering wheel. The neat thing about smart technology is that we could provide precise, accurate control, even while giving the driver the perception of loose, wobbly controllability.

 
