by Don Norman
It’s late in the evening in Boulder, Colorado, and Mike Mozer is sitting in his living room, reading. After a while he yawns, stretches, then stands up and wanders toward his bedroom. The house, ever alert to his activity, decides that he is going to bed, so it turns off the living room lights and turns on the lights in the entry, the master bedroom, and the master bath. It also turns the heat down. Actually, it is the computer system in his house that continually monitors Mozer’s behavioral patterns and adjusts the lighting, heating, and other aspects of the home to prepare for his anticipated behavior. This is no ordinary program. It operates through what is called a “neural network,” designed to mimic the pattern-recognition and learning abilities of human neurons and, thus, of the human brain. Not only does it recognize Mozer’s activity patterns, but it can appropriately anticipate his behavior most of the time. A neural network is a powerful pattern recognizer, and because it examines the sequence of his activities, including the time of day at which they occur, it predicts both what he will do and when. As a result, when Mozer leaves the house to go to work, it turns off the heat and hot water heater in order to save energy, but when its circuits anticipate his return, it turns them back on again so that the house will be comfortable when he enters.
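The account above gives no implementation details of Mozer’s neural network. Purely as an illustration of the underlying idea of predicting the next activity from the current activity and the time of day, here is a minimal sketch that substitutes simple frequency counting for the actual network (the class and activity names are invented for the example):

```python
from collections import Counter, defaultdict

class ActivityPredictor:
    """Toy stand-in for the Adaptive House's neural network:
    predicts the next activity from (current activity, hour of day)
    using frequency counts learned from observation."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, current, hour, nxt):
        # Record one observed transition, keyed by activity and time of day.
        self.counts[(current, hour)][nxt] += 1

    def predict(self, current, hour):
        # Return the most frequently observed next activity, or None.
        seen = self.counts[(current, hour)]
        return seen.most_common(1)[0][0] if seen else None

house = ActivityPredictor()
for _ in range(5):
    house.observe("reading", 23, "going to bed")
house.observe("reading", 23, "snacking")

print(house.predict("reading", 23))  # prints "going to bed"
```

A real system like Mozer’s must also cope with noisy sensors and novel situations, which is precisely where a trainable network earns its keep over a lookup table like this one.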
Is this house smart? Intelligent? The designer of this automated system, Mike Mozer, doesn’t think so: he calls it “adaptive.” It is instructive to look at Mozer’s experience as we try to understand just what it means to be intelligent. The house has over seventy-five sensors that measure each room’s temperature, ambient light, sound levels, door and window positions, the weather outside and amount of sunlight, and any movements by inhabitants. Actuators control the heating of the rooms and the hot water, lighting, and ventilation. The system contains more than five miles of cabling. Neural network computer software can learn, so the house is continually adapting its behavior according to Mozer’s preferences. If it selects a setting that is not appropriate, Mozer corrects the setting, and the house then changes its behavior. One journalist described how this happens:
Mozer demonstrated the bathroom light, which turned on to a low intensity as he entered. “The system picks the lowest level of the light or heat it thinks it can get away with in order to conserve energy, and I need to complain if I am not satisfied with its decision,” he said. To express his discomfort, he hit a wall switch, causing the system to brighten the light and to “punish itself” so that the next time he enters the room, a higher intensity will be selected.
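The “punish itself” scheme Mozer describes can be sketched as a simple learning loop: start at the dimmest setting the system thinks it can get away with, and raise the remembered setting each time the occupant complains. This is only a schematic illustration (the class, levels, and step size are invented), not Mozer’s actual reinforcement mechanism:

```python
class LightController:
    """Toy sketch of the 'punish itself' scheme: start dim to save
    energy, and raise the remembered setting whenever the occupant
    complains via the wall switch. Levels run from 0 (off) to 10."""

    def __init__(self, start_level=2, step=2, max_level=10):
        self.level = start_level   # lowest level it thinks it can get away with
        self.step = step           # how much each complaint raises it
        self.max_level = max_level

    def enter_room(self):
        # Select the current learned intensity when someone enters.
        return self.level

    def complain(self):
        # The wall switch both brightens the light now and "punishes"
        # the system, so future entries start brighter.
        self.level = min(self.level + self.step, self.max_level)
        return self.level

bath = LightController()
print(bath.enter_room())  # 2: dim, energy-saving guess
print(bath.complain())    # 4: brighter now, and remembered
print(bath.enter_room())  # 4: the next entry starts at the higher level
```

Note the built-in asymmetry: the system drifts toward saving energy and relies on human complaints to push it back, which is exactly why the house “trains its owner” as much as the reverse.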
The house trains its owner as much as the owner trains the house. When working late at night at the university, Mozer would sometimes realize that he had to get home: his house was expecting him, dutifully turning on the heat and hot water, getting ready for his arrival. This raises an interesting question: why can’t he just call his home and tell it that he is going to be late? Similarly, his attempt to discover and fix some faulty hardware led to a system that also could detect when someone dawdled too long in the bathroom. “Long after the hardware problem was resolved,” said Mozer, “we left the broadcast message in the system, because it provided useful feedback to the inhabitants about how their time was being spent.” So, now the house warns inhabitants when they spend too much time in the bathroom? This home sounds like a real nag.
Is this an intelligent house? Here are some more comments by Mozer himself on the limits to the control system’s intelligence:
The Adaptive House project has inspired much brainstorming about ways to extend the project further, most of which seem entirely misguided. One idea often mentioned is controlling home entertainment systems—stereos, TVs, radios, etc. The problem with selection of video and audio in the home is that the inhabitants’ preferences will depend on state of mind, and few cues are directly available from the environment—even using machine vision—that correlate with state of mind. The result is likely to be that the system mispredicts often and annoys the inhabitants more than it supports them. The annoyance is magnified by the fact that when inhabitants seek audio or video entertainment, they generally have an explicit intention to do so. This intention contrasts with, say, temperature regulation in a home, where the inhabitants do not consciously consider the temperature unless it becomes uncomfortable. If inhabitants are aware of their goals, achieving the goal is possible with a simple click of a button, and errors—such as blasting the stereo when one is concentrating on a difficult problem—are all but eliminated. The benefit/cost trade-off falls on the side of manual control.
If only the house could read the mind of its owner. It is this inability to read minds, or, as the scientists prefer to say, to infer a person’s intentions, that defeats these systems. Here the problem goes far beyond the lack of common ground, as anyone who has ever lived with another person knows. There may be much sharing of knowledge and activities, but it is still difficult to know exactly what another person intends to do. In theory, the mythical British butler could anticipate the wants and desires of his master, although my knowledge of how well this succeeds comes from novels and television—not the most reliable sources. Even here, much of the butler’s success comes about because his employers’ lives are well regulated by the pace of social events, so that the schedule dictates which tasks need doing.
Automatic systems that decide whether or not to do some activity can, of course, be right or wrong. Failures come in two forms: misses and false alarms. A miss means that the system has failed to detect a situation and therefore failed to perform the desired action. A false alarm means that the system has acted when it shouldn’t have. Think of an automated fire-detection system. A miss is a failure to signal a fire when it happens. A false alarm is the signaling of a fire even though none is present. These two forms of error have different costs.
A failure to detect a fire can have disastrous consequences, but false detections can also create problems. If the only action taken by the fire detector is to sound an alarm, a false alarm is mostly just a nuisance, but it also diminishes trust in the system. But what if the false alarm turns on the sprinkler system and notifies the fire department? Here the cost can be enormous, especially if the water damages valuable objects. If a smart home misreads the intentions of its occupants, the costs of misses and false alarms are usually small. If the music system suddenly comes on because the house thinks the resident would like to hear music, it is annoying but not dangerous. If the system diligently turns up the heat every morning, even though the inhabitants are away on vacation, there are no serious consequences. In an automobile, however, if the driver relies on the car to slow down every time it gets too close to the car in front, a miss can be life threatening. And a false alarm, where the car veers because it thinks the driver is wandering out of his lane or brakes because it incorrectly thinks something is in front of it, can be life threatening if nearby vehicles are surprised by the action and fail to respond quickly enough.
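The trade-off described above is the classic one from signal-detection theory: whether to act depends on how likely the event is and on the asymmetric costs of the two failure modes. As a hypothetical illustration (not how any of these systems actually decides), the reasoning can be written as a one-line expected-cost rule:

```python
def should_act(p_event, cost_miss, cost_false_alarm):
    """Decide whether to act, given the estimated probability of the
    event and the asymmetric costs of the two failure modes.
    Acting when nothing is there incurs cost_false_alarm; staying
    silent when the event is real incurs cost_miss."""
    expected_cost_of_acting = (1 - p_event) * cost_false_alarm
    expected_cost_of_waiting = p_event * cost_miss
    return expected_cost_of_waiting > expected_cost_of_acting

# A detector that only beeps: false alarms are cheap, so act readily.
print(should_act(0.05, cost_miss=1000, cost_false_alarm=1))     # True

# One that floods the room with sprinklers: demand stronger evidence.
print(should_act(0.05, cost_miss=1000, cost_false_alarm=5000))  # False
```

The same weak evidence that justifies a beep does not justify the sprinklers, which is why raising the cost of a false alarm should also raise the threshold for action.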
Whether false alarms are dangerous or simply annoying, they diminish trust. After a few false alarms, the alarm system will be disregarded. Then, if there is a real fire, the inhabitants are apt to ignore the warning as “just another false alarm.” Trust develops over time and is based on experience, along with continual reliable interaction.
The Mozer home system works for its owner because he is also the scientist who built it, so he is more forgiving of problems. Because he is a research scientist and an expert on neural networks, his home serves as a research laboratory. It is a wonderful experiment and would be great fun to visit, but I don’t think I would want to live there.
Homes That Make People Smart
In sharp contrast to the fully automated home that tries to do things automatically, a group of researchers at Microsoft Research Cambridge (England) designs homes with devices that augment human intelligence. Consider the problem of coordinating the activities of a home’s inhabitants—say, a family with two working adults and two teenagers. This presents a daunting problem. The technologist’s traditional approach in dealing with multiple agendas is
to imagine intelligent calendars. For example, the home could match up the schedules of every house member to determine when meals should be scheduled and who should drive others to and from their activities. Just imagine your home continually communicating with you—emailing, instant messaging, text messaging, or even telephoning—reminding you of your appointments, when you need to be home for dinner, when to pick up other family members, or even when to stop at the market on the way home.
Before you know it, your home will expand its domain, recommending articles or television shows it thinks might interest you. Is this how you want to lead your life? Many researchers apparently think so. This is the approach followed by most developers of smart homes in research facilities at universities and industrial research laboratories around the world. It is all very efficient, all very modern, and most unhuman.
The research team in Microsoft’s Cambridge laboratories started with the premise that people make homes smart, not technology. They decided to support each particular family’s solution to their own needs, not to automate any one solution. The team spent time doing what’s known as ethnographic research, observing home dwellers, watching real, everyday behavior. The goal is not to get in the way, not to change anything that is happening, but to be unobtrusive, simply watching and recording how people go about their normal activities.
A comment about research methods: you probably think that if a crew of scientists showed up at your home with voice recorders, cameras, and video camcorders, they could hardly be considered unobtrusive. In fact, the typical family adapts to experienced researchers and goes about its usual business, including family squabbles and disagreements. This “applied ethnography,” or “rapid ethnography,” is different from the ethnographic work of anthropologists who spend years in exotic locations carefully observing a group’s behavior. When applied scientists, engineers, and designers study the culture of the modern household in order to provide assistance, the goal is first to discover the places where people have difficulties, then to determine things that might aid them in those places. For this purpose, the designers are looking for large phenomena, major points of frustration or annoyance, where simple solutions can have a major, positive effect. This approach has been quite successful.
Family members communicate with one another through a wide variety of means. They write messages and notes that they leave wherever they think they might be noticed—on chairs, desktops, and computer keyboards, pasted on computer screens, or placed on stairs, beds, or doors. Because the kitchen has become the central gathering point for many families, one of the most common places to post notices is on the refrigerator. Most refrigerators are made of steel, and steel affords a hold for magnets. The original discoverers of magnets would be amazed to see that in today’s home, the major use of magnets is to affix notes and announcements, children’s drawings, and photographs to the front and sides of the refrigerator. This has spawned a small industry that makes refrigerator magnets, clips, notepads, frames for photos and pens, all to be fastened to the doors and sides.
Expensive refrigerators are often made of stainless steel, or the door is covered with wood paneling, thus destroying the affordance: magnets don’t stick. When this happened to me, I was annoyed to discover that an unintended consequence of the move to wood was the loss of my home communication center. Post-it notes still work but are aesthetically unacceptable on such appliances. Fortunately, entrepreneurs have rushed to fill the void, creating bulletin boards that can be mounted more discreetly in the kitchen, and some of these have steel surfaces that provide a receptive home for magnets.
The very popularity of the refrigerator creates a problem, as you can see in Figure 5.1. Too many announcements, photographs, and newspaper clippings make it difficult to tell when a new item has been added. In addition, the refrigerator is not always the most appropriate place for either the note sender or its intended recipient. The Microsoft team developed a series of “augmented” note devices, which included a set of “reminding magnets.” One form of magnet glows gently for a period after being moved, drawing people’s attention to the notice underneath. Another set of magnets is labeled by the day of the week, each of which glows subtly when the specific day has arrived, attracting attention without being bothersome. Thus, the “garbage pickup is Wednesday morning” magnet can be pinned to the refrigerator with the Tuesday magnet.
The fixed location of the refrigerator was overcome through cellular and internet technology. The team devised the notepad shown in Figure 5.2A. It can be placed in the kitchen adjacent to the refrigerator (or anywhere in the house, for that matter) and allows messages to be added from anywhere through e-mail or a cell phone’s text-messaging facility. Thus, the message board can display short messages, either to specific family members or to everyone. These messages can be posted either by handwriting, using a stylus on the display screen, sending an e-mail, or text-messaging from a mobile telephone (e.g., “Stuck in meeting—start dinner without me”). Figure 5.2B shows the message pad in action. One of the family’s children, Will, has sent a text message asking to be picked up. He sent it to the central message board rather than to any particular person because he didn’t know who might be available. Tim responded, adding a handwritten note so that other family members will know that the situation is being taken care of. This system accomplishes its goal of making people smarter by providing them with the tools they need but still letting them decide if, when, and how to make use of this assistance.
FIGURE 5.1
The refrigerator and “reminding magnets” studied by the Microsoft Research group at Cambridge, England. The top photograph is a typical refrigerator door being used as a bulletin board. When there are so many notes, it is difficult to find the relevant ones. The bottom picture shows the smart magnets: Put the “Wednesday” magnet over the note relevant to that day, and when Wednesday comes, the magnet starts glowing, reminding without annoying.
(Photographs courtesy of the Socio-Digital Systems Group,
Microsoft Research Cambridge)
Other experimental smart homes have showcased a variety of related approaches. Imagine that you are in the middle of preparing to bake a cake when the telephone rings. You answer the phone, but when you return, how do you know where you left off? You recall adding flour to the bowl but aren’t sure how many scoops. In the Georgia Institute of Technology’s Aware Home, the “Cook’s Collage” acts as a reminder. A television camera at the bottom of a cupboard photographs the cooking actions, displaying the steps taken. If you are interrupted in the middle of cooking, the display shows images of the last actions performed so you can readily remind yourself where you were. The philosophy here is very similar to that behind Microsoft’s work: augmentative technology should be voluntary, friendly, and cooperative. Use it or ignore it, as you wish.
FIGURE 5.2
Microsoft Research kitchen display. A: The display can be put anywhere, here shown in the kitchen. B: One of the children (Will) has sent a text message from his mobile phone to the message center asking to be picked up (but not saying where he is). Another family member, Tim, has responded, using a stylus to write a note on the display so the rest of the family knows what is happening.
(Photographs courtesy of the Socio-Digital Systems Group,
Microsoft Research Cambridge)
Notice the important distinction between the devices of the Cambridge and Georgia Tech projects and those of the traditional smart home. Both groups of researchers could have tried to make the devices intelligent. In Cambridge, they could have made them sense who was in the room and change the displays accordingly, or they could have tried to read people’s diaries and calendars, deciding for themselves what events to remind them of, what time they should be leaving the home for their next appointment. This is, indeed, a common preoccupation of researchers in the smart home field. Similarly, the researchers in Atlanta could have made an artificially intelligent assistant that read recipes, prompting
and instructing every step of the way, or perhaps even an automated device that would make the cake itself. Instead, both groups devised systems that would fit smoothly into people’s life styles. Both systems rely upon powerful, advanced technology, but the guiding philosophy for each group is augmentation, not automation.
Intelligent Things: Autonomous or Augmentative?
The examples of smart homes show two different directions in which research on smart things can move. One is toward intelligent autonomy, systems that attempt to infer the intentions of people. The other is toward intelligent augmentation, providing useful tools but letting people decide when and where they are to be used. Both systems have their merits, and both have their problems.
Augmentative tools are comforting, for they leave the decisions about activities to people. Thus, we can take them or leave them, choosing those that we feel aid our lives, ignoring those that do not. Moreover, because these are voluntary, different people can make different choices, so that people can choose whatever mix of technology suits their life style.
Autonomous devices can be useful when jobs are dull, dangerous, or dirty. Autonomous tools are useful when the task otherwise could not be accomplished. Consider search-and-rescue missions in dangerous situations, for example, into the rubble of buildings after major earthquakes, fires, or explosions. Moreover, in situations where people could do the task, it is often very nice when someone else does the work for us, even when that someone else is a machine.
Still, some tasks are simply not yet ready to be automated. “Automation always looks good on paper. . . . Sometimes you need real people,” read the headline of a New York Times article commenting on the Denver, Colorado, airport’s failed attempt to get its automated baggage-handling system to work properly. This system, the article claimed, “immediately became famous for its ability to mangle or misplace a good portion of everything that wandered into its path.” After ten years of trying and the expenditure of hundreds of millions of dollars, the airport gave up and dismantled the system.