About Face 3: The Essentials of Interaction Design, by Alan Cooper, Robert Reimann, and David Cronin
Part II: Designing Behavior and Form
Chapter 9: Platform and Posture
Use soft-keyboard input sparingly. It may be tempting to make use of an onscreen keyboard for entering data on touch screen kiosks. However, this input mechanism should only be used to enter very small amounts of text. Not only is it awkward for the user, but it typically results in a thick coating of fingerprints on the display.
Avoid drag-and-drop. Drag-and-drop can be very difficult for users to master on a touch screen, making it inappropriate for kiosk users who will never spend enough time to master demanding interaction idioms. Scrolling of any kind should also be avoided on kiosks except when absolutely necessary.
Some kiosks make use of hardware buttons mapped to onscreen functions in lieu of touch screens. As in handheld systems, the key concern is that these mappings remain consistent, with similar functions mapped to the same buttons from screen to screen. These buttons should also not be placed so far from the screen, or arranged in such a way, that the mapping becomes unclear (see Chapter 10 for a more detailed discussion of mapping issues). In general, if a touch screen is feasible, it should be strongly considered in favor of mapped hardware buttons.
Postures for kiosks
The large, full-screen nature of kiosks would appear to bias them towards sovereign posture, but there are several reasons why the situation is not quite that simple.
First, users of kiosks are often first-time users (with some obvious exceptions, such as ATM users and users of ticket machines for public transport), and are in most cases not daily users. Second, most people do not spend any significant amount of time in front of a kiosk: They perform a simple transaction or search, get the information they need, and then move on. Third, most kiosks employ either touch screens or bezel buttons to the side of the display, and neither of these input mechanisms supports the high data density you would expect of a sovereign application.
Fourth, kiosk users are rarely comfortably seated in front of an optimally placed monitor, but are standing in a public place with bright ambient light and many distractions. These user behaviors and constraints should bias most kiosks towards transient posture, with simple navigation, large, colorful, engaging interfaces with
clear affordances for controls, and clear mappings between hardware controls (if any) and their corresponding software functions. As in the design of handhelds, floating windows and dialogs should be avoided; any such information or behavior is best integrated into a single, full screen (as in sovereign-posture applications).
Kiosks thus tread an interesting middle ground between the two most common desktop postures.
Because transactional kiosks often guide users through a process or a set of information screen by screen, contextual orientation and navigation are more important than global navigation. Rather than helping users understand where they are in the system, help them to understand where they are in their process. It’s also important for transactional kiosks to provide escape hatches that allow users to cancel transactions and start over at any point.
DESIGN PRINCIPLE: Kiosks should be optimized for first-time use.
Educational and entertainment kiosks vary somewhat from the strict transient posture required of more transactional kiosks. Here, exploration of the kiosk environment is more important than the completion of a single transaction or search, so greater data density and more complex interactions and visual transitions can sometimes be introduced to positive effect. Even so, the limitations of the input mechanisms need to be carefully respected, lest users lose the ability to successfully navigate the interface.
Designing for television-based interfaces
Television-based interfaces such as TiVo and most cable and satellite set-top boxes rely on user interaction through a remote control that is typically operated from across the room. Unless the remote control uses radio-frequency communications (most use one-way infrared), the user will also need to point the remote towards the TV and set-top box. All of this creates challenges and limitations in designing effective information displays and controls for system navigation and operation.
Use a screen layout and visual design that can be easily read from across the room. Even if you think you can rely on high-definition television (HDTV) screen resolutions, your users will not be as close to the TV screen as they would be to, say, a computer monitor. This means that text and other navigable content will need to be displayed in a larger size, which will in turn dictate how screens of information are organized.
Keep onscreen navigation simple. People don’t think about their TV like they do a computer, and the navigation mechanisms provided by remotes are limited, so the best approach is one that can be mapped easily to a five-way (up, down, left, right, and center) controller. There may be room to innovate with scroll wheels and other input mechanisms for content navigation, but these will likely need to be compatible with other set-top devices in addition to yours (see the next point), so take care in your design choices. In addition, visual wayfinding techniques such as color-coding screens by functional area and providing visual or textual hints about what navigational and command options are available on each screen (TiVo does a particularly good job of this) are important for ensuring ease of use.
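The five-way mapping described above can be sketched as a small focus-navigation model. This is an illustrative sketch, not anything from the book or from any real set-top platform; the class and method names are assumptions. The point is that every screen should be navigable with nothing more than these five inputs, with focus clamped at the edges so the user can never get lost:

```python
# Minimal sketch (hypothetical names) of mapping a five-way remote
# controller -- up, down, left, right, and center/select -- to focus
# movement through a rows x cols grid of onscreen menu items.

class FiveWayNavigator:
    """Moves a focus highlight through a grid of onscreen items."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.row, self.col = 0, 0  # focus starts at the top-left item

    def press(self, button):
        """Handle one of the five buttons; focus clamps at grid edges."""
        if button == "up":
            self.row = max(0, self.row - 1)
        elif button == "down":
            self.row = min(self.rows - 1, self.row + 1)
        elif button == "left":
            self.col = max(0, self.col - 1)
        elif button == "right":
            self.col = min(self.cols - 1, self.col + 1)
        elif button == "select":
            return ("activate", self.row, self.col)
        return ("focus", self.row, self.col)
```

Because the focus simply stops at the edges rather than wrapping or erroring, a user mashing a direction button always ends up in a predictable place, which is one reason this input scheme is so forgiving for TV audiences.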
Keep control integration in mind. Most people hate the fact that they need multiple remotes to control all the home entertainment devices connected to their TV. By enabling control of commonly used functions on other home entertainment devices besides the one you are designing for (ideally with minimal configuration), you will be meeting a significant user need. This will mean that your product’s remote control or console will need to broadcast commands for other equipment and may need to keep track of some of the operational state of that equipment as well. Logitech’s line of Harmony universal remote controls does both of these things, and the remotes are configured via a Web application when connected to a computer via USB.
Keep remote controls as simple as possible. Many users find complex remote controls daunting, and most functions available from typical home entertainment remotes remain little used. Especially when remote controls take on universal functionality, the tendency is to cram them with buttons: 40, 50, or even 60 buttons on a universal remote is not unusual. One way to mitigate this is to add a display to the remote, which allows controls to appear in context, so that fewer buttons need to be visible at any one time. These controls can be accessible via a touch screen or via soft-labeled physical buttons that lie adjacent to the screen. Each of these approaches has drawbacks: Most touch screens do not provide tactile feedback, so the user is forced to look away from the TV to actuate a touch screen control. Soft-labeled buttons address this problem, but add more buttons back to the surface of the remote. The addition of a display may also tempt you to allow navigation to multiple “pages” of content or controls. While there are instances where this may be warranted, any design choice that divides the user’s attention between two displays (the TV and the remote) runs the risk of creating user confusion and annoyance.
Focus on user goals and activities, not on product functions. Most home entertainment systems require users to understand the topology and states of the system in order to use it effectively: For example, to watch a movie, a user may need to know how to turn the TV on, how to turn the DVD player on, how to switch input on the TV to the one that the DVD player is connected to, how to
turn on surround sound, and how to set the TV to widescreen mode. Doing this may require three separate remote controls, or half a dozen button presses on a function-oriented universal remote. Remote controls like Logitech’s Harmony take a different approach: They organize the control around user activities (such as “watch a movie”), using knowledge the user provides at setup time about what is connected to what to perform the appropriate sequence of device-level commands. While this is much more complex to develop, it is a clear win for the user if implemented well.
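The activity-oriented approach can be illustrated with a short sketch. This is an assumption-laden illustration, not Logitech's actual implementation: the function name, the command strings, and the setup data are all hypothetical. The essential idea is that a single user goal expands into an ordered sequence of device-level commands derived from setup-time knowledge:

```python
# Illustrative sketch (hypothetical names, not a real remote's API):
# an activity-oriented remote expands a user goal such as "watch a
# movie" into an ordered sequence of device-level commands, using
# the connection knowledge the user supplied at setup time.

def plan_activity(activity, setup):
    """Return the device-level command sequence for a named activity."""
    return [f"{device}:{command}" for device, command in setup[activity]]

# Setup-time knowledge supplied by the user (hypothetical example).
SETUP = {
    "watch a movie": [
        ("tv", "power_on"),
        ("dvd", "power_on"),
        ("tv", "input_hdmi1"),   # the input the DVD player is wired to
        ("receiver", "surround_on"),
    ],
}
```

A real implementation would also need to track device state (as noted earlier for Harmony-style remotes), since blindly sending a power toggle to a device that is already on would turn it off.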
Designing for automotive interfaces
Automotive interfaces, especially those that offer sophisticated navigation and entertainment (telematics) functionality, face a particular challenge around driver safety. Complex or confusing interactions that demand too much attention can put everyone on the road at risk, so such systems require significant design effort and usability validation. This can be a challenge, given the spatial limitations of the automobile dashboard, center console, and steering wheel.
Minimize time that hands are off the wheel. Commonly used controls (e.g., play/pause, mute, skip/scan) should be available on the steering wheel for the driver as well as on the center console for passengers.
Enforce consistent layout from screen to screen. If the layout is kept very consistent, the driver will be able to keep his bearings between context shifts.
Use direct control mappings when possible. Controls with labels on them are better than soft-labeled controls. Touch screen buttons with tactile feedback are also preferable to soft-labels with adjacent hard buttons, because again it requires fewer cognitive cycles on the part of the driver operating the system to make the mapping.
Choose input mechanisms carefully. It’s much easier for drivers to select content via knobs than via a set of buttons. There are fewer controls to clutter the interface, knobs protrude and so are easier to reach, and they afford (when properly designed) both coarse and fine control in an elegant and intuitive way.
Keep mode/context switching simple and predictable. With its iDrive system, BMW mapped most of the car’s entertainment, climate control, and navigation into a single control that was a combination of a knob and a joystick. The idea was to make things simple, but by overloading the control so extremely, BMW created a danger for users by requiring them to navigate an interface in order to switch contexts and modes. Modes (e.g., switching from CD to FM, or climate control to navigation) should be directly accessible with a single touch or button press, and the location of these mode buttons should be fixed and consistent across the interface.
Provide audible feedback. Audible confirmations of commands help reduce the need for the driver to take his eyes off the road. However, care needs to be taken to ensure that this feedback is not itself too loud or distracting. For in-car navigation systems, verbal feedback highlighting driving directions can be helpful, as long as the verbal instructions (e.g., turning instructions and street names) are delivered early enough for the driver to react to them properly. Speech input is another possibility, using spoken commands to operate the interface. However, the automobile environment is noisy, and it is not clear that verbalizing a command, especially if it needs to be repeated or corrected, is any less cognitively demanding than pressing a button. While this kind of feature makes for great marketing, we think the jury is still out on whether it makes for a better or safer user experience in the automobile.
Designing for appliances
Most appliances have extremely simple displays and rely heavily on hardware buttons and dials to manipulate the state of the appliance. In some cases, however, major appliances (notably washers and dryers) will sport color LCD touch screens allowing rich output and direct input.
Appliance interfaces, like the phone interfaces mentioned in the previous section, should primarily be considered transient posture interfaces. Users of these interfaces will seldom be technology-savvy and should, therefore, be presented with the most simple and straightforward interfaces possible. These users are also accustomed to hardware controls. Unless an unprecedented ease of use can be achieved with a touch screen, dials and buttons (with appropriate tactile, audible, and visual feedback via a view-only display or even hardware lamps) may be a better choice. Many appliance makers make the mistake of putting dozens of new — and unwanted — features into their new, digital models. Instead of making the appliance easier to use, that “simple” LCD touch screen becomes a confusing array of unworkable controls.
Another reason for a transient stance in appliance interfaces is that users of appliances are trying to get something very specific done. Like users of transactional kiosks, they are not interested in exploring the interface or getting additional information; they simply want to put the washer on normal cycle or cook their frozen dinners.
One aspect of appliance design demands a different posture: Status information indicating what cycle the washer is on or what the VCR is set to record should be presented as a daemonic icon, providing minimal status quietly in a corner. If more than minimal status is required, an auxiliary posture for this information then becomes appropriate.
Designing for audible interfaces
Audible interfaces, such as those found in voice message systems and automated call centers, involve some special challenges. Navigation is the most critical challenge because it is easy to get lost in a tree of functionality with no means of visualizing where one is in the hierarchy, and bad phone tree interactions are a common way to erode an otherwise strong brand identity. (Almost all voice interfaces are based upon a tree, even if the options are hidden behind voice recognition, which introduces a whole other set of problems.)
The following are some simple principles for designing usable audible interfaces:
Organize and name functions according to user mental models. This is important in any design, but doubly important when functions are described only verbally, and only in context of the current function. Be sure to examine context scenarios to determine what the most important functions are, and make them the most easily reachable. This means listing the most common options first.
Always signpost the currently available functions. The system should, after every user action, restate the currently available activities and how to invoke them.
Always provide a way to get back one step and to the top level. The interface should, after every action, tell the user how to go back one step in the function structure (usually up one node in the tree) and how to get to the top level of the function tree.
Always provide a means to speak with a human. If appropriate, the interface should give the user instructions on how to switch to a human assistant after every action, especially if the user seems to be having trouble.
Give the user enough time to respond. Systems usually require verbal or telephone keypad entry of information. Testing should be done to determine an appropriate length of time to wait; keep in mind that phone keypads can be awkward and very slow for entering textual information.
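The navigation principles above (go back one step, return to the top level, signpost the current options) can be sketched as a small tree-walking model. This is an illustrative sketch, not drawn from the book or from any real IVR product; the node names and key bindings (star for back, zero for top) are assumptions:

```python
# Minimal sketch (hypothetical node names and key bindings) of a
# phone-tree that always supports "back one step" and "return to the
# top level", and can restate its options after every action.

TREE = {
    "top": {"1": "billing", "2": "support"},
    "billing": {"1": "balance", "2": "payments"},
    "support": {"1": "outages"},
}

class PhoneTree:
    def __init__(self, tree):
        self.tree = tree
        self.path = ["top"]  # history of visited nodes enables "back"

    def current(self):
        return self.path[-1]

    def press(self, key):
        if key == "*":                       # back one step in the tree
            if len(self.path) > 1:
                self.path.pop()
        elif key == "0":                     # straight to the top level
            self.path = ["top"]
        else:                                # descend, if the option exists
            child = self.tree.get(self.current(), {}).get(key)
            if child is not None:
                self.path.append(child)
        return self.current()

    def prompt(self):
        """Signpost: restate the available options after every action."""
        options = self.tree.get(self.current(), {})
        return ", ".join(f"press {k} for {v}" for k, v in sorted(options.items()))
```

Note that an unrecognized key leaves the user where she is rather than dumping her somewhere unexpected, and `prompt()` would be spoken after every action, which is exactly the signposting the principles above call for.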
In conclusion, it’s important to keep in mind that the top-level patterns of posture and platform should be among the first decisions to be made in the design of an interactive product. In our experience, many poorly designed products suffer from the failure to make these decisions consciously at any point. Rather than diving directly into the details, take a step back and consider what technical platform and behavioral posture will best meet the needs of your users and business, and what the implications of these decisions might be on detailed interactions.
Chapter 10: Orchestration and Flow
If our goal is to make the people who use our products more productive, effective, and engaged, we must ensure that users remain in the right frame of mind. This chapter discusses a kind of mental ergonomics: how we can ensure that our products support user intelligence and effectiveness, and how we can avoid disrupting the state of productive concentration that we want our users to be able to maintain.
Flow and Transparency
When people are able to concentrate wholeheartedly on an activity, they lose awareness of peripheral problems and distractions. This state is called flow, a concept first identified by Mihaly Csikszentmihalyi in Flow: The Psychology of Optimal Experience.
In Peopleware: Productive Projects and Teams, Tom DeMarco and Timothy Lister describe flow as a “condition of deep, nearly meditative involvement.” Flow often induces a “gentle sense of euphoria” and can make you unaware of the passage of time. Most significantly, a person in a state of flow can be extremely productive, especially when engaged in constructive activities such as engineering, design, development, or writing. To state the obvious, then, to make people more productive and happy, it behooves us to design interactive products that promote and enhance flow, and to go to great pains to avoid any potentially flow-disturbing behavior. If
the application consistently rattles a user and disrupts her flow, it becomes difficult for her to maintain that productive state.
In most cases, if a user could achieve his goals magically, without your product, he would. By the same token, if a user needs the product but could achieve his goals without messing about with a user interface, he would. Interacting with a lot of software will never be an entirely aesthetically pleasing experience (with many obvious exceptions, including games, creative tools like music sequencers, and content-delivery systems like Web browsers). For the most part, interacting with software, especially business software, is a pragmatic exercise.