
Alan Cooper, Robert Reimann, David Cronin - About Face 3: The Essentials of Interaction Design

Page 27


  While there may be a small amount of truth in this, the fact of the matter is that designing a good user experience for any platform requires careful consideration and hard work. Delivering the product through a browser certainly doesn’t get you this for free.

  Postures for Web applications

  Web applications, much like desktop applications, can have sovereign or transient posture, but since we use the term to refer to products with complex and sophisticated functionality, by definition they tend towards sovereign posture.

  Sovereign posture Web applications strive to deliver information and functionality in a manner that best supports more complex human activities. Often this requires a rich and interactive user interface. A good example of such a Web application is Flickr, an online photo-sharing service that provides for things like drag-and-drop image sorting and in-place editing for text labels and annotation (see Figure 9-6). Other examples of sovereign posture Web applications include a multitude of enterprise software delivered through a browser.

  Unlike page-oriented informational and transactional Web sites, the design of sovereign Web applications is best approached in the same manner as desktop applications. Designers also need a clear understanding of the technical limitations of the medium and what can reasonably be accomplished on time and budget by the development organization. Like sovereign desktop applications, most sovereign Web applications should be full-screen applications, densely populated with controls and data objects, and they should make use of specialized panes or other screen regions to group related functions and objects. Users should have the feeling that they are in an environment, not that they are navigating from page to page or place to place. Redrawing and re-rendering of information should be minimized (as opposed to the behavior on Web sites, where almost any action requires a full redraw).


  Part II: Designing Behavior and Form

  Figure 9-6 Flickr’s Organize tool allows users to create sets of photos and change their attributes in a batch, in one place, without navigating through countless Web pages to do so.

  The benefit of treating sovereign Web applications as desktop applications rather than as collections of Web pages is that it allows designers to break out of the constraints of page-oriented models of browser interaction to address the complex behaviors that these client-server applications require. Web sites are effective places to get information you need, just as elevators are effective places to get to a particular floor in a building. But you don’t try to do actual work in elevators; similarly, users are not served by being forced to attempt to do real, interaction-rich transactional work using page-based Web sites accessed through a browser.

  One advantage to delivering enterprise functionality through a browser-based user interface is that, if done correctly, it can provide users with better access to occasionally used information and functionality without requiring them to install every tool they may need on their computers. Whether it is a routine task that is only performed once a month or the occasional generation of an ad hoc report, transient posture Web applications aim to accomplish just this.

  When designing transient posture Web applications, as with all transient applications, it is critical to provide for clear orientation and navigation. Also keep in mind that one user’s transient application may be another user’s sovereign application.

Think hard about how compatible the two users’ sets of needs are — it is commonly the case that an enterprise Web application serves a wide range of personas and requires multiple user interfaces accessing the same set of information.

  Internet-enabled applications

  Two of the most exciting benefits to emerge from the continued evolution of the Internet are the instant access to immense amounts of information and the ease of collaboration. Both of these benefits depend on the World Wide Web, but that does not mean your product must be delivered through a Web browser to capitalize on them.

  Another excellent approach is to abandon the browser entirely and, instead, create a non-browser-based, Internet-enabled application. By building an application using a standard desktop platform such as .NET or Java/Swing so that it communicates with standard Internet protocols, you can provide rich, clean, sophisticated interactions without losing the ability to access data on the Web. The recent development of data interfaces like RSS and Web application programming interfaces (APIs) allows products to deliver the same information and content from the Web that a browser could, but presented with the far superior user experience that only a native application can deliver.
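As a minimal sketch of this idea, a non-browser application can consume the same content a browser would by fetching and parsing an RSS feed over standard Internet protocols. The feed below is made-up sample data, and Python is used purely for illustration (the book discusses .NET and Java/Swing platforms):

```python
# Sketch: a native (non-browser) application consuming Web content via RSS 2.0.
# A real application would retrieve the feed over HTTP (e.g. with
# urllib.request); here the XML is embedded as hypothetical sample data.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>First story</title><link>http://example.com/1</link></item>
    <item><title>Second story</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (channel_title, [(item_title, item_link), ...]) from RSS 2.0 text."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = parse_feed(SAMPLE_FEED)
print(title)                      # prints "Example News"
for item_title, link in items:
    print(item_title, link)
```

Once parsed, the application is free to present the items with whatever rich, native interface idioms suit its users, rather than being confined to page-oriented browser rendering.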

  A good example of this is Apple iTunes, which allows users to shop for and download music and video, retrieve CD information, and share music over the Internet, all through a user interface that’s been optimized for these activities in a way that would be next to impossible in a Web browser.

  Another example where this is a useful approach is with PACSs (picture archiving and communication systems) used by radiologists to review patient images like MRIs (magnetic resonance imaging). These systems allow radiologists to quickly navigate through hundreds of images, zoom in on specific anatomy, and adjust the images to more clearly identify different types of tissue. Clearly these are not interactions well suited to a Web browser. However, it is very useful for radiologists to be able to review imagery from remote locations. For example, a radiologist at a big research hospital may provide a consultation for a rural hospital that doesn’t have the expertise to diagnose certain conditions. To facilitate this, many PACSs use Internet protocols to enable remote viewing and collaboration.

  Intranets

Intranets (and their cousins, the extranets) are usually hybrids of a Web site and Web application. An intranet is a private version of the Web that is only accessible to employees of a company (and its partners, clients, or vendors in the case of an extranet), typically including both a significant number of informational pages about the company, its departments, and their activities, as well as components of richer functionality ranging from timesheet entry and travel arrangements to procurement and budgeting. Designing for the informational portion requires information architecture to create a strong organizational structure, whereas designing for the application portion requires interaction design to define critical behaviors.

  Other Platforms

  Unlike software running on a computer, which has the luxury of being fairly immersive if need be, interaction design for mobile and public contexts requires special attention to creating an experience that coexists with the noise and activity of the real world happening all around the product. Handheld devices, kiosks, and other embedded systems, such as TVs, microwave ovens, automobile dashboards, cameras, bank machines, and laboratory equipment, are unique platforms with their own opportunities and limitations. Without careful consideration, adding digital smarts to devices and appliances runs the risk that they will behave more like desktop computers than like the products that your users expect and desire.

  General design principles

  Embedded systems (physical devices with integrated software systems) involve some unique challenges that differentiate them from desktop systems, despite the fact that they may include typical software interactions. When designing any embedded system, whether it is a smart appliance, kiosk system, or handheld device, keep these basic principles in mind:

  Don’t think of your product as a computer.

  Integrate your hardware and software design.

  Let context drive the design.

  Use modes judiciously, if at all.

  Limit the scope.

  Balance navigation with display density.

Customize for your platform.

  We discuss each of these principles in more detail in the following sections.

Chapter 9: Platform and Posture

  Don’t think of your product as a computer

Perhaps the most critical principle to follow while designing an embedded system is that what you are designing is not a computer, even though its interface might be dominated by a computer-like bitmap display. Your users will approach your product with very specific expectations of what the product can do (if it is an appliance or familiar handheld device) or with very few expectations (if you are designing a public kiosk). The last thing that you want to do is bring all the baggage — the idioms and terminology — of the desktop computer world with you to a “simple” device like a camera or microwave oven. Similarly, users of scientific and other technical equipment expect quick and direct access to data and controls within their domain, without having to wade through a computer operating system or file system to find what they need.

  Programmers, especially those who have designed for desktop platforms, can easily forget that even though they are designing software, they are not always designing it for computers in the usual sense: devices with large color screens, lots of power and memory, full-size keyboards, and mouse pointing devices. Few, if any, of these assumptions are valid for most embedded devices. And most importantly, these products are used in much different contexts than desktop computers.

  Idioms that have become accepted on a PC are completely inappropriate on an embedded device. “Cancel” is not an appropriate label for a button to turn off an oven, and requiring people to enter a “settings” mode to change the temperature on a thermostat is preposterous. Much better than trying to squeeze a computer interface into the form factor of a small-screen device is to see it for what it is and to then figure out how digital technology can be applied to enhance the experience for its users.

  Integrate your hardware and software design

  From an interaction standpoint, one defining characteristic of embedded systems is the often closely intertwined relationship of hardware and software components of the interface. Unlike desktop computers, where the focus of user attention is on a large, high-resolution, color screen, most embedded systems offer hardware controls that command greater user attention and that must integrate smoothly with user tasks. Due to cost, power, and form factor constraints, hardware-based navigation and input controls must often take the place of onscreen equivalents. Therefore, they need to be specifically tailored to the requirements of the software portion of the interface as well as to the goals and ergonomic needs of the user.


  It is therefore critical to design the hardware and software elements of the system’s interface — and the interactions between them — simultaneously, and from a goal-directed, ergonomic, and aesthetic perspective. Many of the best, most innovative digital devices available today, such as the TiVo and iPod, were designed from such a holistic perspective, where hardware and software combine seamlessly to create a compelling and effective experience for users (see Figure 9-7). This seldom occurs in the standard development process, where hardware engineering teams regularly hand off completed mechanical and industrial designs to the software teams, who must then accommodate them, regardless of what is best from the user’s perspective.

  Figure 9-7 A Cooper design for a smart desktop phone, exhibiting strong integration of hardware and software controls. Users can easily adjust volume/speakerphone, dial new numbers, control playback of voicemail messages with hardware controls, and manage known contacts/numbers, incoming calls, call logs, voicemail, and conferencing features using the touch screen and thumbwheel. Rather than attempt to load too much functionality into the system, the design focuses on making the most frequent and important phone features much easier to use. Note the finger-sized regions devoted to touchable areas on the screen and use of text hints to reinforce the interactions.


  Let context drive the design

  Another distinct difference between embedded systems and desktop applications is the importance of environmental context. Although there can sometimes be contextual concerns with desktop applications, designers can generally assume that most software running on the desktop will be used on a computer that is stationary and located in a relatively quiet and private location. Although this is becoming less true as laptops gain both the power of desktop systems and wireless capabilities, it remains the case that users will, by necessity of the form factor, be stationary and out of the hubbub even when using laptops.

Exactly the opposite is true for many embedded systems, which are either designed for on-the-go use (handhelds) or are stationary but in a location at the center of public activity (kiosks). Even embedded systems that are mostly stationary and secluded (like household appliances) have a strong contextual element: A host juggling plates of hot food for a dinner party is going to be distracted, not in a state of mind to navigate a cumbersome set of controls for a smart oven. Navigation systems built into a car’s dashboard cannot safely use “soft-keys” that change their meaning in different contexts because the driver is forced to take her eyes off the road to read each function label. Similarly, a technician on a manufacturing floor should not be required to focus on difficult-to-decipher equipment controls — that kind of distraction could be life-threatening in some circumstances.

Thus the design of embedded systems must match very closely the context of use. For handhelds, this context concerns how and where the device is physically handled. How is it held? Is it a one-handed or two-handed device? Where is it kept when not in immediate use? What other activities are users engaged in while using the device? In what environments is it being used? Is it loud, bright, or dark there? How does the user feel about being seen and heard using the device if he is in public? We’ll discuss some of these issues in detail a bit later.

  For kiosks, the contextual concerns focus more on the environment in which the kiosk is being placed and also on social concerns: What role does the kiosk play in the environment? Is the kiosk in the main flow of public traffic? Does it provide ancillary information, or is it the main attraction itself? Does the architecture of the environment guide people to the kiosks when appropriate? How many people are likely to use the kiosk at a time? Are there sufficient numbers of kiosks to satisfy demand without a long wait? Is there sufficient room for the kiosk and kiosk traffic without impeding other user traffic? We touch on these and other questions shortly.


  Use modes judiciously, if at all

  Desktop computer applications are often rich in modes: The software can be in many different states in which input and other controls are mapped to different behaviors. Tool palettes (such as those in Photoshop) are a good example: Choose a tool, and mouse and keyboard actions will be mapped to a set of functions defined by that particular tool; choose a new tool, and the behavior resulting from similar input changes.

Unfortunately, users are easily confounded by modal behavior that is less than clearly obvious. Because devices typically have smaller displays and limited input mechanisms, it is very difficult to convey what mode the product is in, and changing modes often requires significant navigational work. Take, for example, mobile telephones. They often require navigation of seemingly countless modes organized into hierarchical menus. Most cell phone users only use the dialing and address book functionality and quickly get lost if they try to access other functions. Even an important function such as silencing the ringer is often beyond the expertise of average phone users.

  When designing for embedded systems, it’s important to limit the number of modes, and mode switches should ideally result naturally from situational changes in context. For example, it makes sense for a PDA/phone to shift into telephone mode when an incoming call is received and to shift back to its previous mode when the call is terminated. (Permitting a call while other data is being accessed is a preferable alternative.) If modes are truly necessary, they should be clearly accessible in the interface, and the exit path should also be immediately clear. The four hardware application buttons on most Palm OS handhelds are a good example of clearly marked modes (see Figure 9-8).
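The PDA/phone behavior described above can be sketched as a tiny state machine in which mode switches result from situational changes rather than explicit navigation. The mode names and methods here are illustrative assumptions, not an actual device API:

```python
# Sketch of context-driven mode switching for a PDA/phone hybrid:
# an incoming call shifts the device into phone mode automatically,
# and ending the call restores whatever mode the user was in before.
class Device:
    def __init__(self):
        self.mode = "calendar"       # whatever the user was doing (illustrative)
        self._previous_mode = None

    def incoming_call(self):
        # Situational change of context triggers the mode switch;
        # the user does not have to navigate anywhere.
        self._previous_mode = self.mode
        self.mode = "phone"

    def end_call(self):
        # Restore the user's prior context instead of stranding them
        # in phone mode.
        self.mode = self._previous_mode or "home"
        self._previous_mode = None

device = Device()
device.incoming_call()
print(device.mode)   # prints "phone"
device.end_call()
print(device.mode)   # prints "calendar"
```

The key property is that the user never performs the mode switch manually; the device's context does it, and the exit path back to the prior state is automatic.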

  Limit the scope

Most embedded systems are used in specific contexts and for specific purposes. Avoid the temptation to turn these systems into general-purpose computers. Users will be better served by devices that enable them to do a limited set of tasks more effectively than by devices that attempt to address too many disparate tasks in one place. Devices such as Microsoft Windows Mobile handhelds, which of late have attempted to emulate full desktop systems, run the risk of alienating users with cumbersome interfaces saturated with functions whose only reason for inclusion is that they currently exist on desktop systems. While many of us are reliant on our

 
