The User Experience Team of One


by Leah Buley


  • What questions do you have about the information and functionality that you’re seeing?

  • Are you satisfied that this is a reasonable number of steps?

  • Is there anything that feels too complicated or cumbersome?

  • Is there any language that doesn’t make sense? Instructions? Labels on buttons? Anything else?

  4. Review and look for themes.

  After all the sticky notes have been written, invite the team to step back and review one another’s sticky notes, looking for themes and issues they might have missed.

  5. Discuss and synthesize.

  Now, engage the group in a discussion about the big themes that emerged. You may find that the issues identified run the gamut from language, flow, and ease of use, all the way to core assumptions about the design concept. That’s great. You’ve just gotten a lot of help to make the design even better. During the group discussion, record the points on a flipchart or whiteboard. That list effectively becomes the synthesis of all the individual sticky notes. Close with a discussion about what’s most successful in the designs, in order to end the session on a positive note. Or close by discussing the top areas that need to be improved and what the next steps for making those improvements are.

  6. Update the designs.

  After the meeting, revisit the designs to address the issues that the group identified. Some may be simple, quick fixes. Some may require more thought and rework to the designs.

  Tips and Tricks for Black Hat Sessions

  • Use them whenever, wherever. Black Hat sessions don’t require a lot of prep and material—just some people, some designs, some Post-it notes, and a little bit of time. That makes a Black Hat session a durable and self-contained pop-up exercise that you can use whenever and wherever it’s helpful. For example, if you’re having a difficult design review meeting where the team seems to be rehashing the same issues again and again, call a timeout and run a Black Hat session. You can even do these by yourself, as a form of quality control.

  • Focus feedback. Black Hat sessions can be general, as described above, but they can also be focused on specific topics. A Black Hat session that focuses on, say, technical feasibility can be a great way to quickly involve engineering and identify designs that may prove problematic from an implementation perspective. Similarly, you can run Black Hat sessions that are focused on content, usability, conversion, pretty much anything.

  • If you work remotely... Giving critical feedback can be uncomfortable. So can receiving it (even if you’re the one requesting it). For that reason, it’s far preferable to do this sort of activity in person. The in-person dynamic makes it easier for you to read non-verbal cues, defuse any awkwardness, and encourage your colleagues to be open and honest. It also makes it easier for you to use things like humor to defang the process. If you can’t do this in person, here are some tips for conducting this kind of review remotely:

  • Set expectations as you would at an in-person session: explain that you need the team to provide critical feedback; set time limits; and provide a mechanism for them to capture and share their thoughts screen by screen and one by one.

  • Make the feedback visible by adding annotations directly within screen sharing software. Or use a tool like Usabilla (http://usabilla.com/) to get your team to go through the product and give feedback asynchronously.

  • Regularly remind them that this kind of feedback can be uncomfortable to give, but that you really need their honesty and candor to do your work well. Give them permission to say hard things.

  METHOD 21

  Quick-and-Dirty Usability Test

  Can people use this product as intended?

  A quick-and-dirty usability test is a natural follow-on to many of the design methods described in Chapter 7, “Design Methods.” The essence of the quick-and-dirty usability test is that you do it quickly—like the name says. With this method, you’ll forgo rigor and perfectionism to make it possible to get rapid feedback on designs. You’ll let go of recruiting and scheduling time with real users and just test the designs with anyone who’s available. Think of it as putting the design in front of the first person you find (who is unfamiliar with the product) and seeing if they can make sense of it.

  Of course, ideally you should test designs with people who truly represent the intended end-user, and if you have the time and team support, you should go that route. But if you’re just trying to get a gut check on whether a design direction works or doesn’t, a fresh pair of eyes can help you see things from a new perspective and settle lingering questions.

  Average Time

  As little as 10 or 15 minutes per person, whenever you need it.

  Use When

  • At any point during the design process when you want to do a quality check on the designs.

  • As often as possible to check your work along the way.

  Try It Out

  1. Find someone, anyone.

  As you’re working on a design, when you want to see if it makes sense to others, print out the design or grab your laptop and wander over to anyone who hasn’t seen it yet. This could be someone who sits at the desk next to yours, someone you encounter walking down the hall or in the cafeteria, or, if you truly work alone, a friend or family member.

  2. Ask them what they’re seeing and how they think it works.

  Think about the purpose of the page, screen, or section of the design that you’re working on. What are the main things people should be able to use it for? With this list of primary tasks in mind, show your design to your volunteer. Ask her how she thinks she could interact with this design to accomplish a particular task. If there are multiple screens or steps that you’re designing, proceed through each screen, asking her to explain what she’s seeing and what she would do to advance to the next step. That may only take 5 minutes, or it might take 20.

  3. Find a few more volunteers.

  Once you’ve shown your design to one person, try to find a few more people to run through the same process. Your colleagues may enjoy getting involved, since it’s a break from their normal routine and shows that you value their perspective.

  4. Iterate the designs.

  If you identify anything that’s especially confusing to people or that they interpreted differently than you had intended, go back and revise the design.

  Tips and Tricks for Quick-and-Dirty Usability Tests

  • Not for expert users. Quick-and-dirty usability tests work best for products that don’t have a highly technical purpose or audience (where the average person has a better chance of being a reasonable stand-in for your actual users). If you do have a very technical product and you’re trying to run a quick-and-dirty usability test, you should try to find someone who is a good stand-in for a typical user. At a minimum, you may need to spend a few minutes up front explaining some concepts and terminology.

  • Be willing to stop and fix things. If you discover after your first few conversations that something in the new design is just not working for people, stop and fix the design before continuing. Ultimately, it’s more productive to test three different designs of progressively improving quality with two people each than to test one bad design with six people.

  • If you work remotely... Enlist the help of family, friends, acquaintances, or people you meet on the street. You can also use online tools like OpenHallway (www.openhallway.com/), Chalkmark (www.optimalworkshop.com/chalkmark.htm), or TryMyUI (www.trymyui.com/) to create test scenarios and record users going through these scenarios remotely. TryMyUI also finds test participants for you.

  METHOD 22

  Five-Second Test

  What impression is created by a specific screen, step, or moment within the product?

  First popularized by Christine Perfetti at User Interface Engineering, a five-second test is a lightning-fast but surprisingly insightful method to quickly assess the information hierarchy in a product. Read more at www.uie.com/articles/five_second_test/. A five-second test helps you see how clear and memorable a given moment in the product or service is to users (see Figure 8.4).

  FIGURE 8.4

  In a five-second test, show a design to a user for five seconds, and then remove it from sight and ask her what she remembers about the design.

  Like a quick-and-dirty usability test, a five-second test can and should be done regularly to check your work as you progress through the design process. You can even combine a quick-and-dirty usability test with a five-second test for a rapid but rich round of validation. In a five-second test, you basically expose the user to a screen or moment in a product, ask her to look at it for five seconds, and then remove the screen from view. Once the screen has been removed, ask her what she remembers seeing, and what she thought the overall purpose of the page or screen was. Considering that people often use products in a distracted, multitasking state, the five-second test is actually a pretty good indicator for how people really experience your products.

  Average Time

  5–10 minutes per screen

  Use When

  • You want to test the information hierarchy of a page, screen, or state.

  • As often as possible to check your work along the way.

  Try It Out

  1. Find a volunteer.

  Find someone to test your designs on. This can be anyone who is handy (as in the quick-and-dirty usability test) or a representative user. Explain that you are going to show your volunteer a screen in a product, but only for five seconds, after which you’ll take it away and ask her some questions about it.

  2. Commence the five-second countdown.

  Show your participant the design that you are testing and silently count off five seconds. You can do this in person by showing her a printout or a design on the screen of your computer, mobile device, or tablet. If you’re doing this remotely, you can do it through screen sharing software, such as WebEx, Skype, or Adobe Connect. (A minimal timer sketch for this step appears after these numbered steps.)

  3. Ask the volunteer what she remembers.

  After five seconds have passed, remove the picture from view. Now ask your research participant what she remembers seeing on the page or screen. Also ask her what she thinks the purpose of the page was, and, if she is unfamiliar with your product, what she thinks the product was.

  4. Did she get it right?

  Did she notice the most important messages or information that you’re trying to convey in that moment? If not, your information hierarchy may be off. Did she correctly interpret the purpose of the product and the screen? If not, the balance of messaging and basic affordances (or, what it looks like you can do with that page) may need more work. Could she correctly identify the type of product this is? If not, you may need to think about navigation, branding, or messaging.

  5. Repeat regularly.

  Repeat as many times as needed to vet key screens or moments in the product.
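  If it helps to run the countdown from your laptop, here is a minimal sketch of a timer in Python that shows a design for exactly five seconds and then removes it from view. This is not a tool from the book; it assumes you have Python with Tkinter installed and have saved the screen you are testing as an image (the filename design.png below is a placeholder).

```python
# Minimal five-second test timer (a sketch, not a tool from the book).
# Assumes Python 3 with Tkinter and a PNG/GIF of the screen under test;
# "design.png" is a placeholder path.
import tkinter as tk

IMAGE_PATH = "design.png"  # placeholder: the screen you are testing
DISPLAY_MS = 5000          # five seconds, in milliseconds

def run_five_second_test():
    root = tk.Tk()
    root.title("Five-second test")
    image = tk.PhotoImage(file=IMAGE_PATH)   # PNG/GIF supported natively
    tk.Label(root, image=image).pack()
    root.after(DISPLAY_MS, root.destroy)     # remove the design from view
    root.mainloop()
    # The design is now hidden; ask what the participant remembers and
    # what she thinks the purpose of the page was.
    print("Time's up. What do you remember seeing? What was the page for?")

if __name__ == "__main__":
    run_five_second_test()
```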

  Tips and Tricks for Five-Second Tests

  • Test a variety of screens. This is a great way to validate important parts of a product, like the start screen or the home page. But people sometimes enter products through the back door, too. They poke around online; they save bookmarks; they follow links in emails. To test that the product design is robust at all levels, consider running five-second tests on random lower-level pages as well.

  • If you work remotely... Online tools for this sort of testing are getting more sophisticated all the time. Check out FiveSecondTest (http://fivesecondtest.com/), Verify (http://verifyapp.com/), or Clue (www.clueapp.com/).

  METHOD 23

  UX Health Check

  A UX health check measures baseline quality of a user experience and assesses changes in quality over time.

  In a UX health check, you regularly assemble a cross-functional team to do an audit on the quality of the product’s user experience. This technique was developed by Livia Labate and Austin Govella while at Comcast. It’s a way to quickly figure out how well the team feels the product is currently measuring up against user experience expectations (see Figure 8.5). This is a quick, rather unscientific method, but it has the benefit of inclusivity; you are establishing and measuring this baseline with the help of your colleagues. If you conduct this process regularly, it enables you to demonstrate and agree collectively on changes in quality over time.

  FIGURE 8.5

  An example of a UX health check spreadsheet.

  Average Time

  1 hour on a recurring basis (could be weekly, monthly, quarterly, and so on)

  Use When

  You want to start tracking the quality of UX over time and don’t have other formal measures in place.

  Try It Out

  1. Designate a team.

  Identify a cross-functional group of people to be the health check team, and set up a recurring meeting: monthly, quarterly, weekly, or whatever cadence makes sense for your product. Ideally, this is the team who is responsible for the product on a day-to-day basis.

  2. Break the product into sections.

  Looking at your overall offering, break it down into sections or areas of focus. This could correspond to the sections of the product from a navigational perspective (for example, registration, account, homepage, etc.). Or, alternatively, this could be layers of the experience (content, brand, interactivity, cross-channel consistency, etc.).

  3. Set competitive benchmarks.

  For each section or area of focus, pick a relevant competitive benchmark to serve as an inspiration. For example, you might want your product suggestions to be as good as Amazon’s, or your cross-channel consistency to be as good as Apple’s, and so on.

  4. Set a target.

  Next, for each of those sections, decide how good your product actually needs to be, compared to its competitive benchmark. You may not be able to make your cross-channel consistency 100% as good as Apple’s, but maybe 50% as good would be pretty great. As a team, assign a target percentage for each section and its benchmark. As you discuss why you’ve chosen the target percentage that you have, note and document your rationale. This is so that you and the team can remember your thought process in the future and explain it if anyone asks.

  5. Measure yourself against the benchmarks.

  Now, for each of these sections, give the product a rating. You might want to be 50% as good as Apple, but after discussion, you decide that you are presently only 25% as good. Discuss how well each section measures up against its competitive benchmark, and give each section a percentage number that reflects where you think you are today. The team may need to have a bit of discussion to arrive at a number that everyone can agree on. That’s good! The discussion is the most valuable part.

  6. Spot the biggest opportunities for improvement.

  Once you’ve agreed on your rankings, identify the biggest gaps between your targets and where you stand today, and then discuss how you’re going to close them. (A small sketch of this gap calculation appears after these numbered steps.)

  7. Repeat regularly.

  As you continue to evolve the product, keep checking back and measuring yourself against your benchmark. Where your product is improving, congratulate yourselves. Where your product is underperforming relative to your baseline, focus on your next round of improvements.
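  To make the bookkeeping concrete, here is a small sketch of the spreadsheet logic behind steps 3 through 6, written in Python. The sections, benchmarks, and percentages are invented examples, not figures from the book; the point is simply that the gap between the target and the current rating tells you where to focus next.

```python
# A sketch of the UX health check bookkeeping (invented example data).
# Each row: (section, competitive benchmark, target %, current rating %).
sections = [
    ("Product suggestions",       "Amazon", 80, 40),
    ("Cross-channel consistency", "Apple",  50, 25),
    ("Registration flow",         "Acme",   70, 60),
]

def biggest_gaps(rows):
    """Order sections by how far they fall short of their targets."""
    return sorted(rows, key=lambda r: r[2] - r[3], reverse=True)

for name, benchmark, target, current in biggest_gaps(sections):
    gap = target - current
    print(f"{name}: {current}% of {benchmark} today, "
          f"target {target}% (gap: {gap} points)")
```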

  Tips and Tricks for a UX Health Check

  • Don’t pretend it’s hard science. This is definitely an unscientific approach. Basically, it takes a subjective measure—who you like and want to be like—and layers on further subjectivity by asking how good everyone thinks you are compared to them. Don’t pretend that this is anything but a thumb-in-the-wind measurement. However, what is effective about this technique is that it gives a cross-functional team a shared and familiar language to talk about product quality. Equally effective, it’s a low overhead activity that you can do on a regular basis. This enables you to develop a long view of your own product and to look backward and forward in time in discussing what you’re improving now, next, and later.

  • Use it to prioritize. One of the subtle benefits of this method is that it helps you easily see where to prioritize your resources and effort. Often, there is a need to improve the user experience throughout the product, but wholesale redesign isn’t realistic. You may want to rebuild from the ground up, but a far more pragmatic approach would have you focus on improving a small set of things at a time. The group-led process of the UX health check makes it clear to everyone (including you) what that handful of priorities should be.

  • If you work remotely... You can do this process via a conference call with screen sharing to ensure that you’re all looking at the same part of the product as you provide your assessment. Keep in mind that the discussion is the most important part of this process, so resist the urge to turn it into an asynchronous activity where everyone just sends in his or her individual scores.

  If You Only Do One Thing...

  Falling in love with your own ideas is an ever-present risk in design. Being proud of your work is great, but you never want it to prevent you from seeing that designs can evolve, simplify, and improve. This chapter offers a variety of methods for assessing how well a design is working—quickly, and with minimal overhead. The spirit of this chapter is simple: always be more curious about what isn’t working than what is.

  So if you only have time to try one thing from this chapter, focus on the “Black Hat Session.” Of all the methods in this chapter, the Black Hat session is the fastest and most blunt instrument for satisfying your curiosity about what isn’t working. Black Hat sessions clear away all the niceties and expose bad or unworkable designs with striking efficiency.

  CHAPTER 9

 
