by Steve Krug


  Is the participant miserable? It’s not unusual for the participant to experience a wide range of feelings while doing the tasks. With apologies to Elisabeth Kübler-Ross:

  Optimism → Puzzlement/Confusion → Frustration/Anger → Resignation/Self-blame

  Making the user miserable is overrated. I actually think you learn less from a miserable user. (As someone has pointed out,7 it’s not a crash test; you don’t have to actually destroy the car to see the problems.)

  You don’t need to stop at the first sign of a struggle, but if there is a struggle, you need to start thinking “Is this worth it? Is it causing the participant too much discomfort?” Always err on the side of the participant’s feelings.

  How much time do you have left, and is it important to get on to some other tasks? Unless this is the last task in the session, you always want to be keeping an eye on the clock.

  Are you still learning something? My rule of thumb is this: when it starts to feel like you’re not likely to learn anything more by continuing, let them continue a little bit longer and then move on. About half the time something useful will happen in this “overtime.”

  If the participant hasn’t finished the task but you’ve decided to move on, just wait for a natural pause and then say something like “That’s great. Very helpful. I want to move us along, since we’ve got more to do.” (Note the use of “us” and “we” to avoid any suggestion that you’re doing this because of some failure on the participant’s part.)

  7 ...who I hope will identify himself, so I can give him credit...

  [ 77 ]

  chapter 8

  Probing

  5 Minutes

  While the participant is doing the tasks, you’ll inevitably notice things that you’d like to know more about.

  “Thanks, that was very helpful. If you’ll excuse me for a minute, I’m just going to see if the people on the team have any follow-up questions they’d like me to ask you.”

  But stopping to ask questions tends to interrupt the user’s flow and train of thought and introduces the risk of your inadvertently giving “clues.”

  That’s why you always want to leave some time at the end to go back and probe. It’s your chance to make sure you understand what happened and to try to figure out—with the participant’s help—why it happened.

  While the participant is doing the tasks, you can always ask for minor clarifications (“Do you mean the ____ over here?”). But for anything deeper—the “Why do you think you did that?” kind of questions—you need to jot down a note to yourself (“Didn’t notice left nav” or “Chose second link. Why?” for example) and save it for the probing section.

  Before you start asking your own questions, call the observation room and ask your Hall Monitor if there’s anything the observers would like you to follow up on. (Feel free to use your own judgment about how to use the time available for probing, though. You don’t have to do everything they ask you to.) Typically, you’ll want to ask the participants things like whether they noticed certain things and why they made particular choices. You can also ask them to try doing a task again another way, or from a different starting point.

  If there are parts of the interface that you’re interested in that they didn’t get to in their travels, you can take them to specific pages (“I’d like you to go to the registration form”) and ask them questions about them.

  You may also want to follow up on any suggestions the participant made about features they think would be useful (“I wish there was a map to choose from instead of an alphabetical list of states”). Occasionally these can turn out to be great ideas,8 but for the most part they’re not. Users aren’t designers, and they don’t always know what they need, or even what they really want. Usually, if you let them talk their idea through, they’ll end up saying, “But I guess I really wouldn’t use it. I’d probably keep doing it the way I do it now.”

  Sometimes, though, users will make brilliant suggestions. How can you tell?

  Don’t worry; you’ll know. If it’s really a bright idea, a light bulb will go off over your head and the heads of everyone in the observation room. People will say things like “Why on earth didn’t we think of that? It’s so obvious.”

  Wrapping Up

  5 Minutes

  Thank them, ask if they have any questions, pay them, and show them to the door. That’s it.

  “Do you have any questions for me, now that we’re done?”

  At the end, I always like to say, “Thanks. That was exactly what we need. It’s been very helpful.”—even if things have gone badly. (Or especially when they’ve gone badly.)

  8 Like the car designed by Homer Simpson with shag carpeting, two bubble domes, and three horns (“…because you can never find a horn when you’re mad”) that all play La Cucaracha, which ends up costing $82,000 to manufacture.


  Prepare For The Next Test

  10 Minutes

  Notice that I’ve suggested that each test session last only 50 minutes, not a full hour. This is like the therapist’s 50-minute hour—appointments are scheduled on the hour, but they last for 50 minutes—and it’s done for the same reason. To get the most out of each session, you need some time between tests to clear your head, gather your thoughts, and perhaps fit in a bio-break.

  Stop the screen recorder!
  Save the recording!
  Clear the browser cache, history, and visited links.
  Open a “neutral” screen in the browser (e.g., Google).
  Take time before the next session to jot down a few notes about things you observed.

  Obviously this means that you only have 50 minutes for testing. If you want to do longer sessions, you’re going to have to get a little funky with your start times. But always try to leave at least 10–15 minutes of down time between sessions. Don’t make the break too long, though, because observers will end up drifting away to take care of “just one thing” and not come back.

  During the break, you should:

  Make a few notes. It will all run together, even with three tests.

  Reset the computer. You want to restore everything to the state it was in before the test. Reload your sample data and clear your browsing history.


  mind reading made easy

  Consider making adjustments. Based on what you’ve seen in the previous session, you may decide to make changes to the test on the fly.

  For instance, if the first participant can’t complete a task and the reason is obvious, you can modify the task—or even skip it—for the remaining participants. You may even want to implement a quick fix to what you’re testing if it’s something you can do by making a simple change to a style sheet or rewording a heading.
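If the test machine is set up with a dedicated throwaway browser profile and a pristine copy of the sample data, the “Reset the computer” step can be scripted. The sketch below is only an illustration of that idea—the directory layout, profile location, and file names are all assumptions about a hypothetical setup, not part of the procedure described here. Stopping and saving the screen recording still has to be done by hand in the recorder itself.

```shell
# Hypothetical between-session reset, sketched as a shell function.
# Assumes the browser was launched with a dedicated profile directory
# (e.g., via something like --user-data-dir) living under "$base",
# and that a pristine copy of the sample data is kept alongside it.

reset_session() {
  base="$1"   # directory holding this test machine's working files

  # Clear browser cache, history, and visited links by discarding
  # the entire throwaway profile and recreating it empty.
  rm -rf "$base/browser-profile"
  mkdir -p "$base/browser-profile"

  # Restore the sample data from the pristine copy.
  rm -rf "$base/sample-data"
  cp -R "$base/sample-data.pristine" "$base/sample-data"
}
```

Wrapping the steps in a function like this keeps the reset down to one command between sessions, which matters when you only have 10 minutes.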

  Freud would be proud of you

  Ever since I started doing usability testing twenty years ago, I’ve been struck by how many of the things a facilitator does with participants are just like the things a therapist does with clients. For instance:

  You’re trying to get them to externalize their thought process.

  You want to hear what they’re thinking so you can understand what’s confusing and troubling them. Your primary job is to keep ’em talking.

  You’re trying not to influence them. Like a therapist, you need to remain neutral. You can’t tell them what to do; they need to figure it out for themselves.

  You say the same few things over and over. Many of the phrases you’ll use are the same ones therapists use.

  You have ethical responsibilities.

  Keep ’em talking

  You’ll find that some participants will think aloud with only an occasional reminder. For the people who tend to forget to verbalize their thoughts, though, you have to decide how often you should prompt them.

  I used to think that it was a function of how long they’d been quiet: if they hadn’t said anything for 20 seconds (or 30, or 40—I was never quite sure what the right number was), then you’d ask what they were thinking. But I finally realized that it’s something else:

  If you’re not entirely sure you know what the user is thinking, ask.


  Most of the time when the user is quiet, you’ll still have a pretty good idea of what they’re thinking. For instance, if it’s obvious that someone is reading something, you should just let them read. If they’re making progress along a path that makes sense to you and they don’t seem at all confused or hesitant, let them keep going. But as soon as you lose the feeling that you’re certain you know what they’re thinking, it’s time to ask.

  And you don’t have to worry about it getting annoying. It turns out you can say “What are you thinking?” dozens of times in a test and participants won’t even be aware of it. And if you get bored saying it, you can mix it up with “What are you looking at?” and “What are you doing?”—both of which have about the same effect.

  Stay neutral

  Like a therapist, one of the hardest parts of your job as facilitator is staying neutral: you don’t want to influence the participants.

  (Cartoon: a facilitator coaching the participant—“Warm. Warm. Warmer! HOT!!!”)

  The worst case is when the facilitator is actively trying to advance a personal agenda, either consciously or unconsciously. For instance, you may want to see the thing you’re testing succeed because you had a hand in designing it, or you may want to see it fail because you’ve thought all along it was a bad idea.

  As facilitator, you have a responsibility to be aware of your biases and scrupulously steer clear of influencing what happens during the testing. If you don’t, people will notice and your testing will lose its credibility.

  But even if you don’t have a personal agenda, you still have to do everything you can to avoid influencing the participant:

  You can’t tell them what to do or give them clues—even subtle ones. When the participant is struggling, you’ll want to help, but you need to resist the temptation.


  You can’t answer their questions. You’ll have to answer most questions with a question, like “What do you think?”

  You shouldn’t express your own opinions (“That’s a great feature”), or even agree with theirs (“Yeah, that is a great feature”).

  You need to try to maintain a poker face, not giving any sign that you’re particularly pleased or displeased with what’s happening. (I think it’s probably best to seem consistently somewhat pleased throughout—conveying the sense that the test is going well and you’re getting what you need.)

  “Things a therapist would say”

  While the participant is doing the tasks, to maintain your neutrality you’re going to be saying the same few things over and over. Here’s a handy chart:

  When this happens: You’re not absolutely sure you know what the participant is thinking.
  Say this: “What are you thinking?” “What are you looking at?” “What are you doing now?”

  When this happens: Something happens that seems to surprise them. For instance, they click on a link and say “Oh” or “Hmmm” when the new page appears.
  Say this: “Is that what you expected to happen?”

  When this happens: The participant is trying to get you to give him a clue. (“Should I use the _______?”)
  Say this: “What would you do if you were at home?” (Wait for answer.) “Then why don’t you go ahead and try that?” “What would you do if I wasn’t here?” “I’d like you to do whatever you’d normally do.”

  When this happens: The participant makes a comment, and you’re not sure what triggered it.
  Say this: “Was there something in particular that made you think that?”

  When this happens: The participant suggests concern that he’s not giving you what you need.
  Say this: “No, this is very helpful.” “This is exactly what we need.”

  When this happens: The participant asks you to explain how something works or is supposed to work (e.g., “Do these support requests get answered overnight?”).
  Say this: “What do you think?” “How do you think it would work?” “I can’t answer that right now, because we need to know what you would do when you don’t have somebody around to answer questions for you. But if you still want to know when we’re done, I’ll be glad to answer it then.”

  When this happens: The participant seems to have wandered away from the task.
  Say this: “What are you trying to do now?”

  There are also three other kinds of things you can say:

  Acknowledgment tokens. You can say things like “uh huh,” “OK,” and “mm hmm” as often as you think necessary. These signal that you’re taking in what the participant is saying and you’d like them to continue along the same lines. Note that they’re meant to indicate that you understand what the participant is saying, not that you necessarily agree with it. It’s “OK.” Not “OK!!!”

  Paraphrasing. Sometimes it helps to give a little summary of what the participant just said (“So you’re saying that the boxes on the bottom are hard to read?”) to make sure that you’ve heard and understood correctly.

  Clarifying for observers. If the user makes a vague reference to something on the screen, you may want to do a little bit of narration to make it easier for the observers to follow the action. For instance, when the user says “I love this,” you can say, “The list over here on the right?” (Since you’re sitting next to the participant, you sometimes have a better sense of what they’re looking at.)

  Ethical considerations

  There’s one final thing you have in common with a therapist: you have an ethical responsibility to your participants. Like anything to do with ethics, this responsibility can be complicated, but I like to think it boils down to this: Participants should leave the room in no worse shape than they entered.


  For the most part, usability testing tends to be very benign. You’re not attaching electrodes to anyone, and unless you’re a closet sociopath, I don’t think you’re likely to cause anyone serious emotional damage. I assume you’re going to treat them with respect, empathy, and consideration of their feelings, even if they turn out to be a pain in the neck. (Perhaps especially if they turn out to be a pain in the neck.) In other words, you’re going to behave like a decent human being.

  The participant always has the right to stop the test and leave at any time without penalty. (You still pay them.) You should work to make the test as comfortable, unintimidating, and stress-free as possible, keep a close eye on the participant’s comfort level, and be very gracious and agreeable if they do want to stop. In some rare cases, you’ll ask them if they’d like to stop.

  You also have a responsibility to protect the participants’ privacy. One of the best ways to do this is to avoid using identifying information. There’s no need to use their last names in the tests or recordings, and you’re not going to record their faces.

  You need to keep the recordings under your personal control and erase them as soon as they’re no longer needed. If you’re going to distribute clips within your organization, each one should begin with a scary-sounding FBI-style warning not to redistribute it, and you should redact any personal information like telephone or credit card numbers. (It’s fairly easy to cover things up with the editing features of Camtasia.) And if someone makes a particularly indiscreet (or even incriminating) statement, you should delete that portion of the recording.9 I would also never distribute clips of employees who were participants because it may put them in an awkward position.

  If you’re in an academic setting, you may be required to get approval of your entire test plan (including the script and an informed consent agreement) from your Institutional Review Board (IRB) to ensure that it meets your institution’s ethical standards. But you can probably make a very good case that informal usability tests like this are not the kind of study that your IRB has to oversee. (People have managed to get this kind of exemption in the past.)

  9 Carolyn Snyder has talked about doing this when a participant mentioned smoking pot, for instance. I was once testing a site with some college students (at a Catholic university, no less) and asked casually what kinds of sites a participant used. “Well, there’s porn…” he began. I left this clip out of my presentation.


  Tough customers

  Most participants turn out to be pleasant and productive. And then there are the less-than-perfect participants…10

  You may get a slow talker, a no-talker, a low-talker, a fast talker, a nonstop talker, a know-it-all, or even (fortunately, very rarely) the occasional wacko.

  Keeping some participants on task can feel like herding the proverbial kittens.

  Sometimes people will leave the site you’re testing. Sometimes they’ll get distracted by some bright, shiny object on a page or decide to tell you a story.

  Some people will want to talk about the economy.

  As with kittens, you need to be polite but firm and keep them moving. For instance, “Good. [creating a pause, and suggesting that things are actually going well] OK. [suggesting a transition is occurring] We’ve got a lot to cover, so I’m going to ask you to….”

 
