Lean UX


by Jeff Gothelf and Josh Seiden


  Design Studio with remote teams

  To set up for the next step, a Design Studio session, we tried to mimic a colocated version of the activity as much as possible. We provided paper and pens at each location. We created a dual-monitor setup in each conference room so that each room could see the sketches on one monitor while still seeing their teammates via Skype on the second, as shown in Figure 4-14. We asked each team to use a phone to photograph their sketches and email them to everyone else. This kept the artifacts connected to the conversation.

  Figure 4-14. Dual monitor setup during remote Design Studio

  After that initial setup, we were able to proceed with the Design Studio process as normal. Team members were able to present their ideas to both rooms and to receive trans-continental critique. The two teams were able to refine their ideas together and were eventually able to converge on one idea to take forward.

  Making Collaboration Work

  Not every team will find that collaboration comes easily. Most of us begin our careers by developing our individual technical skills as designers, developers, and so on. And in many organizations, collaboration across disciplines is rare. So it’s no wonder that it can feel challenging.

  One of the most powerful tools for improving collaboration is the Agile technique of the retrospective and the related practice of creating Team Working Agreements. Retrospectives are regularly scheduled meetings, usually held at the end of every sprint, in which the team takes an honest look back at the past sprint. They examine what went well, what went poorly, and what the team wants to improve. Usually, the team will select a few things to work on for the next sprint. We can think of no more powerful tool for improving collaboration than the regular practice of effective retrospectives.

  A Team Working Agreement is a document that serves as a partner to the retrospective. It keeps track of how the team has chosen to work together. It’s a self-created, continuously updated rule book that the team agrees to follow. At each retrospective, the team should check in with their Working Agreement to see if they’re still following it and if they need to update it to include new agreements or remove old ones that no longer make sense.

  Here’s an outline for what you should consider covering in your Team Working Agreements (we’ve made a copy of our favorite template available online at http://leanuxbook.com/links):

  Process overview

  What kind of process are we using? Agile? If so, what flavor? How long are our iterations?

  Ceremonies

  What rituals will the team observe? For example, when is stand-up each day? When do we hold planning meetings and demos?

  Communication/Tools

  What systems will we use to communicate and document our work? What is our project management tool? Where do we keep our assets?

  Working hours

  Who works where? When are folks in the office? If we’re in different locations, what accommodations will we make for time-zone differences?

  Requirements and design

  How do we handle requirements definition, story writing, and prioritization? When is a story ready for design? When is a design ready to be broken into stories?

  Development

  What practices have we settled on? Do we use pair programming? What testing style will we use? What methods will we use for source control?

  Work-in-progress limits

  What is our backlog and icebox size? What WIP limits exist in various stages of our process?

  Deployment

  What is our release cadence? How do we do story acceptance?

  Finally, capture any additional agreements the team wants to make.

  Wrapping Up

  Collaborative design (Figure 4-15) is an evolution of the UX design process. In this chapter, we discussed how opening up the design process brings the entire team deeper into the project. We talked about how the low-fidelity drawings created in Design Studio sessions can help teams generate many ideas and then converge on a set that the entire team can get behind. We showed you practical techniques you can use to create shared understanding—the fundamental currency of Lean UX. Using tools like design systems, style guides, collaborative design sessions, Design Studio, and simple conversation, your team can build a shared understanding that allows them to move forward at a much faster pace than in traditional environments.

  Figure 4-15. A team using collaborative design techniques

  Now that we have all of our assumptions declared and our design hypotheses created, we can begin the learning process. In the next chapter, we cover the Minimum Viable Product and how to use it to plan experiments. We use those experiments to test the validity of our assumptions and decide how to move forward with our project.

  1 In the years since we published the first edition of this book, the Design Studio method has become increasingly popular. There are now two comprehensive guides to the method. If you want to go deeper than our coverage, see Design Sprint by Banfield, Lombardo, and Wax and Sprint by Knapp, Zeratsky, and Kowitz.

  Chapter 5. Minimum Viable Products and Prototypes

  All life is an experiment. The more experiments you make, the better.

  —Ralph Waldo Emerson

  With the parts of your hypothesis now defined, you’re ready to determine which product ideas are valid and which ones you should discard. In this chapter, we discuss the Minimum Viable Product (MVP) and its relationship to Lean UX.

  Figure 5-1. Lean UX process

  Lean UX makes heavy use of the notion of MVP. MVPs help us test our assumptions—will this tactic achieve the desired outcome?—while minimizing the work we put into unproven ideas. The sooner we can find which features are worth investing in, the sooner we can focus our limited resources on the best solutions to our business problems. This is an important part of how Lean UX minimizes waste.

  In addition, we cover the following:

  What is an MVP anyway? We’ll resolve the confusion about what the phrase means.

  Creating an MVP. We’ll share a set of guidelines for creating MVPs.

  Examples of MVPs. We’ll share some inspiration and models that you can use in different situations.

  Creating prototypes. We’ll talk about how to create prototypes for Lean UX and what you’ll need to consider when selecting a prototyping approach.

  What Is an MVP Anyway?

  If you ask a room full of technology professionals the question, “What is an MVP?” you’re likely to hear a lengthy and diverse list that includes such gems as the ones that follow:

  “It’s the fastest thing we can get out the door that still works.”

  “It’s whatever the client says it is.”

  “It’s the minimum set of features that allow us to say ‘it works.’”

  “It’s Phase 1.” (and we all know about the likelihood of Phase 2)

  The phrase MVP has caused a lot of confusion in its short life. The problem is that it gets used in two different ways. Sometimes, teams create an MVP primarily to learn something. They’re not concerned with delivering value to the market—they’re just trying to figure out what the market wants. In other cases, teams create a small version of a product or a feature because they want to start delivering value to the market as quickly as possible. In this second case, if you design and deploy the MVP correctly, you should also be able to learn from it, even if that’s not the primary focus.

  Example: Should We Launch a Newsletter?

  Let’s take, for example, a medium-sized company we consulted with recently. They were exploring new marketing tactics and wanted to launch a monthly newsletter. Newsletter creation is no small task. You need to prepare a content strategy, editorial calendar, layout and design, as well as an ongoing marketing and distribution strategy. You need writers and editors to work on it. All in all, it was a big expenditure for the company to undertake. The team decided to treat this newsletter idea as a hypothesis.

  The team asked themselves: What’s the most important thing we need to know first? The answer: Was there enough customer demand for a newsletter to justify the effort? The MVP the company used to test the idea was a sign-up form on their current website. The sign-up form promoted the newsletter and asked for a customer’s email address. This approach wouldn’t deliver any value to the customer—yet. Instead, the goal was to measure demand and build insight into what value proposition and language drove sign-ups. The team felt that these tests would give them enough information to make a good decision about whether to proceed.

  The team spent half a day designing and coding the form and was able to launch it that same afternoon. The team knew that their site received a significant amount of traffic each day: they would be able to learn very quickly if there was interest in the newsletter.
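
  To make this concrete, here’s a minimal sketch of what such a sign-up form MVP could look like (the headline copy, element IDs, and /api/newsletter-signup endpoint are placeholders we’ve invented for illustration, not the team’s actual code). The sketch randomly assigns each visitor one of two value propositions so that sign-up rates per wording can be compared:

    // Sketch of a newsletter sign-up MVP. All names and copy are illustrative.
    const headlines = [
      "One practical marketing tactic in your inbox, once a month.",
      "Our monthly newsletter: industry news, curated for you.",
    ];

    // Assign each visitor a value-proposition variant at random so
    // sign-up rates per variant can be compared later.
    const variant = Math.floor(Math.random() * headlines.length);
    document.querySelector<HTMLHeadingElement>("#promo")!.textContent = headlines[variant];

    document.querySelector<HTMLFormElement>("#signup")!.addEventListener("submit", async (event) => {
      event.preventDefault();
      const email = document.querySelector<HTMLInputElement>("#email")!.value;
      // Record the email and which wording drove the sign-up.
      await fetch("/api/newsletter-signup", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email, variant }),
      });
      document.querySelector("#signup")!.replaceWith("Thanks! We'll let you know when the first issue ships.");
    });

  Demand is then simply sign-ups divided by visitors, tallied per variant.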

  At this point, the team made no effort to design or build the actual newsletter. After the team had gathered enough data from their first experiment, and if the data showed that its customers wanted the newsletter, the team would move on to their next MVP, one that would begin to deliver value and create deeper learning around the type of content, presentation format, frequency, social distribution, and the other things they would need to learn to create a good newsletter. The team planned to continue experimenting with MVP versions of the newsletter—each one improving on its predecessor—that would provide more and different types of content and design, and ultimately deliver the business benefit they were seeking.

  Creating an MVP

  When it comes to creating an MVP, the first question to ask is always this: What’s the most important thing we need to learn next? In most cases, the answer will be either a question of value or a question of implementation.

  Your prioritized list of hypotheses has given you several paths to explore. As a team, discuss the first hypothesis in your list with the following framework:

  What’s the most important thing we need to learn first (or next) about this hypothesis? In other words, what’s the biggest risk currently associated with this approach?

  What’s the least amount of work we can do to learn that? This isn’t lazy: it’s Lean. There’s no reason to do any more work than you need to in order to determine your next step.

  The answer to the second question is your MVP. You will use your MVP to run experiments and the outcome of those experiments will inform you as to whether your hypothesis was correct. These experiments should provide you with enough evidence to decide whether the direction you are exploring should be pursued, refined, or abandoned.

  Creating an MVP to Understand Value

  Here are some guidelines to follow if you’re trying to understand the value of your idea:

  Get to the point

  Regardless of the MVP method you choose to use, focus your time on distilling your idea to its core value proposition and present that to your customers. The things that surround your idea (things like navigation, logins, and password retrieval flows) will be irrelevant if your idea itself has no value to your target audience. Leave that stuff for later.

  Use a clear call to action

  You will know people value your solution when they demonstrate intent to use it or, gasp, pay for it. Giving people a way to opt in to or sign up for a service is a great way to know if they’re interested and whether they’d actually give you money for it.

  Prioritize ruthlessly

  Ideas, like artifacts, are transient. Let the best ones prove themselves, and don’t hold onto invalidated ideas just because you like them. As designers ourselves, we know that this one is particularly difficult to practice. Designers tend to be optimists, and we often believe our solutions, whether we worked on them for five minutes or five months, are well crafted and properly thought out. Remember: if the results of your experiment disagree with your design, the design is wrong.

  Stay agile

  Learnings will come in quickly; make sure you’re working in a medium or tool that allows you to make updates easily.

  Don’t reinvent the wheel

  Lots of the mechanisms and systems that you need to test your ideas already exist. Consider how you could use email, SMS, chat apps, Facebook Groups, eBay storefronts, discussion forums, and other existing tools to get the learning you’re seeking.

  Measure behavior

  Build MVPs with which you can observe and measure what people do. This lets you bypass what people say they (will) do in favor of what they actually do. In digital product design, behavior trumps opinion. (See the instrumentation sketch after this list.)

  Talk to your users

  Measuring behavior tells you “what” people did with your MVP. Without knowing “why” they behaved that way, iterating your MVP is an act of random design. Try to capture conversations from both those who converted as well as those who didn’t.
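
  To ground these last two guidelines, here’s a minimal instrumentation sketch (the event names and /api/events endpoint are assumptions for illustration). It records what visitors actually do, and the resulting log also tells you which sessions converted and which didn’t, so you can recruit interviewees from both groups:

    // Sketch of behavior-first MVP instrumentation. Event names and the
    // /api/events endpoint are illustrative assumptions.
    type MvpEvent = {
      sessionId: string;
      action: "viewed" | "clicked_cta" | "signed_up" | "abandoned";
      at: string; // ISO-8601 timestamp
    };

    function track(sessionId: string, action: MvpEvent["action"]): void {
      const event: MvpEvent = { sessionId, action, at: new Date().toISOString() };
      // sendBeacon keeps working while the page unloads, so even
      // abandonments get captured, not just happy-path clicks.
      navigator.sendBeacon("/api/events", JSON.stringify(event));
    }

    // The log captures the "what"; interviews with both converted and
    // non-converted sessions supply the "why."
    track("sess-123", "viewed");
    track("sess-123", "clicked_cta");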

  Creating an MVP to Understand Implementation

  Here are some guidelines to follow if you’re trying to understand the implementation you’re considering launching to your customers:

  Be functional

  Some level of integration with the rest of your application must be in place to create a realistic usage scenario. Creating your new workflow in the context of the existing functionality is important here.

  Integrate with existing analytics

  Measuring the performance of your MVP must be done within the context of existing product workflows. This will help you to understand the numbers you’re seeing.

  Be consistent with the rest of the application

  To minimize any biases toward the new functionality, design your MVP to fit with your current look, feel, and brand. (This is where your Design System provides a ton of efficiency.)

  Some Final Guidelines for Creating MVPs

  MVPs might seem simple, but in practice they can prove challenging. Like most skills, the more you practice, the better you’ll get. In the meantime, here are some guidelines for building valuable MVPs:

  It’s not easy to be pure

  You’ll find that it’s not always possible to test only one thing at a time: you’re often trying to learn whether your idea has value and determine implementation details at the same time. Although it’s better to separate these processes, keeping the aforementioned guidelines in mind as you plan your MVPs will help you to navigate the trade-offs and compromises you’re going to have to make.

  Be clear about your learning goals

  Make sure that you know what you’re trying to learn, and be clear about what data you need to collect in order to learn it. It’s a bad feeling to launch an experiment only to discover that you haven’t instrumented it correctly and are failing to capture some important data.

  Go small

  Regardless of your desired outcome, build the smallest MVP possible. Remember that it is a tool for learning. You will be iterating. You will be modifying it. You might very well be throwing it away entirely.

  You don’t necessarily need code

  In many cases, your MVP won’t involve any code at all. Instead you will rely on many of the UX designer’s existing tools: sketching, prototyping, copywriting, and visual design.

  The Truth Curve

  The amount of effort you put into your MVP should be proportional to the amount of evidence you have that your idea is a good one. That’s the point of the chart (Figure 5-2) created by Giff Constable. The X axis shows the level of investment you should put into your MVP. The Y axis shows the amount of market-based evidence you have about your idea. The more evidence you have, the higher the fidelity and complexity of your MVP can be. (You’ll need the extra effort, because what you need to learn becomes more complex.) The less evidence you have, the less effort you want to put into your MVP. Remember the key question: What’s the smallest thing that you can do to learn the next most important thing? Anything more than that is waste.

  Figure 5-2. The Truth Curve is a useful reminder that learning is continuous and increased investment is only warranted when the facts dictate it

  Examples of MVPs

  Let’s take a look at a few different types of MVPs that are in common use:

  Landing page test

  This type of MVP helps a team determine demand for their product. It involves creating a marketing page with a clear value proposition, a call to action, and a way to measure conversion. Teams must drive relevant traffic to this landing page to get a large enough sample size for the results to be useful. They can do this either by diverting traffic from existing workflows or utilizing online advertising.

  Positive results from landing page tests are clear, but negative results can be difficult to interpret. If no one “converts,” it doesn’t necessarily mean your idea has no value. It could just mean that you’re not telling a compelling story. The good news is that landing page tests are cheap and can be iterated very quickly. In fact, there are now products and services, such as Unbounce and LaunchRock, that are set up strictly for this type of learning. If you think about it, Kickstarter and other crowdfunding sites are full of landing page MVPs, as demonstrated in Figure 5-3. The people who list products on those sites are looking for validation (in the form of financial backing) that they should invest in actually building their proposed ideas.
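
  If you do run a landing page test, here’s a rough sketch of how you might summarize the result (the numbers are invented). Because small samples are noisy, it reports a simple 95% confidence interval alongside the raw conversion rate:

    // Sketch: summarizing a landing page test. All figures are invented.
    // Uses the normal approximation for a 95% confidence interval, which is
    // reasonable once you have a few dozen conversions.
    function summarize(visitors: number, signups: number) {
      const rate = signups / visitors;
      const margin = 1.96 * Math.sqrt((rate * (1 - rate)) / visitors);
      return {
        rate,
        low: Math.max(0, rate - margin),
        high: Math.min(1, rate + margin),
      };
    }

    // e.g., 38 sign-ups from 1,200 visitors diverted from an existing workflow:
    console.log(summarize(1200, 38)); // ~3.2% conversion, roughly 2.2%-4.2%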

  Feature fake (aka the button to nowhere)

  Sometimes, the cost of implementing a feature is very high. In these cases, it is cheaper and faster to learn about the value of the idea by creating the appearance of the feature where none actually exists. HTML buttons, calls to action, and other prompts and links give your customers the illusion that a feature exists. Upon clicking or tapping the link, the user is notified that the feature is “coming soon” and that they will be alerted when it arrives. Feature fakes are like mini landing pages, in that they exist to measure interest. They should be used sparingly and taken down as soon as a success threshold has been reached. If you feel they might negatively affect your relationship with your customers, you can make it right by offering those who found your mousetrap a gift card or some other kind of compensation. And remember that they’re not right for every business or every context.
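
  As a sketch, a feature fake can be as small as the following (the feature name, endpoint, and button ID are placeholders we’ve invented). The click itself is the measurement; the customer is told the truth immediately afterward:

    // Sketch of a "button to nowhere." The feature name, endpoint, and
    // button ID are illustrative placeholders.
    document.querySelector<HTMLButtonElement>("#export-pdf")!.addEventListener("click", async () => {
      // The click is the demonstrated intent; record it.
      await fetch("/api/feature-interest", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ feature: "export-to-pdf", at: Date.now() }),
      });
      // Be honest rather than leaving a dead end, and take the fake down
      // once your success threshold for interest is reached.
      alert("Export to PDF is coming soon. We'll let you know when it's ready.");
    });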

 
