XYZ Hypothesis
Let’s combine our Thoughtland-based vision (which already includes some numbers) with our MEH, so we can express the latter in XYZ Hypothesis format (“at least X% of Y will Z”):
At least 2% of working professionals with daily commutes of one hour or longer each way will pay $3,000 to take an accredited ten-week class on BusU at least once a year.
Nice! We’ve shaved off a lot of fuzz, made our implicit assumptions explicit, and included an educated guess at some numbers that we can use in our experiments.
If we can get at least 2% of our target market to take a $3,000 BusU class once a year, we’ll have a good foundation for making BusU a viable and valuable business. But that’s a big if. It all sounds very promising, reasonable, and plausible; remember, however, that we are still in Thoughtland at this point and, for all we know, we may be about to fall into a false-positive trap. Time for our idea to pack its bags, say good-bye to Thoughtland, and get ready to be tested in the real world. Time to hypozoom.
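If you want to see what those numbers imply, here’s a minimal back-of-the-envelope sketch in Python. Only the 2% and the $3,000 come from the hypothesis itself; the target-market size is a made-up placeholder, not a researched figure:

```python
# Back-of-the-envelope check of the XYZ Hypothesis.
target_market = 500_000   # ASSUMED: qualifying long-commute professionals
x = 0.02                  # X: at least 2% of Y...
price = 3_000             # ...will pay $3,000 at least once a year

students_per_year = target_market * x          # 10,000
revenue_per_year = students_per_year * price   # $30,000,000
print(f"Implied students/year: {students_per_year:,.0f}")
print(f"Implied revenue/year: ${revenue_per_year:,.0f}")
```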
From XYZ to xyz
Our next step is to take our XYZ Hypothesis and turn it into three xyz hypotheses that we can test quickly and easily with pretotyping. We want to minimize our investment of time and money by using what we already have available to us—and near us. Time to put the “think globally, test locally” tactic into action.
I begin by looking at my current situation and resources to see what I can leverage and put to good use:
I live in Mountain View, California, home to Google and LinkedIn. Both companies employ thousands of professionals and offer them education subsidies and tuition reimbursements.
Many Google and LinkedIn employees who work in Mountain View live in San Francisco and commute to work either by car or on company-sponsored buses.
I have many friends who work at Google and a few who work at LinkedIn.
I have a great relationship with several top-notch Stanford professors.
Bottom line: I have access to a wealth of resources I can use for my first experiments. Many of the people who work at Google and LinkedIn are engineers and, since most engineers are eager learners, they are a great subset of BusU’s target market for our initial test. But should I use Google or LinkedIn (or both) for our first set of experiments? To help me decide, I can use the Distance to Data metric.
Geographically speaking, Google’s and LinkedIn’s offices are both less than 10 miles from my home and within a couple of miles of each other—so that’s a tie. But since I know many more people at Google than I do at LinkedIn, if I measure DTD in terms of the number of emails I will have to send in order to reach the right people, then Google emerges as the better choice. I can save my LinkedIn contacts for future experiments to see if they confirm the data from my experiments with Google employees. So my initial zoomed-in target market (i.e., the y in xyz) will be Google engineers who live in San Francisco and work at the company’s Mountain View campus.
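If it helps to see that DTD comparison spelled out, here is a toy sketch of the same reasoning; the email counts are illustrative stand-ins for “many friends” versus “a few,” not measured figures:

```python
# Distance to Data (DTD), scored two ways as described above.
dtd = {
    "Google":   {"miles": 10, "emails_to_right_people": 1},  # many friends
    "LinkedIn": {"miles": 10, "emails_to_right_people": 3},  # a few contacts
}
# The mileage is a tie, so the fewest emails wins.
best = min(dtd, key=lambda co: dtd[co]["emails_to_right_people"])
print("Lower DTD:", best)   # -> Google
```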
Great! Now that we have an easy-to-reach target market, we can use it to hypozoom. Here are three xyz hypotheses I can test using this group:
xyz1: At least 40% of Google engineers commuting from San Francisco to Mountain View who hear about BusU will visit the BusU4Google.com website and submit their google.com email address to be informed of upcoming classes.
* * *
xyz2: At least 20% of Google engineers commuting from San Francisco to Mountain View will attend a one-hour lunchtime presentation to learn more about BusU.
* * *
xyz3: At least 10% of Google engineers commuting from San Francisco to Mountain View will pay $300 for a one-week “Introduction to Artificial Intelligence” class on the bus taught by a Stanford AI professor.
Before we go further, let me address a question you may have about the numbers I’ve chosen for the value of x in these xyz hypotheses. I used 2% for X in the XYZ Hypothesis, so why do I use 40%, 20%, and 10% for x in the hypozoomed xyz hypotheses?
I did that because I am taking into account what is commonly called the conversion funnel. Not every person who walks into a store, attends a free seminar, or visits an ecommerce website becomes a paying customer. Quite the opposite. Data consistently shows that only a small fraction of people who sign up for free trials or seminars or visit a website convert to paying customers.
Out of every hundred people you invite to learn more about your new product, you’d be lucky if you got five of them to accept or follow up on your invitation (e.g., by coming to your store or visiting your website for more information). And out of those five or so, perhaps only one or two of them will end up making a purchase or some other form of commitment. Perhaps none of them will.
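As a minimal illustration of that funnel arithmetic (the rates below are the rough figures from the paragraph above, not data):

```python
# The conversion funnel sketched above: invite 100 people, hope
# that ~5 follow up, and that 1-2 of those commit.
invited = 100
follow_up_rate = 0.05   # "you'd be lucky if you got five of them"
commit_rate = 0.30      # ~1-2 of those 5; an assumed midpoint

followed_up = invited * follow_up_rate   # ~5 people
committed = followed_up * commit_rate    # ~1.5 people
print(f"{invited} invited -> {followed_up:.0f} follow up -> "
      f"{committed:.1f} commit")
```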
The other reason those three xyz hypotheses have different values for x is that each hypothesis involves a different amount of skin in the game:
xyz1: 1 point for a valid email address
xyz2: 60 points for an hour of time to attend a presentation
xyz3: 900 points for committing to pay $300 and attend a one-week class (300 points for the $300; 600 points for spending at least ten hours attending a class on a bus)
As the amount of skin in the game increases, you must anticipate that the number of people who sign up will decrease. Remember also that all of these numbers are, for now, just educated guesses—a starting point. Our experiments will tell us whether or not these figures are even in the ballpark; if they are not, we will revise our working hypothesis and plan the next steps accordingly.
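Those point values follow a consistent, if informal, scale. Here’s a minimal sketch assuming roughly one point per valid email, one per minute of time, and one per dollar (weights inferred from the three examples, not stated as a formal rule):

```python
# Skin-in-the-game points as implied by the list above.
def skin_in_the_game(emails=0, minutes=0, dollars=0):
    # 1 point per email, per minute, and per dollar (inferred weights).
    return emails + minutes + dollars

print(skin_in_the_game(emails=1))                   # xyz1 -> 1
print(skin_in_the_game(minutes=60))                 # xyz2 -> 60
print(skin_in_the_game(minutes=600, dollars=300))   # xyz3 -> 900
```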
Even after this explanation, you may still have some issues with how accurate or valid you think these hypotheses are. You might have even thought of much better hypotheses we could have used. Or perhaps you believe that my numbers are unrealistic or way off.
Fantastic! That’s the whole point of going through the effort of turning high-level ideas into detailed XYZ and xyz hypotheses. That’s why we say it with numbers. Those initial numbers are only guesstimates for now (a guesstimate is a combination of a guess and an estimate). They are meant to stimulate discussion and bring differences of opinion and disagreements to the surface, so we can resolve them. If this were a real situation and we were part of a team exploring the idea for BusU, we’d invest a couple of hours discussing several possible xyz hypotheses before converging on a few. And then, after a couple of experiments and some further thought, we would probably revise them once again—or come up with completely new ones. That’s how it works. That’s how it’s supposed to work.
So with the understanding that these may not be the best possible xyz hypotheses or the right initial numbers and that you’d handle this better or differently, let’s proceed.
Time to Test
We have three xyz hypotheses to choose from, and we have to pick one of them to pretotype for our first test. Each one of these hypotheses will provide us with valuable data, but which one should we start with?
Choosing Our First Pretotype
We’ve already applied the “test locally” tactic, so let’s see how the “testing now beats testing later” and “think cheap, cheaper, cheapest” tactics can help us determine where to start. To do that, we’ll evaluate and score each xyz hypothesis in terms of Hours to Data and Dollars to Data. Let’s begin.
xyz1: At least 40% of Google engineers commuting from San Francisco to Mountain View who hear about BusU will visit the BusU4Google.com website and will submit their google.com email address to be informed of upcoming classes.
To test xyz1, all we need to do is reach at least 100 Google engineers and develop a simple website. We estimate that this will take a couple of days at most and cost just a few dollars. The time and cost estimate for xyz1 is:
Hours to Data: about 48
Dollars to Data: less than $100
That’s pretty darn good.
Let’s see how xyz2 fares:
xyz2: At least 20% of Google engineers commuting from San Francisco to Mountain View will attend a one-hour lunchtime presentation to learn more about BusU.
To test xyz2, we still need access to 100 Google engineers, but we also need at least a few hours to create the presentation and a couple of weeks to get it on Google’s calendar, send out the announcements, and so on. This experiment will get us more skin in the game, but it will also require more time and effort to set up than xyz1. The time and cost estimate for xyz2 is:
Hours to Data: at least 336 (two weeks)
Dollars to Data: less than $100
Not bad. But “testing now beats testing later,” and xyz1 can get us YODA faster than xyz2. So that is still our top choice for the first pretotyping experiment.
What about xyz3?
xyz3: At least 10% of Google engineers commuting from San Francisco to Mountain View will pay $300 for a one-week “Introduction to Artificial Intelligence” class on the bus taught by a Stanford AI professor.
This would require even more time and effort than xyz2. We’d have to line up (and possibly pay) a professor willing to do this, rent a bus, and so on. The time and cost estimate for xyz3 is:
Hours to Data: at least 672 (four weeks)
Dollars to Data: more than $5,000
Compared to most market-research budgets, this qualifies as fast and cheap. But for us pretotypers, at this stage that’s way too much time and way too much money. Remember, “Testing now beats testing later” and “Think cheap, cheaper, cheapest.” At this point xyz1 is our fastest and cheapest route to YODA.
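To sum up the comparison, here’s a small sketch that simply re-scores the three hypotheses with the estimates above (the “less than”/“more than” bounds are used as point values):

```python
# The three candidates, scored with the estimates given above.
candidates = {
    "xyz1": {"hours_to_data": 48,  "dollars_to_data": 100},
    "xyz2": {"hours_to_data": 336, "dollars_to_data": 100},
    "xyz3": {"hours_to_data": 672, "dollars_to_data": 5_000},
}
# "Testing now beats testing later": sort by hours first; then
# "think cheap, cheaper, cheapest": break ties on dollars.
ranked = sorted(candidates,
                key=lambda h: (candidates[h]["hours_to_data"],
                               candidates[h]["dollars_to_data"]))
print("Test first:", ranked[0])   # -> xyz1
```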
By applying our tactics and associated metrics, we’ve identified an xyz hypothesis (xyz1) that will give us our first taste of data in a couple of days and for just a fistful of dollars. If our initial experiments validate xyz1, we will then have YODA to justify investing a bit more time and money to test xyz2. And if xyz2 is validated, we can then justify investing a lot more time and money to test xyz3. But let’s not get ahead of ourselves. We have a great starting point, so let’s move on to pretotyping.
Executing Our First Pretotyping Experiment
Here’s the first hypothesis we will put to the test:
xyz1: At least 40% of Google engineers commuting from San Francisco to Mountain View who hear about BusU will visit the BusU4Google.com website and will submit their google.com email address to be informed of upcoming classes.
The next question I ask myself is this: What’s the best way to reach our y, the target market of Google engineers who commute from San Francisco to Mountain View?
Knowing Google, I assume the company has an internal website or mailing list for people who want to carpool with other Google employees or ride the Google buses. I ask one of my friends currently working at the company to see if that’s the case. He reports back to me with a list of several such resources, including an unofficial (i.e., employee managed) mailing list called MTVCarPoolers with over 1,600 members. Bingo!
I get an introduction to Beth, the list administrator, and we schedule a meeting so I can share the BusU idea with her. Beth loves the concept and agrees to help me test it. She confirms that more than half of the list members (820 of them to be exact) commute from San Francisco to the main Google campus in Mountain View. Perfect. I have potential customers for running at least eight different tests with 100 people each. (Since I am using a group of people that is representative of my hypothesized target market, 100 people is an adequate sample size for the results to be statistically meaningful.)
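For the statistically curious, here is a standard rough check (not part of the original reasoning above) of what a sample of 100 can resolve, using the normal approximation for a proportion:

```python
# 95% margin of error for a proportion measured on n = 100 people.
import math

n = 100
p = 0.40   # the response rate hypothesized in xyz1
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.0%}")   # about 40% +/- 10%
```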
Now that I have an easy (and free) way to reach my target market, we need to create a pretotype website to establish a web presence we can use to collect our first YODA. I buy a suitable domain name and use one of the many drag-and-drop website development services (e.g., Squarespace, Weebly, Wix) to create a three-page website that introduces and explains the BusU concept. The total investment is $20 and about two hours of my time—easy, fast, and cheap.
The website describes how the BusU service works and lists examples of the classes that will be offered and the bios of the professors who will teach them. Visitors to the site are asked to fill in the following form (skin in the game) if they want more information:
Name:
Email:
Google job title:
Classes and topics you are interested in:
Comments or questions:
I show the pretotype website and the form to Beth. She asks me to make a change; she wants us to be up front with the fact that the service is not yet available but is currently being explored and that it is likely to happen if there is enough interest in it. Beth understands the importance of testing the idea first, but she wants to be as forthright and as ethical as possible. No problem. In fact, I thank her for bringing this up. I make those changes, and then we work together to compose the email message we will send to the first 100 Google employees on the San Francisco-to-Mountain View commuter list.
The next morning Beth sends out the email. In less than four hours, 88 people visit the website and 62 of them fill in the form. Woo-hoo! This is great. I had estimated 40% of the people would fill out the email form, and we got 62%.
But when I look at the data more closely, I notice that almost everyone who submitted a form asks whether the service is free and, if not, how much it will cost and whether Google will help pay for it. Oops! I guess we should make those monetary points clearer in our email or on the website, especially since Google workers are accustomed to free meals, free massages, and many other perks.
I talk to Beth about the feedback, and we agree that for the next experiment I should edit both the email and website, being up front about the cost ($3,000 for a ten-week class—which includes the bus ride). We also make it clear that at this time this is not a Google-approved continuing education program and that the students are responsible for the full tuition.
But before we tackle the next experiment, let’s discuss how we would score this first experiment on the TRI Meter and what, if anything, we need to tweak in our hypotheses.
Analyzing and Iterating
Hypothesis xyz1 predicted a 40% response, and our test returned a very healthy 62%. This result exceeds our expectations and indicates a strong level of interest. Normally, a number this good (i.e., substantially better than our estimate) is an indication that an idea is Very Likely to succeed. But since we did not mention the hefty $3,000 price tag in our email and many Google workers may have assumed that the BusU classes would be free (or be reimbursed by Google), I decide to be conservative in my interpretation of the result. Instead of Very Likely, I score it as Likely.
I could have been even more conservative and scored the experimental result as 50/50 or tossed it away as bad data. But getting a greater than 60% response to any idea in any market happens so rarely that I decided to take it as evidence of strong market interest, and “If there’s a market, there’s a way.” The next set of experiments will determine whether I am right or wrong.
We have an encouraging first result, but the big black arrow on the TRI Meter is doing its job. It’s keeping us grounded in reality. It reminds us that most new ideas will fail in the market and that we need to have several more successful experiments—a preponderance of positive evidence—to balance out the Law of Market Failure.
In the meantime, Beth meets with Google’s continuing education program manager and learns that Google will not consider BusU an approved educational expense until it is more established and has proven to provide comparable value to offerings by more traditional and already accredited educational institutions. Ouch. I ask a friend who works at LinkedIn to see if his company has a similar policy. Alas, the answer he comes back with is not what we want to hear: “They say that you need to have some track record and/or accreditation before they’ll consider it a refundable employee education expense.”
Darn! It looks like we should remove the possibility of employer subsidy from our hypothesis—at least at first. That’s disappointing, but “now beats later” also applies to bad news—better to know now, rather than later, that we cannot count on company subsidies until BusU is more established.
I edit both the email and the website. I make the $3,000 tuition cost explicit and make it clear that—at least for now—BusU classes are not eligible for Google’s educational reimbursement. Then I cross my fingers and send out the next batch of 100 emails.
This time the results are less encouraging: only 42 visits to the website and, out of those, only 22 people submitted a filled-in form. I am disappointed but not surprised. When people are faced with the $3,000 cost and the lack of company sponsorship (at least for the time being), it’s normal that fewer would be interested.
We had a plan, but we got punched in the face—not by Mike Tyson, but by the market. What now? Time to put the “plastic” in plastic tactics into action.
The 22% response figure is approximately half our initial estimate of 40%. But considering that the 22 people who replied were okay with paying $3,000 for the class out of their own pocket, I conclude that this number is actually quite good—something we can work with. Are we just lowering our expectations? No! We are testing, calibrating, and adjusting our initial assumptions against reality. We are not “cheating.” It is true that we are getting a lower rate of response, but those who are responding are willing to put more of their own skin in the game (i.e., their own money instead of Google’s).
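Laid out side by side, the two funnels look like this (a small sketch using only the counts reported in the two experiments):

```python
# The two experiments' funnels, from the counts reported above.
experiments = {
    "no price mentioned": {"emailed": 100, "visited": 88, "submitted": 62},
    "$3,000, no subsidy": {"emailed": 100, "visited": 42, "submitted": 22},
}
for label, e in experiments.items():
    rate = e["submitted"] / e["emailed"]
    print(f"{label}: {e['visited']} visits, {rate:.0%} submitted the form")
```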
To reflect this change, we can tweak xyz1 as follows:
Lower our expected market engagement rate (x) from 40% to 20%
Mention the $3,000 cost
State up front that the courses are not eligible for company reimbursement
Here’s what this tweaked xyz1 would look like:
xyz1A: At least 20% of Google engineers commuting from San Francisco to Mountain View who hear about BusU’s $3,000 (not eligible for company reimbursement) courses will visit the BusU4Google.com website and will submit their google.com email address to be informed of upcoming classes.
Alternatively, we could:
Keep the 40% number