Super Crunchers

by Ian Ayres


  Yahoo! and Microsoft are desperately trying to play catch-up in this analytic competition. Google has deservedly become a verb. I’m frankly in awe of how it has improved my life. Nonetheless, we Internet users are fickle friends. The search engine that can best guess what we’re really looking for is likely to win the lion’s share of our traffic. If Microsoft or Yahoo! can figure out how to outcrunch Google, they will very quickly take its place. To the Super Crunching victor go the web traffic spoils.

  Guilt by Association

  The granddaddy of all of Google’s Super Crunching is its vaunted PageRank. Among all the web pages that include the word “kumquat,” Google will rank a page higher if more web pages link to it. To Google, every link to a page is a kind of vote for that web page. And not all votes are equal. Votes cast by web pages that are themselves important are weighted more heavily than links from web pages that have low PageRanks (because no one else links to them).

  Google found that web pages with higher PageRanks were more likely to contain the information that users are actually seeking. And it’s very hard for users to manipulate their own PageRank. Merely creating a bunch of new web pages that link to your home page won’t work because only links from web pages that themselves have reasonably high PageRanks will have an impact. And it’s not so easy to create web pages that other sites will actually link to.
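
  To make the mechanics concrete, here is a minimal Python sketch of the PageRank idea: each page’s score is built from the scores of the pages that link to it. The damping factor and the three-page graph are illustrative assumptions, not Google’s actual parameters.

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if not outgoing:
                    continue
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share  # each link is a weighted "vote"
            rank = new_rank
        return rank

    graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
    print(pagerank(graph))  # C, linked to by both A and B, ranks highest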

  The PageRank system is a form of what webheads call “social network analysis.” It’s a good kind of guilt by association. Social network analysis can also be used as a forensic tool by law enforcement to help identify actual bad guys.

  I’ve used this kind of data mining myself.

  A couple of years ago, my cell phone was stolen. I hopped on the Internet and downloaded the record of telephone calls that were made both to and from my phone. This is where network analysis came into play. The thief made more than a hundred calls before my service was cut off. Yet most of the calls were to or from just a few phone numbers. The thief made more than thirty calls to one phone number, and that number had called my phone several times as well. When I called that number, a voice mailbox told me that I’d reached Jessica’s cell phone. The third most frequent number connected me with Jessica’s mother (who was rather distraught to learn that her daughter had been calling a stolen phone).
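
  The heart of my analysis was nothing fancier than counting. A toy version in Python, with invented numbers standing in for the real call log:

    from collections import Counter

    # Each entry is one call to or from the stolen phone.
    call_log = ["555-0142", "555-0199", "555-0142", "555-0107",
                "555-0142", "555-0199", "555-0142", "555-0107"]

    for number, count in Counter(call_log).most_common(3):
        print(f"{number}: {count} calls")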

  Not all the numbers were helpful. The thief had called a local weather recording a bunch of times. By the fifth call, however, I found someone who said he’d help me get my phone back. And he did. A few hours later, he handed it back to me in a McDonald’s parking lot. Just knowing the telephone numbers that a bad guy calls can help you figure out who the bad guy is. In fact, cell phone records were used in just this way to finger the two men who killed Michael Jordan’s father.

  This kind of network analysis is also behind one of our nation’s efforts to smoke out terrorists. USA Today reported that the National Security Agency has been amassing a database with the records of two trillion telephone calls since 2001. We’re talking thousands of terabytes of information. By finding out who “people of interest” are calling, the NSA may be able to identify the players in a terrorist network and the structure of the network itself.

  Just as I used the pattern of phone records to identify the bad guy who stole my phone, Valdis Krebs used network analysis of public information to show that all nineteen of the 9/11 hijackers were within two email or phone-call connections of two al-Qaeda members whom the CIA already knew about before the attack. Of course, it’s a lot easier to see patterns after the fact, but just knowing a probable bad guy may be enough to put statistical investigators on the right track.

  The 64,000-terabyte question is whether it’s possible to start with just a single suspect and credibly identify a prospective conspiracy based on an analysis of social network patterns. The Pentagon is understandably not telling whether its data-mining contractors—which include our friend Teradata—have succeeded or not. Still, my own experience as a forensic economist working to smoke out criminal fraud makes me more sanguine that Super Crunching may prospectively contribute to homeland security.

  Looking for Magic Numbers

  A few years ago, Peter Pope, who was then the inspector general of the New York City School Construction Authority, called me and asked for help. The Construction Authority was spending about a billion dollars a year in a ten-year plan to renovate New York City schools. Many of the schools were in terrible disrepair and a lot of the money was being used on “envelope” work—roof and exterior repairs to maintain the integrity of the shell of the building. New York City had a long and sordid history of construction corruption and bid rigging, so the New York state legislature had created a new position of inspector general to put an end to inflated costs and waste.

  Peter was a recent law grad who was interested in doing a very different kind of public interest law. Making sure that construction auctions and contract change-orders are on the up-and-up is not as glamorous as taking on a death penalty case or making a Supreme Court oral argument, but Peter was trying to make sure that thousands of schoolchildren had a decent place to go to school. He and his staff were literally risking their lives. Organized crime is not happy when someone comes in and rocks the boat. Once Peter was on the scene, nothing was business as usual.

  Peter called me because he had discovered a specific type of fraud that had been taking place in some of his construction auctions. He called it the “magic number” scam.

  During the summer of 1992, Elias Meris, the principal owner of the Meris Construction Corporation, was under investigation by the Internal Revenue Service. Meris agreed, in exchange for IRS leniency, to wear a wire and provide information on a bid-rigging scam involving School Construction Authority employees and other contractors. Working undercover for prosecutors, Meris taped conversations with senior project officer John Dransfield and a contract specialist named Mark Parker.

  The contract specialist is the person who publicly opens the sealed bids of contractors one at a time at a procurement auction and reads out loud the price that a contractor has bid.

  In the “magic number” scam, the bribing bidder would submit a sealed bid with the absolute lowest price at which it would be willing to do the project. At the public bid openings, Parker would save the dishonest contractor’s bid for last and, knowing the current low bid, he would read aloud a false bid just below this price, so that the briber would win but would be paid just slightly less than the bidder who honestly should have won. Then Dransfield would use Wite-Out to doctor the briber’s bid—writing in the amount that Parker had read out loud. (If the lowest real bid turned out to be below the lowest amount at which the dishonest bidder wanted the job, the contract specialist wouldn’t use the Wite-Out and would just read the dishonest bidder’s written bid.) This “magic number” scam allowed dishonest bidders to win the contract whenever they were willing to do the job for less than the lowest true bid, but they would be paid the highest possible price.
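
  The scam’s decision rule is simple enough to write down. Here is a toy rendering in Python; the bid amounts and the $100 shave are invented for illustration.

    def read_aloud(honest_bids, briber_floor, shave=100):
        """Return the bid the corrupt specialist announces for the briber."""
        lowest_honest = min(honest_bids)
        if briber_floor < lowest_honest:
            return lowest_honest - shave  # fabricated bid, later written in with Wite-Out
        return briber_floor               # real sealed bid; no doctoring needed

    print(read_aloud([500_000, 520_000], briber_floor=450_000))  # 499900: the briber "wins"
    print(read_aloud([400_000, 520_000], briber_floor=450_000))  # 450000: the honest bidder wins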

  Pope’s investigation eventually implicated eleven individuals at seven contracting firms in the scam. Next time you’re considering renovating your New York pied-à-terre, you might want to avoid Christ Gatzonis Electrical Contractor Inc., GTS Contracting Corp., Batex Contracting Corp., American Construction Management Corp., Wolff & Munier Inc., Simins Falotico Group, and CZK Construction Corp. These seven firms used the “magic number” scam to win at least fifty-three construction auctions with winning bids totaling over twenty-three million dollars.

  Pope found these bad guys, but he called me to see if statistical analysis could point the finger toward other examples of “magic number” fraud. Together with auction guru Peter Cramton and a talented young graduate student named Alan Ingraham, we ran regressions to see if particular contract specialists were cheating.

  This is really looking for needles in a haystack. It is doubtful that a specialist would rig all of his auctions. The key for us was to look for auctions where the difference between the lowest and second-lowest bid was unusually small. Using statistical regressions that controlled for a host of other variables—including the number of bidders and an engineer’s pre-auction estimate of cost as well as the third-lowest bid placed in the auction—Alan Ingraham identified two new contract specialists who presided over auctions where there was a disturbingly small difference between the winning and the second-lowest bid. Without knowing even the names of the contract specialists (the inspector general’s data referred to them by number only), we were able to point the inspector general’s office in a new direction. Alan turned the work into two chapters of his doctoral dissertation. While the results of the inspector general’s investigation are confidential, Peter was deeply appreciative and earlier this year thanked me for “helping us catch two more crooks.”
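
  In rough Python form, the test looks something like the sketch below. The column names, the toy data, and the use of the statsmodels library are my illustrative assumptions; the regressions described above controlled for these variables, but the actual dataset and code are not reproduced here.

    import pandas as pd
    import statsmodels.formula.api as smf

    auctions = pd.DataFrame({
        "bid_gap":       [0.5, 4.2, 0.3, 3.8, 0.4, 5.1, 3.9, 0.6],
        "num_bidders":   [6, 4, 7, 5, 6, 3, 4, 7],
        "engineer_est":  [120, 90, 150, 80, 110, 95, 100, 130],
        "third_lowest":  [52, 48, 61, 44, 50, 47, 49, 55],
        "specialist_id": ["7", "3", "7", "3", "7", "3", "3", "7"],
    })

    model = smf.ols(
        "bid_gap ~ num_bidders + engineer_est + third_lowest + C(specialist_id)",
        data=auctions,
    ).fit()
    # A large negative coefficient on a specialist flags suspiciously tight auctions.
    print(model.params)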

  This “magic number” story shows how Super Crunching can reveal the past. Super Crunching can also predict what you will want and what you will do. The stories of eHarmony and Harrah’s, magic numbers, and Farecast all show how regressions have slipped the bounds of academia and are being used to predict all kinds of things.

  The regression formula is “plug and play”—plug in the specified attributes and, voilà, out pops your prediction. Of course, not all predictions are equally valuable. A river can’t rise above its source, and regression predictions can’t overcome insufficient data. If your dataset is too small, no regression in the world is going to make very accurate predictions. Still, unlike intuitivists, regressions know their own limitations and can answer Ed Koch’s old campaign question, “How am I doing?”
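
  “Plug and play” really is that mechanical. Once the coefficients have been estimated, a prediction is just a weighted sum; the coefficients and the customer below are invented for illustration.

    coefficients = {"intercept": 2.0, "age": 0.03, "income": 0.5}  # an invented fit
    customer = {"age": 40, "income": 55}                           # income in $000s

    prediction = coefficients["intercept"] + sum(
        coefficients[name] * value for name, value in customer.items())
    print(prediction)  # 2.0 + 0.03*40 + 0.5*55 = 30.7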

  CHAPTER 2

  Creating Your Own Data with the Flip of a Coin

  In 1925, Ronald Fisher, the father of modern statistics, formally proposed using random assignments to test whether particular medical interventions had some predicted effect. The first randomized trial on humans (of an early antibiotic against tuberculosis) didn’t take place until the late 1940s. But now, with the encouragement of the Food and Drug Administration, randomized tests have become the gold standard for proving whether or not medical treatments are efficacious.

  This chapter is about how business is playing catch-up. Smart businesses know that regression equations can help them make better predictions. But for the first time, we’re also starting to see businesses combine regression predictions with predictions based on their own randomized trials. Businesses are starting to go out and create their own data by flipping coins. We’ll see that randomized testing is becoming an important tool for data-driven decision making. Like the new regression studies, it’s Super Crunching to answer the bottom-line questions of what works. The poster child for the power of combining these two core Super Crunching tools is a company that made famous the question “What’s in Your Wallet?”

  Capital One, one of the nation’s largest issuers of credit cards, has been at the forefront of the Super Crunching revolution. More than 2.5 million people call CapOne each month. And they’re ready for your call.

  When you call CapOne, a recording immediately prompts you to enter your card number. Even before the service representative’s phone rings, a computer algorithm kicks in and analyzes dozens of characteristics about the account and about you, the account holder. Super Crunching sometimes lets them answer your question even before you ask it.

  CapOne found that some customers call each month just to find out their balance or to see whether their payment has arrived. The computer keeps track of who makes these calls, and routes them to an automated system that answers the phone this way: “The amount now due on your account is $164.27. If you have a billing question, press 1….” Or: “Your last payment was received on February 9. If you need to speak with a customer-service representative, press 1….” A phone call that might have taken twenty or thirty seconds, or even a minute, now lasts just ten seconds. Everyone wins.

  Super Crunching also has transformed customer service calls into a sales opportunity. Data analysis of customer characteristics generates a list of products and services that this kind of consumer is most willing to buy, and the service rep sees the list as soon as she takes the call. It’s just like Amazon’s “customers who like this, also like this” feature, but transmitted through the rep. Capital One now makes more than a million sales a year through customer-service marketing—and their data-mining predictions are the big reason why. Again, everybody wins.

  But maybe not equally. CapOne gives itself the lion’s share of the gains whenever possible. For example, a statistically validated algorithm kicks in whenever a customer tries to cancel her account. If the customer is not so valuable, she is routed to an automated service where she can press a few buttons and cancel. If the customer has been (or is predicted to be) profitable, the computer routes her to a “retention specialist” and generates a list of sweeteners that can be offered.
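
  A hedged sketch of that routing logic, with an invented value cutoff and invented sweeteners:

    def route_cancellation(predicted_annual_value):
        """Route a cancellation call based on the customer's predicted value."""
        if predicted_annual_value < 100:  # invented cutoff for "not worth retaining"
            return "automated_cancel", []
        sweeteners = ["12.9% rate", "10.9% rate", "9.9% rate"]  # best saved for last
        return "retention_specialist", sweeteners

    print(route_cancellation(40))   # ('automated_cancel', [])
    print(route_cancellation(850))  # specialist, with offers ready to read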

  When Nancy from North Carolina called to close her account because she felt her 16.9 percent interest rate was too high, CapOne routed her call to a retention specialist named Tim Gorman. CapOne’s computer automatically showed Tim a range of three lower interest rates—ranging from 9.9 percent to 12.9 percent—that he could offer to keep Nancy’s business.

  When Nancy claimed on the phone that she just got a card with a 9.9 percent rate, Tim responded with “Well, ma’am, I could lower your rate to 12.9 percent.” Because of Super Crunching, CapOne knows that a lot of people will be satisfied with this reduction (even when they say they’ve been offered a lower rate from another card). And when Nancy accepts the offer, Tim gets an immediate bonus. Everyone wins. But because of data mining, CapOne wins a bit more.

  CapOne Rolls the Dice

  What really sets CapOne apart is its willingness to literally experiment. Instead of being satisfied with a historical analysis of consumer behavior, CapOne proactively intervenes in the market by running randomized experiments.

  In 2006, it ran more than 28,000 experiments—28,000 tests of new products, new advertising approaches, and new contract terms.

  Is it more effective to print on the outside envelope “LIMITED TIME OFFER” or “2.9 Percent Introductory Rate!”? CapOne answers this question by randomly dividing prospects into two groups and seeing which approach has the higher success rate.

  It seems too simple. Yet having a computer flip a coin and treating prospects who come up heads differently than the ones who come up tails is the core idea behind one of the most powerful Super Crunching techniques ever devised.
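
  A simulation makes the simplicity plain. In the Python sketch below, the two response probabilities are invented stand-ins for the very unknowns a real test is trying to measure.

    import random
    random.seed(42)

    true_rate = {"LIMITED TIME OFFER": 0.020,
                 "2.9 Percent Introductory Rate!": 0.026}
    responses = {message: [] for message in true_rate}

    for _ in range(100_000):
        message = random.choice(list(true_rate))  # the computerized coin flip
        responses[message].append(random.random() < true_rate[message])

    for message, outcomes in responses.items():
        print(f"{message}: {sum(outcomes) / len(outcomes):.3%} responded")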

  When you rely on historical data, it is much harder to tease out causation. A miner of historical data who wants to find out whether chemotherapy worked better than radiation would need to control for everything else, such as patient attributes, environmental factors—really anything that might affect the outcome. In a large random study, however, you don’t need these controls. Instead of controlling for whether the patients smoked or had a prior stroke, we can trust that in a large randomized division, about the same proportion of smokers will show up in each treatment type.

  The sample size is the key. If we get a large enough sample, we can be pretty sure that the group coming up heads will be statistically identical to the group coming up tails. If we then intervene to “treat” the heads differently, we can measure the pure effect of the intervention. Super Crunchers call this the “treatment effect.” It’s the causal holy grail of number crunching: after randomization makes the two groups identical on every other dimension, we can be confident that any change in the two groups’ outcome was caused by their different treatment.
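
  You can see the balancing act in a few lines of simulation. With 100,000 coin flips, the smoker proportion (invented here as 25 percent of the population) comes out nearly identical in the two groups without any explicit controls:

    import random
    random.seed(0)

    heads, tails = [], []
    for _ in range(100_000):
        smoker = random.random() < 0.25  # 25% smokers in the population (invented)
        (heads if random.random() < 0.5 else tails).append(smoker)

    print(f"heads group: {sum(heads) / len(heads):.2%} smokers")
    print(f"tails group: {sum(tails) / len(tails):.2%} smokers")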

  CapOne has been running randomized tests for a long time. Way back in 1995, it ran an even larger experiment by generating a mailing list of 600,000 prospects. It randomly divided this pool of people into groups of 100,000 and sent each group one of six different offers that varied the size and duration of the teaser rate. Randomization let CapOne create two types of data. Initially the computerized coin flip was itself a type of data that CapOne created and then relied upon to decide whether to assign a prospect to a particular group. More importantly, the response of these groups was new data that only existed because the experiment artificially perturbed the status quo. Comparing the average response rate of these statistically similar groups let CapOne see the impact of making different offers. Because of this massive study, CapOne learned that offering a teaser rate of 4.9 percent for six months was much more profitable than offering a 7.9 percent rate for twelve months.

  Academics have been running randomized experiments inside and outside of medicine for years. But the big change is that businesses are relying on them to reshape corporate policy. They can see what works best and immediately change their corporate strategy. When an academic publishes a paper showing that there’s point shaving in basketball, nothing much changes. Yet when a business invests tens of thousands of dollars in a randomized test, it does so because it’s willing to be guided by the results.

  Other companies are starting to get in on the act. In South Africa, Credit Indemnity is one of the largest micro-lenders, with over 150 branches throughout the country. In 2004, it used randomized trials to help market its “cash loans.” Like payday loans in the U.S., cash loans are short-term, high-interest credit for the “working poor.” These loans are big business in South Africa, where at any time as many as 6.6 million people borrow. The typical loan is only R1000 ($150), about a third of the borrower’s monthly income.

 
