by Cathy O'Neil
But if you’re looking for a job, there’s an excellent chance that a missed credit card payment or late fees on student loans could be working against you. According to a survey by the Society for Human Resource Management, nearly half of America’s employers screen potential hires by looking at their credit reports. Some of them check the credit status of current employees as well, especially when they’re up for a promotion.
Before companies carry out these checks, they must first ask for permission. But that’s usually little more than a formality; at many companies, those refusing to surrender their credit data won’t even be considered for jobs. And if their credit record is poor, there’s a good chance they’ll be passed over. A 2012 survey on credit card debt in low- and middle-income families made this point all too clear. One in ten participants reported hearing from employers that blemished credit histories had sunk their chances, and it’s anybody’s guess how many were disqualified by their credit reports but left in the dark. While the law stipulates that employers must alert job seekers when credit issues disqualify them, it’s hardly a stretch to believe that some of them simply tell candidates that they weren’t a good fit or that others were more qualified.
The practice of using credit scores in hiring and promotions creates a dangerous poverty cycle. After all, if you can’t get a job because of your credit record, that record will likely get worse, making it even harder to land work. It’s not unlike the problem young people face when they look for their first job—and are disqualified for lack of experience. Or the plight of the longtime unemployed, who find that few will hire them because they’ve been without a job for too long. It’s a spiraling and defeating feedback loop for the unlucky people caught up in it.
Employers, naturally, have little sympathy for this argument. Good credit, they argue, is an attribute of a responsible person, the kind they want to hire. But framing debt as a moral issue is a mistake. Plenty of hardworking and trustworthy people lose jobs every day as companies fail, cut costs, or move jobs offshore. These numbers climb during recessions. And many of the newly unemployed find themselves without health insurance. At that point, all it takes is an accident or an illness for them to miss a payment on a loan. Even with the Affordable Care Act, which reduced the ranks of the uninsured, medical expenses remain the single biggest cause of bankruptcies in America.
People with savings, of course, can keep their credit intact during tough times. Those living from paycheck to paycheck are far more vulnerable. Consequently, a sterling credit rating is not just a proxy for responsibility and smart decisions. It is also a proxy for wealth. And wealth is highly correlated with race.
Consider this. As of 2015, white households held on average roughly ten times as much money and property as black and Hispanic households. And while only 15 percent of whites had zero or negative net worth, more than a third of black and Hispanic households found themselves with no cushion. This wealth gap increases with age. By their sixties, whites are eleven times richer than African Americans. Given these numbers, it is not hard to argue that the poverty trap created by employer credit checks affects society unequally and along racial lines. As I write this, ten states have passed legislation to outlaw the use of credit scores in hiring. In banning them, the New York City government declared that using credit checks “disproportionately affects low-income applicants and applicants of color.” Still, the practice remains legal in forty states.
This is not to say that personnel departments across America are intentionally building a poverty trap, much less a racist one. They no doubt believe that credit reports hold relevant facts that help them make important decisions. After all, “The more data, the better” is the guiding principle of the Information Age. Yet in the name of fairness, some of this data should remain uncrunched.
Imagine for a moment that you’re a recent graduate of Stanford University’s law school and are interviewing for a job at a prestigious law firm in San Francisco. The senior partner looks at his computer-generated file and breaks into a laugh. “It says here that you’ve been arrested for running a meth lab in Rhode Island!” He shakes his head. Yours is a common name, and computers sure make silly mistakes. The interview proceeds.
At the high end of the economy, human beings tend to make the important decisions, while relying on computers as useful tools. But in the mainstream and, especially, in the lower echelons of the economy, much of the work, as we’ve seen, is automated. When mistakes appear in a dossier—and they often do—even the best-designed algorithms will make the wrong decision. As data hounds have long said: garbage in, garbage out.
A person at the receiving end of this automated process can suffer the consequences for years. Computer-generated terrorism no-fly lists, for example, are famously rife with errors. An innocent person whose name resembles that of a suspected terrorist faces a hellish ordeal every time he has to get on a plane. (Wealthy travelers, by contrast, are often able to pay to acquire “trusted traveler” status, which permits them to waltz through security. In effect, they’re spending money to shield themselves from a WMD.)
Mistakes like this pop up everywhere. The Federal Trade Commission reported in 2013 that 5 percent of consumers—or an estimated ten million people—had an error on one of their credit reports serious enough to result in higher borrowing costs. That’s troublesome, but at least credit reports exist in the regulated side of the data economy. Consumers can (and should) request to see them once a year and amend potentially costly errors.*
Still, the unregulated side of the data economy is even more hazardous. Scores of companies, from giants like Acxiom Corp. to a host of fly-by-night operations, buy information from retailers, advertisers, smartphone app makers, and companies that run sweepstakes or operate social networks in order to assemble a cornucopia of facts on every consumer in the country. They might note, for example, whether a consumer has diabetes, lives in a house with a smoker, drives an SUV, or owns a pair of collies (who may live on in the dossier long after their earthly departure). These companies also scrape all kinds of publicly available government data, including voting and arrest records and housing sales. All of this goes into a consumer profile, which they sell.
Some data brokers, no doubt, are more dependable than others. But any operation that attempts to profile hundreds of millions of people from thousands of different sources is going to get a lot of the facts wrong. Take the case of a Philadelphian named Helen Stokes. She wanted to move into a local senior living center but kept getting rejected because of arrests on her background record. It was true that she had been arrested twice during altercations with her former husband. But she had not been convicted and had managed to have the records expunged from government databases. Yet the arrest records remained in files assembled by a company called RealPage, Inc., which provides background checks on tenants.
For RealPage and other companies like it, creating and selling reports brings in revenue. People like Helen Stokes are not customers. They’re the product. Responding to their complaints takes time and costs money. After all, while Stokes might say that the arrests have been expunged, verifying that fact eats up time and money. An expensive human being might have to spend a few minutes on the Internet or even—heaven forbid—make a phone call or two. Little surprise, then, that Stokes didn’t get her record cleared until she sued. And even after RealPage responded, how many other data brokers might still be selling files with the same poisonous misinformation? It’s anybody’s guess.
Some data brokers do offer consumers access to their data. But these reports are heavily curated. They include the facts but not always the conclusions data brokers’ algorithms have drawn from them. Someone who takes the trouble to see her file at one of the many brokerages, for example, might see the home mortgage, a Verizon bill, and a $459 repair on the garage door. But she won’t see that she’s in a bucket of people designated as “Rural and Barely Making It,” or perhaps “Retiring on Empty.” Fortunately for the data brokers, few of us get a chance to see these details. If we did (and the FTC is pushing for more accountability), the brokers would likely find themselves besieged by consumer complaints—millions of them. It could very well disrupt their business model. For now, consumers learn about their faulty files only when word slips out, often by chance.
An Arkansas resident named Catherine Taylor, for example, missed out on a job at the local Red Cross several years ago. Those things happen. But Taylor’s rejection letter arrived with a valuable nugget of information. Her background report included a criminal charge for the intent to manufacture and sell methamphetamines. This wasn’t the kind of candidate the Red Cross was looking to hire.
Taylor looked into it and discovered that the criminal charges belonged to another Catherine Taylor, who happened to be born on the same day. She later found that at least ten other companies were tarring her with inaccurate reports—one of them connected to her application for federal housing assistance, which had been denied. Was the housing rejection due to a mistaken identity?
In an automatic process, it no doubt could have been. But a human being intervened. When applying for federal housing assistance, Taylor and her husband met with an employee of the housing authority to complete a background check. This employee, Wanda Taylor—no relation—was using information provided by Tenant Tracker, the data broker. It was riddled with errors and blended identities. It linked Taylor, for example, with the possible alias of Chantel Taylor, a convicted felon who happened to be born on the same day. It also connected her to the other Catherine Taylor she had heard about, who had been convicted in Illinois of theft, forgery, and possession of a controlled substance.
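To see how a dossier like that gets assembled in the first place, here is a minimal sketch of the kind of loose record matching that blends identities. The records, dates, and matching rule below are hypothetical illustrations, not Tenant Tracker’s actual data or logic.

```python
from collections import defaultdict

# Hypothetical records scraped from different sources; the birth date and
# notes are placeholders, not real data.
records = [
    {"name": "Catherine Taylor", "dob": "1966-01-01", "note": "housing applicant, Arkansas"},
    {"name": "Catherine Taylor", "dob": "1966-01-01", "note": "theft and forgery convictions, Illinois"},
    {"name": "Chantel Taylor",   "dob": "1966-01-01", "note": "convicted felon, flagged as possible alias"},
]

def merge_key(record):
    """Naive rule: same surname plus same birth date means same person."""
    surname = record["name"].split()[-1].lower()
    return (surname, record["dob"])

profiles = defaultdict(list)
for r in records:
    profiles[merge_key(r)].append(r)

for key, merged in profiles.items():
    print(key, "->", [m["note"] for m in merged])
# All three records collapse into a single profile: the blended dossier that
# a careful reviewer has to untangle by hand.
```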
The dossier, in short, was a toxic mess. But Wanda Taylor had experience with such things. She began to dig through it. She promptly drew a line through the possible alias, Chantel, which seemed improbable to her. She read in the file that the Illinois thief had a tattoo on her ankle with the name Troy. After checking Catherine Taylor’s ankle, she drew a line through that felon’s name as well. By the end of the meeting, one conscientious human being had cleared up the confusion generated by web-crawling data-gathering programs. The housing authority knew which Catherine Taylor it was dealing with.
The question we’re left with is this: How many Wanda Taylors are out there clearing up false identities and other errors in our data? The answer: not nearly enough. Humans in the data economy are outliers and throwbacks. The systems are built to run automatically as much as possible. That’s the efficient way; that’s where the profits are. Errors are inevitable, as in any statistical program, but the quickest way to reduce them is to fine-tune the algorithms running the machines. Humans on the ground only gum up the works.
This trend toward automation is leaping ahead as computers make sense of more and more of our written language, in some cases processing thousands of written documents in a second. But they still misunderstand all sorts of things. IBM’s Jeopardy!-playing supercomputer Watson, for all its brilliance, was flummoxed by language or context about 10 percent of the time. It was heard saying that a butterfly’s diet was “Kosher,” and it once confused Oliver Twist, the Charles Dickens character, with the 1980s techno-pop band the Pet Shop Boys.
Such errors are sure to pile up in our consumer profiles, confusing and misdirecting the algorithms that manage more and more of our lives. These errors, which result from automated data collection, poison predictive models, fueling WMDs. And this collection will only grow. Computers are already busy expanding beyond the written word. They’re harvesting spoken language and images and using them to capture more information about everything in the universe—including us. These new technologies will mine new troves for our profiles, while expanding the risk for errors.
Recently, Google processed images of a trio of happy young African Americans and its automatic photo-tagging service labeled them as gorillas. The company apologized profusely, but in systems like Google’s, errors are inevitable. It was most likely faulty machine learning (and probably not a racist running loose in the Googleplex) that led the computer to confuse Homo sapiens with our close cousin, the gorilla. The software itself had flipped through billions of images of primates and had made its own distinctions. It focused on everything from shades of color to the distance between eyes and the shape of the ear. Apparently, though, it wasn’t thoroughly tested before being released.
Such mistakes are learning opportunities—as long as the system receives feedback on the error. In this case, it did. But injustice persists. When automatic systems sift through our data to size us up for an e-score, they naturally project the past into the future. As we saw in recidivism sentencing models and predatory loan algorithms, the poor are expected to remain poor forever and are treated accordingly—denied opportunities, jailed more often, and gouged for services and loans. It’s inexorable, often hidden and beyond appeal, and unfair.
Yet we can’t count on automatic systems to address the issue. For all of their startling power, machines cannot yet make adjustments for fairness, at least not by themselves. Sifting through data and judging what is fair is utterly foreign to them and enormously complicated. Only human beings can impose that constraint.
There’s a paradox here. If we return one last time to that ’50s-era banker, we see that his mind was occupied with human distortions—desires, prejudice, distrust of outsiders. To carry out the job more fairly and efficiently, he and the rest of his industry handed the work over to an algorithm.
Sixty years later, the world is dominated by automatic systems chomping away on our error-ridden dossiers. They urgently require the context, common sense, and fairness that only humans can provide. However, if we leave this issue to the marketplace, which prizes efficiency, growth, and cash flow (while tolerating a certain degree of error), meddling humans will be instructed to stand clear of the machinery.
This will be a challenge, because even as the problems with our old credit models become apparent, powerful newcomers are storming in. Facebook, for example, has patented a new type of credit rating, one based on our social networks. The goal, on its face, is reasonable. Consider a college graduate who goes on a religious mission for five years, helping to bring potable water to impoverished villages in Africa. He comes home with no credit rating and has trouble getting a loan. But his classmates on Facebook are investment bankers, PhDs, and software designers. Birds-of-a-feather analysis would indicate that he’s a good bet. But that same analysis likely works against a hardworking housecleaner in East St. Louis, who might have numerous unemployed friends and a few in jail.
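What might such birds-of-a-feather analysis look like in practice? The sketch below scores an applicant by the average creditworthiness of their social connections. The names and numbers are invented for illustration, and the approach is an assumption about the general idea, not Facebook’s patented method.

```python
# Illustrative "birds of a feather" credit signal: judge an applicant by the
# average credit score of their social connections. Names and scores are
# hypothetical; this is not Facebook's actual algorithm.

friends_scores = {
    "returned_missionary": [780, 760, 810, 790],          # classmates: bankers, PhDs, software designers
    "housecleaner_east_st_louis": [540, 510, 480, 520],   # friends who are struggling financially
}

def social_credit_signal(scores):
    """Average the connections' scores: credit (or guilt) by association."""
    return sum(scores) / len(scores)

for applicant, scores in friends_scores.items():
    print(applicant, round(social_credit_signal(scores)))
# The graduate with affluent classmates looks like a good bet; the equally
# hardworking housecleaner is penalized simply for her network.
```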
Meanwhile, the formal banking industry is frantically raking through personal data in its attempts to boost business. But licensed banks are subject to federal regulation and disclosure requirements, which means that customer profiling carries reputational and legal risk. American Express learned this the hard way in 2009, just as the Great Recession was gearing up. No doubt looking to reduce risk on its own balance sheet, Amex cut the spending limits of some customers. Unlike the informal players in the e-score economy, though, the credit card giant had to send them a letter explaining why.
This is when Amex delivered a low blow. Cardholders who shopped at certain establishments, the company wrote, were more likely to fall behind on payments. It was a matter of statistics, plain and simple, a clear correlation between shopping patterns and default rates. It was up to the unhappy Amex customers to guess which establishment had poisoned their credit. Was it the weekly shop at Walmart or perhaps the brake job at Grease Monkey that placed them in the bucket of potential deadbeats?
Whatever the cause, it left them careening into a nasty recession with less credit. Worse, the lowered spending limit would appear within days on their credit reports. In fact, it was probably there even before the letters arrived. This would lower their scores and drive up their borrowing costs. Many of these cardholders, it’s safe to say, frequented “stores associated with poor repayments” because they weren’t swimming in money. And wouldn’t you know it? An algorithm took notice and made them poorer.
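A rough sketch of the kind of merchant-level correlation Amex described might look like the following. The merchants, counts, and cutoff are invented for illustration; this is not the company’s actual model.

```python
# Hypothetical default rates by merchant; the figures and the threshold are
# invented for illustration only.
defaults_by_merchant = {
    # merchant: (cardholders who shopped there, how many later defaulted)
    "discount_retailer":  (10_000, 900),
    "budget_auto_repair": (4_000, 380),
    "organic_grocer":     (12_000, 240),
}

RISKY_RATE = 0.08  # arbitrary cutoff for "stores associated with poor repayments"

risky_merchants = {
    merchant
    for merchant, (shoppers, defaulted) in defaults_by_merchant.items()
    if defaulted / shoppers > RISKY_RATE
}

def lower_spending_limit(merchants_visited):
    """Flag a cardholder who frequents any merchant on the risky list."""
    return bool(set(merchants_visited) & risky_merchants)

print(sorted(risky_merchants))                                        # ['budget_auto_repair', 'discount_retailer']
print(lower_spending_limit(["organic_grocer"]))                       # False
print(lower_spending_limit(["discount_retailer", "organic_grocer"]))  # True
```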
Cardholders’ anger attracted the attention of the mainstream press, including the New York Times, and Amex promptly announced that it would not correlate stores to risk. (Amex later insisted that it had chosen the wrong words in its message and that it had scrutinized only broader consumer patterns, not specific merchants.)
It was a headache and an embarrassment for American Express. If they had indeed found a strong correlation between shopping at a certain store and credit risk, they certainly couldn’t use it now. Compared to most of the Internet economy, they’re boxed in, regulated, in a certain sense handicapped. (Not that they should complain. Over the decades, lobbyists for the incumbents have crafted many of the regulations with an eye to defending the entrenched powers—and keeping pesky upstarts locked out.)
So is it any surprise that newcomers to the finance industry would choose the freer and unregulated route? Innovation, after all, hinges on the freedom to experiment. And with petabytes of behavioral data at their fingertips and virtually no oversight, opportunities for the creation of new business models are vast.
Multiple companies, for example, are working to replace payday lenders. These banks of last resort cater to the working poor, tiding them over from one paycheck to the next and charging exorbitant interest rates. After twenty-two weeks, a $500 loan could cost $1,500. So if an efficient newcomer could find new ways to rate risk, then pluck creditworthy candidates from this desperate pool of people, it could charge them slightly lower interest and still make a mountain of money.
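Working out the implied cost of that example loan makes the opportunity clear. The arithmetic below uses the figures just quoted ($500 borrowed, $1,500 repaid over twenty-two weeks) and a simple pro-rata annualization; it is illustrative, not a regulatory APR disclosure.

```python
# Rough cost of the example payday loan: $500 borrowed, $1,500 repaid
# after twenty-two weeks. Annualization here is a simple pro-rata APR.

principal = 500
total_repaid = 1_500
weeks = 22

finance_charge = total_repaid - principal        # $1,000 in fees and interest
period_rate = finance_charge / principal         # 2.0, i.e. 200% over 22 weeks
simple_apr = period_rate * (52 / weeks)          # roughly 473% annualized

print(f"Finance charge: ${finance_charge}")
print(f"Cost over {weeks} weeks: {period_rate:.0%}")
print(f"Simple APR: {simple_apr:.0%}")
```

Even shaving that rate modestly leaves an enormous margin, which is exactly the opening the newcomers saw.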
That was Douglas Merrill’s idea. A former chief information officer at Google, Merrill believed that he could use Big Data to calculate risk and offer payday loans at a discount. In 2009, he founded a start-up called ZestFinance. On the company web page, Merrill proclaims that “all data is credit data.” In other words, anything goes.