by Cathy O'Neil
To a degree, it is. But consider a hypothetical driver who lives in a rough section of Newark, New Jersey, and must commute thirteen miles to a barista job at a Starbucks in the wealthy suburb of Montclair. Her schedule is chaotic and includes occasional clopenings, in which she closes the store late at night and opens it again early the next morning. So she shuts the shop at 11 p.m., drives back to Newark, and returns before 5 a.m. To save ten minutes and $1.50 each way on the Garden State Parkway, she takes a shortcut, which leads her down a road lined with bars and strip joints.
A data-savvy insurer will note that cars traveling along that route in the wee hours have an increased risk of accidents. There are more than a few drunks on the road. And to be fair, our barista is adding a bit of risk by taking the shortcut and sharing the road with the people spilling out of the bars. One of them might hit her. But as far as the insurance company’s geo-tracker is concerned, not only is she mingling with drunks, she may be one.
In this way, even the models that track our personal behavior gain many of their insights, and assess risk, by comparing us to others. This time, instead of bucketing people who speak Arabic or Urdu, live in the same zip codes, or earn similar salaries, they assemble groups of us who act in similar ways. The prediction is that those who act alike will take on similar levels of risk. If you haven’t noticed, this is birds of a feather all over again, with many of the same injustices.
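To make the mechanics concrete, here is a minimal sketch of that kind of birds-of-a-feather risk estimate. The behavioral signals, the accident rates, and the nearest-neighbors model are my own illustrative assumptions, not any insurer’s actual system: a new driver simply inherits the average risk of the drivers whose recorded behavior most resembles hers.

```python
# A toy sketch of "birds of a feather" risk pricing -- an illustration only,
# not any insurer's actual model. A new driver's expected risk is taken to be
# the average accident rate of the drivers whose recorded behavior looks
# most like hers.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Columns: share of driving after midnight, share of miles on bar-lined roads
behavior = np.array([
    [0.05, 0.00],
    [0.60, 0.40],
    [0.55, 0.35],
    [0.10, 0.05],
    [0.70, 0.50],
])
# Observed accident rates for those drivers (claims per year, invented)
accident_rate = np.array([0.02, 0.15, 0.12, 0.03, 0.20])

model = KNeighborsRegressor(n_neighbors=3).fit(behavior, accident_rate)

# The barista's telematics: lots of late-night miles past the bars --
# so she inherits the risk of the drunk drivers she merely resembles.
barista = np.array([[0.65, 0.45]])
print(model.predict(barista))  # roughly the average of her 3 nearest neighbors
```

The point of the sketch is the design choice itself: the estimate says nothing about whether she has ever had a drink, only about whom her data resembles.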
When I talk to most people about black boxes in cars, it’s not the analysis they object to as much as the surveillance itself. People insist to me that they won’t give in to monitors. They don’t want to be tracked or have their information sold to advertisers or handed over to the National Security Agency. Some of these people might succeed in resisting this surveillance. But privacy, increasingly, will come at a cost.
In these early days, the auto insurers’ tracking systems are opt-in. Only those willing to be tracked have to turn on their black boxes. They get rewarded with a discount of between 5 and 50 percent and the promise of more down the road. (And the rest of us subsidize those discounts with higher rates.) But as insurers gain more information, they’ll be able to create more powerful predictions. That’s the nature of the data economy. Those who squeeze out the most intelligence from this information, turning it into profits, will come out on top. They’ll predict group risk with greater accuracy (though individuals will always confound them). And the more they benefit from the data, the harder they’ll push for more of it.
At some point, the trackers will likely become the norm. And consumers who want to handle insurance the old-fashioned way, withholding all but the essential from their insurers, will have to pay a premium, and probably a steep one. In the world of WMDs, privacy is increasingly a luxury that only the wealthy can afford.
At the same time, surveillance will change the very nature of insurance. Insurance is an industry, traditionally, that draws on the majority of the community to respond to the needs of an unfortunate minority. In the villages we lived in centuries ago, families, religious groups, and neighbors helped look after each other when fire, accident, or illness struck. In the market economy, we outsource this care to insurance companies, which keep a portion of the money for themselves and call it profit.
As insurance companies learn more about us, they’ll be able to pinpoint those who appear to be the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage. This is a far cry from insurance’s original purpose, which is to help society balance its risk. In a targeted world, we no longer pay the average. Instead, we’re saddled with anticipated costs. Instead of smoothing out life’s bumps, insurance companies will demand payment for those bumps in advance. This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them.
As insurance companies scrutinize the patterns of our lives and our bodies, they will sort us into new types of tribes. But these won’t be based on traditional metrics, such as age, gender, net worth, or zip code. Instead, they’ll be behavioral tribes, generated almost entirely by machines.
For a look at how such sorting will proliferate, consider a New York City data company called Sense Networks. A decade ago, researchers at Sense began to analyze cell phone data showing where people went. This data, provided by phone companies in Europe and America, was anonymous: just dots moving on maps. (Of course, it wouldn’t have taken much sleuthing to associate one of those dots with the address it returned to every night of the week. But Sense was not about individuals; it was about tribes.)
The team fed this mobile data on New York cell phone users to its machine-learning system but provided scant additional guidance. They didn’t instruct the program to isolate suburbanites or millennials or to create different buckets of shoppers. The software would find similarities on its own. Many of them would be daft—people who spend more than 50 percent of their days on streets starting with the letter J, or those who take most of their lunch breaks outside. But if the system explored millions of these data points, patterns would start to emerge, presumably including correlations that no human would ever think to look for.
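For a rough sense of how a system can group people without being told what to look for, here is a minimal sketch of unsupervised clustering on location pings. The features, the cluster count, and the data are invented for illustration; this is not Sense’s actual pipeline.

```python
# A minimal sketch of unsupervised "tribe" discovery from location pings.
# The features (latitude, longitude, hour of day) and the cluster count
# are illustrative assumptions, not Sense Networks' actual pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Fake anonymized pings: one row per user-hour; columns = lat, lon, hour.
pings = np.column_stack([
    rng.normal(40.73, 0.05, 10_000),   # latitude, roughly Manhattan
    rng.normal(-73.99, 0.05, 10_000),  # longitude
    rng.integers(0, 24, 10_000),       # hour of day
])

# No labels, no instructions about suburbanites or millennials:
# the algorithm groups whoever happens to move in similar ways.
X = StandardScaler().fit_transform(pings)
tribes = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

print(np.bincount(tribes))  # how many pings fall into each "tribe"
```

Nothing in the code names the clusters; they are just groups of dots that move alike, which is exactly why their meaning can stay opaque.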
As the days passed and Sense’s computer digested its massive trove of data, the dots started to take on different colors. Some turned toward red, others toward yellow, blue, and green. The tribes were emerging.
What did these tribes represent? Only the machine knew, and it wasn’t talking. “We wouldn’t necessarily recognize what these people have in common,” said Sense’s cofounder and former CEO Greg Skibiski. “They don’t fit into the traditional buckets that we’d come up with.” As the tribes took on their colors, the Sense team could track their movements through New York. By day, certain neighborhoods would be dominated by blue, then turn red in the evening, with a sprinkling of yellows. One tribe, recalled Skibiski, seemed to frequent a certain spot late at night. Was it a dance club? A crack house? When the Sense team looked up the address, they saw it was a hospital. People in that tribe appeared to be getting hurt or sick more often than others. Or maybe they were doctors, nurses, and emergency medical workers.
Sense was sold in 2014 to YP, a mobile advertising company spun off from AT&T. So for the time being, its sorting will be used to target different tribes for ads. But you can imagine how machine-learning systems fed by different streams of behavioral data will soon be placing us not just into one tribe but into hundreds of them, even thousands. Certain tribes will respond to similar ads. Others may resemble each other politically or land in jail more frequently. Some might love fast food.
My point is that oceans of behavioral data, in coming years, will feed straight into artificial intelligence systems. And these will remain, to human eyes, black boxes. Throughout this process, we will rarely learn about the tribes we “belong” to or why we belong there. In the era of machine intelligence, most of the variables will remain a mystery. Many of those tribes will mutate hour by hour, even minute by minute, as the systems shuttle people from one group to another. After all, the same person acts very differently at 8 a.m. and 8 p.m.
These automatic programs will increasingly determine how we are treated by the other machines, the ones that choose the ads we see, set prices for us, line us up for a dermatologist appointment, or map our routes. They will be highly efficient, seemingly arbitrary, and utterly unaccountable. No one will understand their logic or be able to explain it.
If we don’t wrest back a measure of control, these future WMDs will feel mysterious and powerful. They’ll have their way with us, and we’ll barely know it’s happening.
In 1943, at the height of World War II, when American armies and industries needed every troop and worker they could find, the Internal Revenue Service tweaked the tax code, granting tax-free status to employer-based health insurance. This didn’t seem to be a big deal, certainly nothing to rival the headlines about the German surrender at Stalingrad or the Allied landings in Sicily. At the time, only about 9 percent of American workers received private health coverage as a job benefit. But with the new tax-free status, businesses set about attracting scarce workers by offering health insurance. Within ten years, 65 percent of Americans would come under their employers’ systems. Companies already exerted great control over our finances. But in that one decade, they gained a measure of control—whether they wanted it or not—over our bodies.
Seventy years later, health care costs have metastasized and now consume $3 trillion per year. Nearly one dollar of every five we earn feeds the vast health care industry.
Employers, which have long nickel-and-dimed workers to hold down their own costs, now have a new tactic for combating these growing expenses. They call it “wellness.” It involves growing surveillance, including lots of data pouring in from the Internet of Things—the Fitbits, Apple Watches, and other sensors that relay updates on how our bodies are functioning.
The idea, as we’ve seen so many times, springs from good intentions. In fact, it is encouraged by the government. The Affordable Care Act, or Obamacare, invites companies to engage workers in wellness programs, and even to “incentivize” health. By law, employers can now offer rewards and assess penalties reaching as high as 50 percent of the cost of coverage. Now, according to a study by the Rand Corporation, more than half of all organizations employing fifty people or more have wellness programs up and running, and more are joining the trend every week.
There’s plenty of justification for wellness programs. If they work—and, as we’ll see, that’s a big “if”—the biggest beneficiary is the worker and his or her family. Yet if wellness programs help workers avoid heart disease or diabetes, employers gain as well. The fewer emergency room trips made by a company’s employees, the less risky the entire pool of workers looks to the insurance company, which in turn brings premiums down. So if we can just look past the intrusions, wellness may appear to be win-win.
Trouble is, the intrusions cannot be ignored or wished away. Nor can the coercion. Take the case of Aaron Abrams. He’s a math professor at Washington and Lee University in Virginia. He is covered by Anthem Insurance, which administers a wellness program. To comply with the program, he must accrue 3,250 “HealthPoints.” He gets one point for each “daily log-in” and 1,000 points each for an annual doctor’s visit and an on-campus health screening. He also gets points for filling out a “Health Survey” in which he assigns himself monthly goals, getting more points if he achieves them. If he chooses not to participate in the program, Abrams must pay an extra $50 per month toward his premium.
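A back-of-the-envelope tally, using only the point values reported above (the rest of Anthem’s scoring formula is not spelled out here), shows how much of the 3,250-point target the routine items cover and what opting out costs instead.

```python
# Back-of-the-envelope tally of the HealthPoints described above.
# The point values come from the text; the rest is my own illustration,
# not Anthem's published formula.
TARGET = 3_250

daily_logins  = 365 * 1   # one point per daily log-in
doctor_visit  = 1_000     # annual doctor's visit
campus_screen = 1_000     # on-campus health screening

subtotal = daily_logins + doctor_visit + campus_screen
print(subtotal)            # 2365
print(TARGET - subtotal)   # 885 points still owed via survey goals

# Opting out instead costs an extra $50 per month on the premium.
print(50 * 12)             # $600 a year for declining to participate
```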
Abrams was hired to teach math. And now, like millions of other Americans, part of his job is to follow a host of health dictates and to share that data not only with his employer but also with the third-party company that administers the program. He resents it, and he foresees the day when the college will be able to extend its surveillance. “It is beyond creepy,” he says, “to think of anyone reconstructing my daily movements based on my own ‘self-tracking’ of my walking.”
My fear goes a step further. Once companies amass troves of data on employees’ health, what will stop them from developing health scores and wielding them to sift through job candidates? Much of the proxy data they collect, whether step counts or sleeping patterns, is not protected by law, so it would theoretically be perfectly legal. And it would make sense: as we’ve seen, companies routinely reject applicants on the basis of credit scores and personality tests. Health scores represent a natural—and frightening—next step.
Already, companies are establishing ambitious health standards for workers and penalizing them if they come up short. Michelin, the tire company, sets goals for its employees on metrics ranging from blood pressure to glucose, cholesterol, triglycerides, and waist size. Those who don’t reach the targets in three categories have to pay an extra $1,000 a year toward their health insurance. The national drugstore chain CVS announced in 2013 that it would require employees to report their levels of body fat, blood sugar, blood pressure, and cholesterol—or pay $600 a year.
The CVS move prompted this angry response from Alissa Fleck, a columnist at Bitch Media: “Attention everyone, everywhere. If you’ve been struggling for years to get in shape, whatever that means to you, you can just quit whatever it is you’re doing right now because CVS has got it all figured out. It turns out whatever silliness you were attempting, you just didn’t have the proper incentive. Except, as it happens, this regimen already exists and it’s called humiliation and fat-shaming. Have someone tell you you’re overweight, or pay a major fine.”
At the center of the weight issue is a discredited statistic, the body mass index. This is based on a formula devised two centuries ago by a Belgian mathematician, Lambert Adolphe Jacques Quetelet, who knew next to nothing about health or the human body. He simply wanted an easy formula to gauge obesity in a large population. He based it on what he called the “average man.”
“That’s a useful concept,” writes Keith Devlin, the mathematician and science author. “But if you try to apply it to any one person, you come up with the absurdity of a person with 2.4 children. Averages measure entire populations and often don’t apply to individuals.” Devlin adds that the BMI, with numerical scores, gives “mathematical snake oil” the air of scientific authority.
The BMI is a person’s weight in kilograms divided by the square of their height in meters. It’s a crude numerical proxy for physical fitness, and it’s more likely to conclude that women are overweight. (After all, we’re not “average” men.) What’s more, because muscle weighs more than fat, chiseled athletes often have sky-high BMIs. In the alternate BMI universe, LeBron James qualifies as overweight. When economic “sticks and carrots” are tied to BMI, large groups of workers are penalized for the kind of body they have. This comes down especially hard on black women, who often have high BMIs.
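For concreteness, here is the formula applied to a tall, heavily muscled athlete. LeBron James’s listed height and weight are approximate public figures, and 25 is the conventional cutoff above which the index labels a person overweight.

```python
# BMI = weight (kg) / height (m) squared.
# Listed figures for LeBron James are approximate (about 2.06 m, 113 kg);
# 25.0 is the conventional cutoff for the "overweight" label.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(113, 2.06), 1))  # ~26.6 -- "overweight" by the formula,
                                 # despite describing an elite athlete
```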
But isn’t it a good thing, wellness advocates will ask, to help people deal with their weight and other health issues? The key question is whether this help is an offer or a command. If companies set up free and voluntary wellness programs, few would have reason to object. (And workers who opt in to such programs do, in fact, register gains, though they might well have done so without them.) But tying a flawed statistic like BMI to compensation, and compelling workers to mold their bodies to the corporation’s ideal, infringes on freedom. It gives companies an excuse to punish people they don’t like to look at—and to remove money from their pockets at the same time.
All of this is done in the name of health. Meanwhile, the $6 billion wellness industry trumpets its successes loudly—and often without offering evidence. “Here are the facts,” writes Joshua Love, president of Kinema Fitness, a corporate wellness company. “Healthier people work harder, are happier, help others and are more efficient. Unhealthy workers are generally sluggish, overtired and unhappy, as the work is a symptom of their way of life.”
Naturally, Love didn’t offer a citation for these broad assertions. And yet even if they were true, there’s scant evidence that mandatory wellness programs actually make workers healthier. A research report from the California Health Benefits Review Program concludes that corporate wellness programs fail to lower the average blood pressure, blood sugar, or cholesterol of those who participate in them. Even when people succeed in losing weight on one of these programs, they tend to gain it back. (The one area where wellness programs do show positive results is in quitting smoking.)
It also turns out that wellness programs, despite well-publicized individual successes, often don’t lead to lower health care spending. A 2013 study headed by Jill Horwitz, a law professor at UCLA, rips away the movement’s economic underpinning. Randomized studies, according to the report, “raise doubts” that smokers and obese workers chalk up higher medical bills than others. While it is true that they are more likely to suffer from health problems, these tend to come later in life, when they’re off the corporate health plan and on Medicare. In fact, the greatest savings from wellness programs come from the penalties assessed on the workers. In other words, like scheduling algorithms, they provide corporations with yet another tool to raid their employees’ paychecks.
Despite my problems with wellness programs, they don’t (yet) rank as full WMDs. They’re certainly widespread, they intrude on the lives of millions of employees, and they inflict economic pain. But they are not opaque, and, except for the specious BMI score, they’re not based on mathematical algorithms. They’re a simple and widespread case of wage theft, one wrapped up in flowery health rhetoric.
Employers are already overdosing on our data. They’re busy using it, as we’ve seen, to score us as potential employees and as workers. They’re trying to map our thoughts and our friendships and predict our productivity. Since they’re already deeply involved in insurance, with workforce health care a major expense, it’s only natural that they would extend surveillance on a large scale to workers’ health. And if companies cooked up their own health and productivity models, this could grow into a full-fledged WMD.
As you know by now, I am outraged by all sorts of WMDs. So let’s imagine that I decide to launch a campaign for tougher regulations on them, and I post a petition on my Facebook page. Which of my friends will see it on their news feed?
I have no idea. As soon as I hit send, that petition belongs to Facebook, and the social network’s algorithm makes a judgment about how best to use it. It calculates the odds that the petition will appeal to each of my friends. Some of them, it knows, often sign petitions, and perhaps share them with their own networks. Others tend to scroll right past. At the same time, a number of my friends pay more attention to me and tend to click the articles I post. The Facebook algorithm takes all of this into account as it decides who will see my petition. For many of my friends, it will be buried so low on their news feed that they’ll never see it.
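A toy sketch of that kind of ranking, with invented friends, signals, and weights (Facebook’s actual model is proprietary and vastly more complex), shows how a handful of engagement predictions decide who ever sees the petition.

```python
# A toy sketch of feed ranking -- not Facebook's actual algorithm.
# Each friend gets a relevance score from two made-up signals: a general
# tendency to engage with petitions, and how often they click my posts.
# Only the highest-scoring friends see the petition near the top of their feeds.
friends = {
    "Ada":   {"signs_petitions": 0.90, "clicks_my_posts": 0.7},
    "Ben":   {"signs_petitions": 0.10, "clicks_my_posts": 0.8},
    "Carla": {"signs_petitions": 0.40, "clicks_my_posts": 0.2},
    "Dev":   {"signs_petitions": 0.05, "clicks_my_posts": 0.1},
}

def score(signals: dict) -> float:
    # Arbitrary illustrative weights; a real system learns thousands of them.
    return 0.6 * signals["signs_petitions"] + 0.4 * signals["clicks_my_posts"]

ranked = sorted(friends, key=lambda name: score(friends[name]), reverse=True)
shown = ranked[:2]  # suppose only the top two get the post surfaced prominently

print(ranked)  # ['Ada', 'Ben', 'Carla', 'Dev']
print(shown)   # ['Ada', 'Ben'] -- everyone else may never see it
```

The weights here are arbitrary; in a real system they are learned, constantly updated, and invisible to the people being ranked.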