Dirty Work
By the time Jack Poulson learned about the Dragonfly project, a darker view had emerged. The founders of tech companies like Facebook were no less driven by greed than the CEOs of BP and Transocean, it turned out. By 2016, it was also apparent that trafficking in information could have some very serious downsides. After Brexit and the election of Donald Trump, Facebook came under fire for its role in spreading a torrent of virulent propaganda, some of it disseminated by the consulting firm Cambridge Analytica, which gained access to the private data of millions of Facebook users who were bombarded with fake news and conspiracy theories designed to alter their voting habits. (In 2019, the Federal Trade Commission fined Facebook five billion dollars for mishandling users’ personal data.) In Myanmar, Facebook served as the main conduit for incendiary messages about the Rohingya, a Muslim minority whose members were subjected to rape and killing in what the UN described as a “textbook example of ethnic cleansing.”
The internet could be used not only to connect and empower people but also to surveil and manipulate them, it was becoming clear, a danger hardly limited to the political arena. According to Shoshana Zuboff, a professor emerita at Harvard Business School, the web’s seemingly magical ability to anticipate users’ needs masked a far more sinister development: the unchecked power that Google, Facebook, and other technology companies possessed to gather and store personal data about their customers, information accrued through hidden tracking mechanisms that was harvested to benefit targeted advertisers and modify human behavior. “Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioral data, and all for the sake of others’ gain,” argued Zuboff in her bracing book, The Age of Surveillance Capitalism. The backlash against the billion-dollar companies (Amazon, Google, Facebook) profiting from this regime was exacerbated by the sense that we were all complicit in their rise. Even as people railed at the deleterious effects of Facebook and Twitter, they grew more and more addicted to devices and screens, not infrequently venting their dismay in text messages to friends or posts on social media.
For two decades, technology companies had attracted talented young people because such companies enabled them both to make a lot of money and to feel good about the impact they were having on the world. Now some tech workers began to ask themselves—and, on occasion, their bosses—hard questions about whether the products and services they were designing might be harming the world. At Salesforce, a cloud computing company based in San Francisco, employees circulated a petition urging the company’s CEO to end multiple contracts with the U.S. Customs and Border Protection agency, which they feared implicated them in the Trump administration’s policy of separating parents from children at the border. At Amazon, workers protested the sale of facial recognition software to law enforcement agencies, out of concern that the technology could be used to track civil rights activists and critics of police brutality (in 2020, the sale was halted for one year).
As consciousness of the downsides of technology rose, the smugness that people in Silicon Valley had long evinced when talking about their jobs began to give way to discomfort, even shame. But while the moral luster of working in Silicon Valley had faded, this did not mean it qualified as dirty work. One key difference is that, like bankers and other white-collar professionals, tech workers who felt compromised by what they were doing had far more flexibility to do something about it. They inhabited a starkly different world from Harriet Krzykowski, who refrained from saying anything even after she learned what happened to Darren Rainey, not because she thought silencing herself was morally acceptable, but because she needed the paycheck and knew that challenging security could endanger her life. Had Harriet confronted the guards when the horrors of the “shower treatment” first came to her attention, she might have been able to avoid feeling sullied, knowing that, once she figured out what was going on, she did what she could to stop or expose it.
When Jack Poulson emailed his manager at Google to tender his conditional resignation, he still did not know exactly what was going on. But the decision to write and send the email was itself an indication that, unlike people who worked in industrial slaughterhouses and at prisons like Dade, he felt he had some leverage with his superiors. It was an example of what the economist Albert Hirschman called “voice,” a mode of protest Hirschman examined in his influential book, Exit, Voice, and Loyalty, which analyzed the choices available to government officials, workers, and other social actors confronted by immoral or dysfunctional behavior. One of these choices was to “kick up a fuss” from within in the hope of effecting change.
Among the dirty workers I’d met who had attempted this strategy, this hope was invariably frustrated. Kicking up a fuss at Dade triggered retaliation from the guards or, as George Mallinckrodt learned, dismissal. The immigrants I’d interviewed who worked in industrial slaughterhouses knew better than to even try to voice complaints, aware the company could easily replace them by drawing on the pool of cheap, unskilled laborers. So, too, with rig workers who refrained from complaining about safety measures, and drone operators like Heather Linebaugh who saw how the military dealt with dissenters like Chelsea Manning. But Jack’s experience was different, in no small part because his skills and training—a PhD in applied math from the University of Texas at Austin, a master’s degree in aerospace engineering—made him far more difficult to replace and far more cognizant of his value. Jack first got a sense of this when he was still in graduate school and decided to do an internship one summer at a lab run by the Department of Energy. In the contract he was asked to sign, Jack noticed a clause stipulating that for a full year after he left, the lab would own everything related to his work. This struck him as unfair—why should open-source software that he wrote on his own time not belong to him? he wondered—so he refused to sign it. On an email chain, Jack saw that a lawyer wrote, “How dare an intern question that—fire him!” But Jack’s objection to the provision did not get him terminated. Instead, the scope of his responsibilities was narrowed so that the lab could only lay claim to work he did directly for its benefit.
A few years later, after he’d published some academic papers and begun teaching computational science and engineering at Georgia Tech, Jack received an offer from Stanford. After negotiating terms that included a job for his partner, a neuroscientist, Jack accepted the offer. But on the way to Palo Alto, he learned that Stanford had reneged on this provision. He immediately called the university. “Well, okay, then I don’t accept your offer,” he told an administrator over the phone. Stanford promptly reversed course, arranging for his partner to work as a manager in a neuroscience lab.
From these experiences, Jack inferred something that would have been unthinkable to the dirty workers I’d met and interviewed, which was that he didn’t have to accept terms of employment that he found objectionable and that using his voice could benefit him. At Google, this belief was reinforced when, about a year and a half after he was hired, he informed the company that he was moving to Toronto, where his partner had been accepted into a PhD program. In response, Google proposed to keep him at the same job in his new location but at a 40 percent lower salary, which his manager presented as a cost-of-living adjustment. The offer did not please Jack, who pointed out that while housing expenses were lower in Canada, taxes were higher. If these were the terms, he would resign, he told his manager. Later that day, one of the managers on his team showed up at his desk with a new offer, which included five hundred thousand dollars in stock options spread over four years. “Will this account for the gap?” the manager asked. Jack looked over the revised proposal and said that it would.
Jack’s habit of speaking up for himself on such occasions was, in part, a reflection of his personality; a more timid, less independent-minded person might have acted differently. But it was also a reflection of the elevated stature and authority that tech workers like him possessed, not only when it came to negotiating their salaries but also over matters of ethics and conscience. After Dragonfly’s existence came to light, Jack was hardly the only Google employee to exercise voice. Two weeks after The Intercept published its story, more than one thousand Google workers signed a letter expressing their dismay about the plan and demanding more of a say about projects the company pursued. “We urgently need more transparency, a seat at the table, and a commitment to clear and open processes,” they wrote.
Unlike dirty workers in lower-skilled professions, the employees who signed this letter felt entitled to a seat at the table. Many had degrees from elite universities that inculcated graduates with a belief that their voices mattered. Tech workers often received the same message from their employers, nowhere more so than at Google, where “smart creatives” were encouraged to pose challenging questions at weekly gatherings known as TGIF meetings. (The motto “Don’t be evil” was just another way “to empower employees,” observed Eric Schmidt and Jonathan Rosenberg in How Google Works.) As it turned out, the input from workers was less welcome when it came to Dragonfly, where a TGIF meeting ended up going badly awry, leaving many employees convinced the company didn’t actually value their voices.
By the time the meeting occurred, Jack realized the company had little interest in the views of its employees. The email he sent to his manager about the Dragonfly project had elicited a vague, unsatisfying reply. Frustrated by what he perceived as stonewalling, he eventually posted his concerns on an internal company message board, in a letter decrying Dragonfly as a “forfeiture of our values.” This finally did get some attention from his superiors, who invited him to air his concerns at a meeting. The meeting did not go as Jack hoped it would. According to Jack, instead of clarifying what protections would exist for human rights activists in China, Jeff Dean, the head of artificial intelligence at Google, downplayed this concern, reminding Jack that the U.S. government also conducted electronic surveillance through the Foreign Intelligence Surveillance Act. When Jack brought up the letter that Amnesty International and other human rights groups had sent to Sundar Pichai, he was told that outsiders were in no position to tell Google how to run its business. At a certain point during the meeting, Jack zoned out, realizing there were limits to the leverage he had. But while this was disappointing, it did not lead him to feel trapped in the way Harriet Krzykowski had at Dade. Instead, it led him to pursue the other mode of protest that Albert Hirschman outlined in his classic study: “exit.” The day after the meeting, he left the company.
* * *
In theory, the option to exit is available to any person working a dirty and demeaning job. In reality, it is a far easier option to exercise if you have the skills and education to pursue other alternatives, something most dirty workers sorely lack. The lack of alternatives is precisely what led high school graduates from “rural ghettos” to apply to work as prison guards, a “job of last resort” that few other people wanted. It is what led undocumented immigrants to work at slaughterhouses that struggled to find enough native-born Americans to hire.
What these workers labored under was the pressure of economic necessity, the same force that had compelled many of them to take on dirty jobs in the first place. Tech workers were not entirely immune to this pressure. Although many earned good salaries, living in Silicon Valley was expensive, all the more so if you had a family to support. One reason more of his peers did not resign over the Dragonfly project was that they had kids, Jack told me when we met for lunch in Toronto, about a year after he’d left Google. Although Jack did not have kids, he, too, worried about the financial consequences of losing his job—with good reason, it turned out. The year after he quit, his annual income fell by 80 percent, he told me. On the other hand, the money he’d earned while at Google (including the stock options he’d received) afforded him a measure of financial security that the dirty workers I’d met could scarcely have dreamed about. Unlike these workers, moreover, Jack had the skills and credentials to pursue alternatives, a fact underscored when, a few weeks after he left Google, an article about his resignation appeared. It was written by Ryan Gallagher, the same reporter at The Intercept who had broken the original story about the Dragonfly project. “There are serious worldwide repercussions to this,” said Jack in the article.
Having never spoken to the media before, Jack was understandably nervous about what the personal repercussions of doing so might be, not least because, when he left Google, the company had warned him not to talk to the press. In much of Silicon Valley, he soon discovered, the response was to offer him a job. “I think I got, like, thirty companies reaching out in a forty-eight-hour period, trying to hire me—at least thirty,” he told me. One company “flat out offered to pay me more than whatever Google paid me,” he recalled. “Basically, every Silicon Valley company, sometimes multiple people from the same company, were reaching out.” Offers also came to return to Stanford.
As the offers suggested, Jack did not emerge from the controversy over Dragonfly with diminished career prospects. If anything, these prospects were enhanced, burnished by his stature as a talented knowledge worker with a moral backbone, which, amid the broader backlash against Silicon Valley, made him a desirable person to hire. It also made him a desirable speaker and interview subject. After the Intercept article appeared, he was flooded with requests for interviews from other media outlets—Bloomberg, CNN International, Fox. He was also invited to speak at venues such as the Geneva Academy of International Humanitarian Law and Human Rights and to submit testimony at congressional hearings on the tech industry.
“I’m doing just fine,” Jack said cheerfully at the restaurant in Toronto where we met for lunch, sounding not at all like a person who missed working in Silicon Valley and even less like the dirty workers I’d gotten to know. The contrast in their demeanor was as striking as the difference in their financial prospects. After leaving Google, Jack had experienced his share of challenges, struggling to find a high-tech job that would not make him feel compromised in some other way. (At one point, he did some consulting for a company that was helping humanitarian organizations respond to natural disasters, he told me, only to learn that one of its clients—the Department of Homeland Security—could potentially use the same technology to surveil migrants at the border, prompting him to quit.) But unlike the dirty workers I’d met, he had come away from the Dragonfly controversy relatively unscathed, with his integrity untarnished and with no trace of the moral and emotional burdens—nightmares, hair loss, guilt, shame—they endured.
* * *
Not long after meeting Jack Poulson, I spoke with another former Google employee, Laura Nolan. A graduate of Trinity College who lived in Dublin, Laura started working for Google in 2013 as a site reliability engineer, a specialized discipline that involved improving the performance and efficiency of large online software systems and services.
For several years, Laura flourished at the company, earning excellent performance reviews and taking pride in her job. In October 2017, while visiting the Bay Area, she was briefed about a new project that would improve Google’s machine learning analysis of classified imagery and data. “Why are we doing this?” Laura asked out of curiosity. A colleague pulled her aside and told her it was for Project Maven, an artificial intelligence contract with the U.S. Department of Defense, the aim of which was to enhance the Pentagon’s ability to track and identify objects—including vehicles and human beings—in aerial drone footage.
When Laura learned this, she was stunned. Scanning all of the world’s books, reaching a billion users in Africa: this was Google’s mission, she’d thought. She did not think that helping the U.S. military conduct extrajudicial killings was part of the mission. As she would subsequently learn, Project Maven was not a weapons project. It was a surveillance program designed to automate and accelerate the process of sifting through the vast volume of footage that drones recorded as they hovered over distant war zones. The distinction failed to comfort Laura, particularly as she read more about the nature of drone warfare. Among the sources she found was Kill Chain, a book by the investigative reporter Andrew Cockburn. The “kill chain” began with surveillance and often ended with “signature strikes” that killed innocent civilians, Cockburn argued, the same conclusion that Christopher Aaron and Heather Linebaugh had reached. Automating this process would inevitably lead to more surveillance, Laura feared—and, in turn, to a “scaling up” of lethal strikes.
Although Google’s contract with the Pentagon was for just fifteen million dollars, it was a pilot project that could pave the way for more lucrative collaborations with the Pentagon, including a ten-billion-dollar contract known as the Joint Enterprise Defense Infrastructure, Laura learned. When she shared her concerns about Project Maven with one of her directors in Dublin, he was frank about the stakes, telling her, “We have to do this because of shareholder value.” So much for Google’s idealism, Laura thought to herself as she began to mull leaving her job, which soon became a source of guilt rather than pride. After learning about Project Maven, she had trouble sleeping. She put on weight. She felt anxious and refrained from telling even her partner and closest friends what was wrong. The silence was obligatory, a consequence of the nondisclosure agreement she had signed, which forbade her to talk about Project Maven with anyone outside Google.
Even inside Google, few employees knew about Project Maven. This changed in February 2018 when a group of engineers who’d been working on the project aired their concerns about it on an internal message board. A week later, word of Project Maven leaked to the press. When this happened, Laura was thrilled. Finally, she could break her silence. Along with more than three thousand other Google workers, she signed a petition addressed to Google’s CEO, Sundar Pichai, that called for Project Maven to be canceled. “We believe that Google should not be in the business of war,” stated the petition, which was obtained and published by The New York Times.