Moreover, the early hopes that Cyber Command would prove to be the military’s new Special Operations Forces turned out to be more hype than reality. “They simply didn’t run at the tempo of Special Forces—they weren’t hitting foreign networks every night the way the Special Forces hit houses in Afghanistan,” said one senior official who was dealing with both the NSA and Cyber Command. “And so they didn’t have a lot of opportunity to learn from their mistakes.”
The fact of the matter was that the US was simply not conducting major cyber operations against foreign adversaries at anywhere near that pace; at most, the Cyber Mission Forces assigned to conduct offensive operations were doing just a few each year, every one requiring presidential authorization. The result was that Cyber Command came to resemble its parent, Strategic Command, which watches over America’s nuclear forces: it spent a lot of time training, debating doctrine, establishing procedures for operations, and playing out scenarios.
None of those scenarios, it turned out, involved what would happen if a foreign power tried to manipulate an American election.
* * *
Alex Stamos, the blunt, bearded, whip-smart security chief for Facebook, had a simple explanation for why the world’s biggest purveyor of news and communications didn’t see the propaganda that the Internet Research Agency and other Russian groups were distributing to influence the 2016 election: they simply weren’t looking.
“The truth is that it was no one’s job to look for straight-up propaganda,” Stamos told me in February 2018, as the world began to fall in on the company that once had viewed itself—with one part hubris, one part blindness—as a force for the spread of democracy and the bane of dictators.
“We were looking for traditional abuse of the platform,” he said at the Munich Security Conference, an annual gathering of foreign ministers, national-security officials, and think-tankers that, in 2018, was consumed by the new weaponry of cyber and social media. “We missed it because we weren’t looking.”
Stamos knew the vulnerabilities of complex systems cold—and had little time or patience for executives who didn’t like to hear stark assessments of why their ideas wouldn’t work. Among those who discovered how quickly Stamos could get in someone’s face was Adm. Michael Rogers, in his early days at the National Security Agency. In February 2015, I was at a cybersecurity conference where Rogers was speaking, in his usual measured terms, about balancing the need for encrypted communications with the government’s need to be able to decrypt conversations among terrorists, spies, and criminals.
Stamos, then the chief security officer at Yahoo!, grabbed a microphone and repeated to Rogers the argument that creating a back door in a communications system was akin to “drilling a hole in a windshield.” The entire structure would be so weakened, he suggested, that it would destroy the concept of secure communications. When Rogers went into his routine about balancing interests again, Stamos kept pushing. The video of the event quickly went viral.
Over time, though, Stamos’s insistent voice grated on the top leadership at Yahoo!, especially when he pressed for such full encryption of data that Yahoo! itself would not be able to decrypt the communications on its own platform—mimicking what Apple had done with the iPhone. Of course, if Yahoo! could not pluck out keywords from its customers’ searches and communications, it couldn’t make money by selling advertising and services that catered to them. There was no way that Marissa Mayer, the struggling company’s chief executive, was going to choose that much privacy protection over the revenues arising from harvesting the habits of Yahoo! users. Stamos soon left to become Facebook’s chief security officer—where he would also run afoul of the leadership.
The Facebook that Stamos joined in 2015—eleven years after the company’s storied founding—still thought of itself as a huge pipeline for the vast transmission of content, but not as a publisher. Its business plan was based on the assumption that it would not exercise editorial judgments; instead, like the telephone company, it would carry content but not edit it. Naturally, this was a false analogy: From the start, Facebook made its money not by selling connectivity, but by acting as the world’s seemingly friendly surveillance machine, then selling what it learned about users, individually and collectively. The old phone companies never did that. As my colleague Kevin Roose wrote, “Facebook can’t stop monetizing our personal data for the same reason Starbucks can’t stop selling coffee—it’s the heart of the enterprise.”
Yet the idea that Facebook and its competitors could pursue that strategy and ignore the content of what was appearing on their platforms—and thus avoid editing on a massive scale—lay somewhere between naïveté and delusion. Phone companies had to crack down on telephone fraud; network television couldn’t broadcast pornographic films. Even Netflix confronted limits. So over time Facebook was forced to keep revising its “terms of service,” defining exploitative, racist, or illegal activity (selling drugs, gambling, or prostitution, for example) that would not be permitted. But the company never referred to these as editorial decisions; instead, they were “community standards” that would force it to disable accounts or alert law-enforcement authorities if there was a “genuine risk of physical harm or direct threats to public safety.”
It all had a feel-good sensibility to it—until Facebook tried to define what those policies meant in real life. It began with simple things. Soon it got very complicated.
Child pornography was easy; it was banned early in Facebook’s history. Then parents of newborns discovered that the site was taking down pictures they were sharing of their own babies in the bath. If they reposted them, their accounts were disabled. Then, in 2016, came the first moment when Facebook executives discovered they had to make a news judgment, because, as it turns out, algorithms have no sense of history. Norway’s major daily, Aftenposten, put on a photography exhibit and posted on Facebook the iconic Vietnam War photograph of a young girl running down the road, naked, to escape napalm and mayhem. The photo won the 1973 Pulitzer Prize. Naturally, the picture was immediately banned by Facebook’s algorithms. Aftenposten called the company out, and after a series of hurried conference calls among Facebook executives, the clearly ridiculous deletion was reversed within a day. It was not a hard decision.
But that conference call was, essentially, the first time senior Facebook managers had to think like news editors, balancing their rules against history, artistic sensibility, and, most important, news judgment. It was the moment when they realized that no algorithm could do the job. When I said that to a senior official at the company, he grimaced and asked, “You think there will be more?”
His question reflected how deeply the company was in denial about what was coming. Facebook executives, along with their colleagues at Google, celebrated when their creations helped organize students in Tahrir Square to oust President Hosni Mubarak, and Libyans to overthrow Muammar Gaddafi. “The Arab Spring was great,” Alex Stamos said to me. “Glory days.” But time was not on the side of the democrats. Little thought had been given to what would happen when the world’s autocrats and terrorists caught on, or to the degree to which the same platforms could enable social control, brutality, and repression.
It started with the beheading videos in the early 2000s. And the captured pilot burned alive in a cage. On Twitter, ISIS operatives created the Dawn of Glad Tidings app for their followers to download, which allowed them to send out mass messages detailing recent attacks, complete with coordinated images and hashtags. Social-media companies found that new accounts popped up faster than old ones could be found and killed off.
Of course, the decision to take down the beheading videos was an easy one; they violated the “terms of service.” Yet the companies quickly ran into the same problem the NSA did: There are lots of places to hide on the Internet. ISIS had placed perfect digital copies of their library of horror and recruitment videos all around the world. A predictable arms race ensued. YouTube automated its review systems to speed the process of bringing down videos.
It became an unwinnable game of digital whack-a-mole. As Lisa Monaco, Obama’s homeland security adviser, said, “We are not going to kill our way out of this conflict. And we are not going to delete our way out of it either.”
In 2017, Facebook and others rolled out an impressive technological fix, a “digital fingerprint” for every image, created by turning every beheading photograph or exploitative picture of a child into a black-and-white image and then assigning each pixel a numeric value based on its contrast.
“Let’s say that somebody uploads an ISIS propaganda video,” explained Monika Bickert, a former prosecutor who became Facebook’s representative to governments around the world. “With the fingerprint, if somebody else tries to upload that video in the future we would recognize it even before the video hits the site.” But discerning motive, she conceded, requires human review. “If it’s terrorism propaganda, we’re going to remove it,” she told me. “If somebody is sharing it for news value or to condemn violence, that’s different.”
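In spirit, that kind of fingerprint works like a simple perceptual hash. The sketch below, in Python, is a minimal illustration of the general idea rather than Facebook’s actual system: it assumes the Pillow imaging library, shrinks an image to grayscale, and records whether each pixel is brighter than its neighbor, yielding a short signature that survives re-encoding and small edits and can be compared against a list of known banned images. The file names and the flag_for_review helper in the usage note are hypothetical.

from PIL import Image  # assumes the Pillow imaging library is installed

def fingerprint(path: str, size: int = 8) -> int:
    # Shrink to grayscale so the hash ignores color, resolution, and compression noise.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            # Record a 1 wherever brightness rises between neighbors: a crude per-pixel contrast value.
            bits = (bits << 1) | (1 if right > left else 0)
    return bits  # a 64-bit signature for the default 8x8 grid

def distance(a: int, b: int) -> int:
    # Hamming distance between two fingerprints; a small value suggests the same underlying image.
    return bin(a ^ b).count("1")

# Hypothetical usage: hold a new upload for review if it matches a known banned image.
# if distance(fingerprint("new_upload.jpg"), known_fingerprint) <= 5:
#     flag_for_review()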
Google, meanwhile, tried a different approach: “Google Redirect,” an effort to send people who searched for ISIS propaganda or white-supremacist content to alternative sites that might make them think twice. To make it work, Jigsaw, the company’s New York–based think-tank and experimental unit, interviewed scores of people who had been radicalized, trying to understand their personality traits, starting with their distrust of mainstream media.
“The key was getting them at the right moment,” Yasmin Green, who helped spearhead the effort, told me. It turned out there was a small window of time between when potential recruits developed an interest in joining an extremist group and when they made the decision. Rapid exposure to the personal testimony of former recruits who had escaped the brutalities of life inside ISIS, and who described it in gory detail, was far more likely to dissuade new recruits than lectures about the benefits of a liberal view of the world. So were the accounts of religious leaders who could undercut the group’s argument that it was following the Koran.
“Our job is to get more and better information in the hands of vulnerable people,” Green told me.
But that job required the world’s pipeline providers to create, delete, and choose. In short, they had become editors. Just not fast enough to stay ahead of the Russians.
* * *
Mark Zuckerberg quickly came to regret his dismissive words, six days after Donald Trump’s election, that Facebook had nothing to do with it.
“Personally I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way—I think is a pretty crazy idea,” Zuckerberg wrote. “Voters make decisions based on their lived experience. I do think there is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is they saw some fake news. If you believe that, then I don’t think you have internalized the message the Trump supporters are trying to send in this election.”
Nine days later Zuckerberg was in Peru, at a summit that President Obama was also attending. The president took him into a private room and made a direct appeal: He had to take the threat of disinformation more seriously, or it would come to haunt the company, and the country, in the next election. Zuckerberg pushed back, Obama’s aides later told me. Fake news was a problem, but there was no easy fix, and Facebook wasn’t in the business of checking every fact that got posted in the global town square. Both men left the meeting dissatisfied.
By the time Zuckerberg spoke, however, Alex Stamos and his security team were nearing the end of an excavation of Facebook history, digging into the reports of how the Russians had used ads and posts to manipulate voters. As Stamos dug, he began to run into some quiet resistance inside the company about going further. His study was delivered to the company’s leadership on December 9, 2016, laying out the Russian activity his group had found. But when it was ultimately published four months later under the bland headline “Information Operations and Facebook,” it had been edited down to bare essentials and stripped of its specifics. Russia was not even mentioned. Instead the study referred to anonymous “malicious actors” that were never named. It played down the effects, confusing volume with impact. “The reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues.”
Then it concluded: “Facebook is not in a position to make definitive attribution to the actors sponsoring this activity.”
In fact, they had a pretty good idea by April that “Fancy Bear,” the Russian group directed by the GRU, was behind some of the Facebook activity. The company turned much of the evidence about the ads over to Senate investigators in September. But it wasn’t until the Senate published some of the juiciest examples—from an ad designed to look like it was part of the Black Lives Matter movement to another in which Satan is shown arm-wrestling Jesus and saying, “If I Win Clinton Wins!”—that the company was forced to admit how much propaganda had run on its site.
“The question was, how had we missed all this?” one Facebook executive told me. The answer was complex. The ads amounted to very little money—a few hundred thousand dollars—and it was not obvious they were coming from Russia. Later, after the ads were discovered, Facebook’s lawyers began to worry that if the ads and posts of private users were made public—even those created by Russian trolls—it could violate Facebook’s own privacy policies.
In September 2017, ten months after the election, the company finally began to concede the obvious. It said those who had manipulated Facebook “likely operated out of Russia,” and it turned over 3,000 of these ads to Congress. It had found evidence that the Internet Research Agency created 80,000 posts on Facebook that 126 million people may have seen—though whether they absorbed the messages is another question. But Facebook was still insisting it had no obligation to notify its users that they had been exposed to the material.
“I must say, I don’t think you get it,” Sen. Dianne Feinstein of California, usually a great advocate of her state’s economic champions, said during the resulting hearing. “What we’re talking about is a cataclysmic change. What we’re talking about is the beginning of cyberwarfare.” That wasn’t exactly right: Whether it was warfare depended on how you defined the term. And if it was cyberwar, it wasn’t the beginning, by a long shot.
By the spring of 2018, Facebook was reeling. Additional disclosures that it had given access to its user profiles to a scholar in 2014, who in turn massaged the data and used it to help Cambridge Analytica, a London company that targeted political ads for the Trump campaign, forced Zuckerberg to a new level of contrition. The problem was that Facebook’s users had never signed up for having their lives and predilections examined, then sold, for such purposes. “We have a responsibility to protect your information,” Zuckerberg declared in ads and a series of carefully scripted television interviews. “If we can’t, we don’t deserve it.”
The more telling concession came out of France, where the company announced a radical experiment. It would begin to fact-check photos and videos around elections, it said—just as news organizations have done for decades. Sheryl Sandberg, the company’s chief operating officer and one of the few executives who had serious Washington experience, offered the most candid assessment: “We really believed in social experiences,” she said. “We really believed in protecting privacy. But we were way too idealistic. We did not think enough about the abuse cases.” Yes, it was naïveté. But it was a naïveté that helped drive immense profits—and blinded the company’s top executives to the consequences of how the information entrusted to it by Facebook’s users could be abused.
* * *
Before he became the Pentagon’s resident technology scout and venture capitalist in Silicon Valley, Raj Shah spent twelve years flying an F-16 around Afghanistan and Iraq. Much of that time he wondered why a $30 million aircraft had worse navigation systems than a Volkswagen.
The mapping technology was so ancient that it did not, at a glance, show pilots how close they were to national borders, or the features of cities and towns below them. Worse yet, Shah told me one day as we walked around his Pentagon-created start-up—called DIUx for “Defense Innovation Unit, Experimental”—“I had no way of knowing if I was flying into Iran” by mistake, a potentially fatal error.
When Shah was back home on leave, he often rented a Cessna and zoomed around with a $350 iPad mini strapped to his thigh. With an app called ForeFlight on the mini, he could see exactly where he was and every feature of the landscape below. He could look at the map or a satellite photograph. It tracked him with near-perfect precision. “I knew exactly where I was.” As he thought about how he had better mapping on a beat-up iPad than in America’s workhorse fighter, he could come to only one conclusion: “This is screwed up.”
To Shah, the experience exemplified what was wrong with how the Pentagon equipped war fighters. Systems designed with 1970s technology couldn’t be easily upgraded, because the process of testing to make sure they are “military grade” takes years—by which time the technology is out of date. “This is why we have fifty-year-old aircraft carriers,” Shah said, “with thirty-year-old software.”
When Shah’s flying days were coming to an end, he landed back in Silicon Valley, “where everything runs on speed.” From his new perch, the Pentagon’s mode of operating seemed even more ridiculous. “Who wants a four-year-old phone?” he asked me one day.
Shah had some allies in holding this view, including Ashton Carter, who had spent months in Silicon Valley between the time he was deputy secretary of defense and when Obama brought him back to serve as secretary in February 2015. Carter and his chief of staff, Eric Rosenbach, another major architect of the Pentagon’s cyber efforts, were determined to use their two years to change the culture of the Pentagon. They sponsored a “Hack the Pentagon” challenge, setting up prizes for hackers who could find holes in the security of Pentagon programs (the star of the competition, naturally, was an eighteen-year-old high school senior whose mother dropped him off at the Pentagon to pick up his reward). They tried to get Silicon Valley technologists to spend a year or so in government service, with only partial success. One year was, as one of the experimenters put it, “just long enough for a massive clash of cultures, but not long enough to get much done.”