Cartwright looked at Strategic Command’s arsenal and began to ask a big question: Are these really the weapons that will keep the nation safe in the next half century? There were safety issues: the nuclear arsenal was aging; missile silos were still using eight-inch floppy disks. The missileers working inside the silos were dispirited; not only were their command posts damp and out of date, but staffers were running through mind-deadening procedures preparing for an order that would probably never come.
Cartwright was equally concerned about the strategic vacuum. America’s reliance on nuclear deterrence was actually restricting a president’s ability to deal with the kind of adversaries the United States was facing every day, from the Middle East to East Asia. Because the consequences and casualties of using a nuclear weapon were so huge that they were paralyzing, Cartwright began to think strategically about the new cyberweapons that Rumsfeld had put under his command. They presented a huge intellectual puzzle and, as Hayden remembered later, “Hoss was strangely underemployed at Stratcom.” He began thinking about how cyberweapons could expand a president’s choices after decades in which nuclear weapons had limited them.
“The tools available to a president or nation in between diplomacy and military power were not terribly effective,” Cartwright told the US Naval Institute in 2012. He had by then left military service and was only beginning to unspool his thinking on this problem. What American presidents needed, he believed, were more coercive tools that could back up diplomacy. And nuclear weapons did not serve that purpose. No adversary thought an American president would ever reach for a nuclear weapon, except if the survival of the United States were at stake.
In his years at Strategic Command, Cartwright later said, he kept looking for new technologies the military could actually employ and, preferably, exploit so that the United States could prevail in a fight without ever firing a shot. These cyberweapons were what he called “speed-of-light” weapons—repurposed “electronic warfare” weapons that could disable an adversary’s communications or paralyze its defenses. Others were directed energy weapons, such as lasers. Unlike nuclear weapons, these could be used in a first strike.
More important, beyond the damage they could inflict in wartime, cyberweapons had a coercive power in peacetime. Cartwright talked about using these weapons “to reset diplomacy,” or to force a country to realize that it had little choice other than agreeing to negotiate. When he gave his 2012 speech, Cartwright never once made reference to Iran, but he didn’t need to do so. To anyone watching the world scene at the time—a moment when the United States was simultaneously preparing to negotiate with Tehran and to go to war with it—his meaning was obvious.
Soon after Rumsfeld handed cyberwarfare to Strategic Command, a skunk works of sorts popped up there, exploring what it would take to deploy these weapons, how they should be used, and how the military’s role in marshaling them would be different from the NSA’s role. Over time, what emerged from Cartwright’s creation was a prototype of what today is the US Cyber Command, although then it existed largely on paper and was barely staffed.
In 2007, with wars still raging in the Middle East and South Asia, Cartwright moved on to become vice chairman of the Joint Chiefs of Staff. It was a rough transition. He wasn’t an Iraq veteran, a liability at a time when that distinction was cherished as a prerequisite for higher command. Tension developed between Cartwright and the chairman of the Joint Chiefs, Adm. Mike Mullen, and worsened over time. Despite these tensions, it was from this post that Cartwright began to put America’s cyber forces into action.
* * *
In January of that same year, 2007, the director of national intelligence, John D. Negroponte, presented Congress with the annual worldwide threat assessment, an exercise that the nation’s top intelligence officials understandably despised. It forced them to rank—in public—the major threats to the United States, and often it was only an exercise in telling Congress what it wanted to hear. But as a snapshot of national fears and obsessions at any given moment, it was nonetheless revealing.
When Negroponte settled into the witness chair that January day, he opened with a blunt statement: “Terrorism remains the preeminent threat to the homeland.” Senators nodded in agreement. Dig further into his report, however, and one fact leaped out: cyberattacks did not even make the list. They were totally absent.
Yet even then, the nation’s intelligence chiefs knew well that the daily skirmishing among superpowers was, if anything, intensifying. Chinese attacks on American companies—including military contractors—were ramping up. By 2008, the year after Negroponte testified, Chinese hackers working for the People’s Liberation Army were inside Lockheed Martin’s networks, making off with plans related to the F-35, the world’s most sophisticated, and certainly most expensive, fighter jet. Later that year they hacked the campaigns of Barack Obama and John McCain, rivals for the presidency. Lisa Monaco, who was running the national security division of the Justice Department at the time, remembers clearly the first time she met Obama’s senior staff. “I went out to explain to them that the Chinese were all over their system,” she said with a laugh years later, when she was the Homeland Security Advisor at the White House and overseeing the effort to bolster the nation’s cyber defenses.
But the true wake-up call came on October 24, 2008, with the nation on the brink of Obama’s election. Debora Plunkett remembers it well. A month into a new job running the NSA’s Advanced Network Operations division, she was assigned to develop and deploy tools to determine if anyone was inside, or trying to get inside, the US government’s classified networks.
Plunkett hadn’t taken a conventional route to the NSA. The daughter of a long-distance trucker, she had grown up not far from Fort Meade but had never heard of the agency until after college. Coming off two tough years in forensics with the Baltimore Police Department, she was advised by a friend’s boyfriend who worked for the NSA to take the entrance exam. She was given only a vague description of the agency’s work, but for Plunkett, who loved puzzles, what she heard sounded intriguing. She passed the exam and joined the NSA in 1984.
Over the next quarter century, Plunkett became one of the few African American women to rise within the NSA leadership. “I was quite often the only minority and absolutely the only minority woman in my workspace and organization,” she said. She climbed from the cryptography section to her position running the ANO and soon found herself leading a search for network intruders.
On a brisk fall day at Fort Meade in 2008—just ahead of Obama’s election—Plunkett’s team found something that made her blood run cold: Russian intruders in the Pentagon’s classified networks. It was a first for the Defense Department, which had never—until that moment—discovered a breach of SIPRNet, short for the unwieldy “Secret Internet Protocol Router Network.” SIPRNet was far more than an internal network: it connected the military, senior officials in the White House, and the intelligence agencies. In short, if the Russians were in that communication channel, they had access to everything that mattered. Plunkett recalls that “pretty soon we went straight to Alexander,” meaning Gen. Keith Alexander, then the director of the NSA.
Investigators raced to figure out how the Russians had gotten inside. The answer was pretty shocking: The Russians had left USB drives littered around the parking and public areas of a US base in the Middle East. Someone picked one up, and when they put the drive in a laptop connected to SIPRNet, the Russians were inside. By the time Plunkett and her team made their discovery, the bug had spread to all of US Central Command and beyond and begun scooping up data, copying it, and sending it back to the Russians.
It was a bitter lesson for the Pentagon, which turned out to be easy pickings for attackers using a technique that the CIA and NSA had often used to get into foreign computer systems. “People worked through the night to come up with a solution,” Plunkett recalled. “We were able to develop what we thought was a reasonable solution that ended up being a very good solution.” The fix—called Operation Buckshot Yankee—was deployed by the Pentagon later that day. Then, to keep a similar breach from happening again, USB ports on Department of Defense computers were sealed with superglue.
But the damage had already been done. As William Lynn, then deputy secretary of defense, later explained, the intrusion “was the most significant breach of U.S. military computers ever, and it served as an important wake-up call.”
Perhaps so, but not everybody woke up. After leaving the NSA, Plunkett told me that for all her efforts—and they were considerable—she remained amazed by how easily outsiders appeared able to break into government and corporate systems. With every major hack, “folks like me will say—this will be the moment, this is the watershed moment. And it never was,” she added, “because we’re so lax about security and so inconsistent in investing in security.
“We just make it easy for them.”
* * *
While Plunkett was trying to fortify the Pentagon’s networks against the Russians, the NSA’s offensive team, working not far away on the Fort Meade campus, was already making centrifuges blow up in Natanz.
Prodded by General Cartwright, Keith Alexander at the NSA, and a range of other intelligence officials, President Bush had authorized a covert effort to inject malicious code into the computer controllers at the underground Iranian plant. Part of the plan was to slow the Iranians and force them to the bargaining table. But an equally important motivation was to dissuade Prime Minister Benjamin Netanyahu of Israel from bombing Iran’s facilities, a threat he was making every few months. Bush took the threat very seriously. Twice before the Israelis had seen threatening nuclear projects under way, one in Iraq, the other in Syria. They had destroyed them both.
Olympic Games was a way to keep the Israelis focused on crippling the Iranian program without setting off a regional war. But getting the code into the plant was no easy task. The Natanz computer systems were “air gapped” from the outside, meaning they had no connections to the Internet. The CIA and the Israelis endeavored to slip the code in on USB keys, among other techniques, with the help of both unwitting and witting Iranian engineers. With some hitches, the plan worked reasonably well for several years. The Iranians were mystified about why some of their centrifuges were speeding up or slowing down and ultimately destroying themselves. Spooked, they pulled other centrifuges out of operation before those met the same fate. They started firing engineers.
At Fort Meade and at the White House, the subterfuge seemed successful beyond anything its creators had hoped. And then it all went wrong.
No reporter or news organization exposed Olympic Games. The governments of the United States and Israel managed to do so all by themselves, by mistake. There has since been a lot of finger-pointing about who was responsible, with the Israelis claiming the United States moved too slowly, and the United States claiming the Israelis became impatient and sloppy. But one fact is indisputable: the Stuxnet worm got out into the wild in the summer of 2010 and quickly replicated itself in computer systems around the world.
It showed up in computer networks from Iran to India, and eventually even wound its way back to the United States. Suddenly everyone had a copy of it—the Iranians and the Russians, the Chinese and the North Koreans, and hackers around the globe. That is when it was given the name “Stuxnet,” a blend of keywords drawn from inside the code.
In retrospect, Operation Olympic Games was the opening salvo in modern cyber conflict. But at the time, no one knew that. All that could be said for sure was that a strange computer worm floating around the world had emanated from Iran, and in that summer of 2010 Iran’s nuclear program seemed a natural target.
In the newsroom of the Times, we had been on high alert for any evidence that a cyberweapon, rather than bombs and missiles, was being aimed at Iran’s nuclear complex. In early 2009, just as Obama was preparing to take office, I reported that President Bush had secretly authorized a covert plan to undermine electrical systems, computer systems, and other networks on which Iran relies, in the hopes of delaying the day that Iran could produce a workable nuclear weapon. Eighteen months later, no one was surprised when evidence began to mount that Stuxnet was the code we had been looking for.
Soon an unbeatable team of cyber sleuths—Liam O’Murchu and Eric Chien of Symantec—grew intrigued. They were the odd couple of cyber defense: O’Murchu a boisterous Irishman with a thick brogue who raised the alarm at Symantec, and Chien the quiet engineer who dug in. For weeks the pair ground away at the code. They ran it through filters, compared it to other malware, and mapped how it worked. “It’s twenty times the size of the average piece of code,” but contained almost no bugs, Chien recalled later. “That’s extremely rare. Malicious code always has bugs inside of it. This wasn’t the case with Stuxnet.” He admired the malware as if he were an art collector who had just discovered a never-before-seen Rembrandt.
The code appeared to be partially autonomous; it didn’t require anyone to pull the trigger. Instead, it relied on four sophisticated “zero-day” exploits, which allowed the code to spread without human help, autonomously looking for its target.*1 This fact provided a crucial clue to Chien and O’Murchu: such vulnerabilities are rare commodities, hoarded by hackers, and sold for hundreds of thousands of dollars on the black market. It became clear that Stuxnet couldn’t be the work of an individual hacker, or even a team of hobbyists. Only a nation-state could have the resources—and the engineering time—to assemble such a sophisticated piece of code. “It blows everything else out of the water,” O’Murchu told me later.
Unsurprisingly, the two men grew paranoid about who might be watching them as they watched the code. Half joking, Chien told O’Murchu one day, “Look, I am not suicidal. If I show up dead on Monday, you know, it wasn’t me.”
Stuxnet’s inner workings harbored another clue that Iran’s nuclear program was the malware’s target. The worm seemed to be probing for something, in this case a specific kind of hardware known as a “programmable logic controller” made by Siemens, the German industrial giant. These are specialty computers that control water pumps, air-conditioning systems, and much of what happens in a car. They turn valves on and off, control the speed of machines, and watch over an array of sophisticated, modern-day production operations: In chemical plants, they control the mix. In water plants, they control fluoridation and flow. In power grids, they control electricity. And in nuclear enrichment plants, they control the operation of the giant centrifuges that spin at supersonic speeds.
Chien and O’Murchu began publishing their findings in the hope that someone out there was expert in the kind of systems this strange code seemed to be targeting. Their plan worked. One expert in Holland explained to them that part of the code they had published was searching for “frequency converters,” devices that regulate the frequency of an electric current and, with it, the speed of the motors it powers.
There aren’t many innocent explanations for sneaking into someone’s infrastructure to change the flow of an electric current. And in Iran’s nuclear facility at Natanz, frequency converters played a critical role: they were part of the control system for nuclear centrifuges. And the centrifuges, the US government’s experts knew from their own bitter experience, were highly sensitive. Because they spun at supersonic speeds, any dramatic change—triggered, say, by a change in current—could send the rotors out of kilter, like a child’s wobbling top. When they became unstable, the centrifuges would blow up, taking out any machinery or people nearby. Uranium gas would be spilled all over the centrifuge hall.
In short, to stop the Bomb, America’s new cyber army had made a bomb—a digital one.
As Iran’s centrifuges were spinning out of control, the operators at Natanz had no idea what was happening. The data that showed up on their screens seemed normal—the speed, the gas pressure. They had no way of knowing that the code was faking them out and suppressing the signs of imminent disaster. By the time the operators figured out something was dangerously wrong, they could not shut down the system. The malware had affected that process too.
There were other clues. Although the malware eventually infected computers around the world, it kicked into gear only when it found a very specific combination of devices: clusters of 164 machines. That number sounded pretty random to malware sleuths, but it set off my mental alarms. The centrifuges at the Natanz nuclear facility—I knew from years of covering Iran’s nuclear program and interviewing inspectors from the International Atomic Energy Agency—were organized in groups of 164.
That left little mystery about the intended target.
The following summer and fall, two Times colleagues, Bill Broad and John Markoff, and I published several stories about the hints emerging from the Stuxnet code. Markoff uncovered stylistic and substantive evidence of Israel’s role in the code writing. Next, we found one of several American calling cards embedded in the code—an expiration date, when the code would drop dead. Teenagers don’t put expiration dates into their code. Lawyers do—for fear that malware could become the digital equivalent of an abandoned land mine in Cambodia, waiting for someone to step on it two decades after it was planted. Finally, Bill Broad discovered the final clue we needed: evidence that the Israelis had built a giant replica of the Natanz enrichment site at their own nuclear weapons site, Dimona. (We didn’t yet know the United States was doing the same thing in Tennessee.) The purpose was clear: both countries were building models to practice their attacks, much as the United States built a model of Osama bin Laden’s house in Abbottabad, Pakistan, around the same time, to practice the impending raid against the world’s most wanted terrorist.