Senior U.S. intelligence and technology officials have long warned that the next “Pearl Harbor” may be electronic. As Noah Shachtman of the Brookings Institution think tank’s 21st Century Defense Initiative has said, “But now we know that what they were talking about wasn’t what other people might do to us; it was what we were doing to others.”5
As a result, calls for laws that would give the government more control over the dot-com domain have new, sinister undertones. The legitimate concerns about how this protection scheme would work, or whether it would stifle innovation or compromise civil liberties, now must be paired with a fact: for every public expression of law, there is also a covert purpose being served.
Meanwhile, attempts to draw boundaries around the global cyber “commons” may become next to impossible. That isn’t to say that there won’t be cooperation—there are more than a dozen international organizations that already, in a way, regulate parts of the Internet. Countries actively cooperate on cyber crime. Even the United States and China quietly partner to thwart copyright violators. But from the standpoint of each country’s political economy, there is little incentive to sign treaties that constrain action if the prime mover of those treaties has already violated the sovereignty of another country. (International laws, both formal and customary, obviously allow a country to protect itself using its military, but there is a real argument about whether they allow preemptive strikes.)
In the end, the U.S. officials who approved OLYMPIC GAMES decided that America’s national security interests demanded an action that, if revealed, might hinder its long-term interests. Our enemies in the electronic battlespace will help determine whether it was worth it.6 “I think there is a big difference between government-supported economic espionage (China) and geopolitical covert actions. I am not saying one is better or worse, but they are quite different and probably shouldn’t be conflated,” a former administration official insists. “But it is a distinction without a difference, at least for now.”
One of the country’s most senior experts on cyber warfare, a person who currently serves in a position to influence policy, gave an unequivocal answer to the question of whether the narrative change—from basically assumed to definitely confirmed—would make things more difficult for the U.S. government both militarily and diplomatically. “Certainly. The sad part is that it will be a nightmare for us whether or not it is true,” the official said. “I think Sanger’s article is a critical milestone regardless of its accuracy.”
Long-serving intelligence experts like this one operate on a different time horizon than do the political appointees and staff who directly serve the president. It would surprise many Americans who are critical of the CIA that the Agency often resists requests from the executive branch for covert action because it has learned from mistakes. Generally, covert action should be the action of last resort, when all other alternatives have failed. Covert actions can span several presidencies. CIA directors are often the most hesitant. As former director Richard Helms wrote, “At its best, covert action should be used like a well-honed scalpel, infrequently, and with discretion lest the blade lose its edge.”7 The problem, as former director William Colby wrote, is that covert operations often involve a lot of people, and “one man has the power to frustrate the whole thing.”8
A week or so before Sanger’s story hit the press, researchers in Europe announced the discovery of a highly sophisticated computer bot that sat undetected on several hundred seemingly deliberately chosen personal and business computers. It was dubbed “Flame.” State sponsorship was a given. A former U.S. intelligence official said that Flame was the NSA’s first major cyber exploitation effort after President Bush signed a finding allowing the intelligence community to do “whatever is necessary to bring down Al Qaeda and its leadership.”9 The virus took years to code and test. In 2008, using conventional spear-phishing techniques by way of email, it was unleashed on several targets, including Iranian proxies in the al-Qaeda network, and more than a thousand suspected peripheral players and financiers. (How did the NSA get their email addresses? Even cursory attempts to answer that question point to the cooperation of companies that store and process email, most of them based in the United States.)
By tracking its own spread from targeted computer to, perhaps, the computer of someone theretofore unknown, Flame traced the flow of money, resources, and people who, whether for reasons of virtue or vice, associate with terrorists. Given the sophistication of the viruses, it is hard to imagine that the computer scientists and managers who wrote up the extensive read-aheads that go along with any major covert action did not anticipate the reality that each program would operate until—not if—it was publicly disclosed. It is hard to hide anything from anyone on the Internet. But more on Flame in a moment.
On its face, the collective response by Congress to Stuxnet would seem to be an overreaction. But there are institutional reasons such a response is merited. For one, human beings who are asked to keep something secret do not react well to a double standard that allows others to disclose it without consequence. Members of Congress are just such human beings. Their access to secrets of the executive branch is contingent upon whether they prove responsible with that information. Who determines whether Congress is “responsible”? The executive. The same people, in other words, who leaked the details of Stuxnet. (Concerning the legal obligation of the executive branch to brief the legislative branch on covert operations, once a finding has been transmitted to Congress, the CIA can basically tell overseers that the covert action is working, or working well, or not working very well, and get away with providing little supplemental detail.)
Tension between the branches flared up after the bin Laden raid, when the armed services and intelligence committees received very little information that didn’t make its way almost immediately into the press. To some on the congressional intelligence committees, the administration is simply too proud of its own accomplishments, and President Obama so sensitive to the notion that he is not tough when it comes to fighting terrorism, that post facto disclosures (for example, of successful drone strikes or thwarted terrorist attacks) are seen as legitimate ways of messaging.
The charge is not without evidence. The administration did in fact provide filmmakers Mark Boal and Kathryn Bigelow with a special briefing about the raid, and their movie about members of elite special operations forces suddenly had a new ending.∗ Meanwhile, the U.S. Special Operations Command cooperated extensively with Nicholas Schmidle of the New Yorker, allowing his article to accurately channel the thoughts of Navy SEALs who were on the raid’s stealth Black Hawk that night. In both cases, a deputy commander of SEAL Team Six was offered as a source of guidance on the orders of Mike Vickers, who was, at the time, the chief civilian special operations manager in the Pentagon. When a Freedom of Information Act request uncovered internal emails testifying to this fact, the SEAL’s name was redacted.†
In a hyperpartisan state run by men and women seeking validation wherever it might be found, with an aggressive press corps running a twenty-four-hour news cycle watched and read by a society embracing openness with heedless abandon, and with technology that allows Libraries of Congress worth of classified material to be moved from the deep state to the public domain in a matter of minutes, it is clear that secrecy as we know it has reached a precipice. The modern state must now confront leaks not as an aberration but as an inevitability.
Computer scientists at Kaspersky Lab analyzed Flame and compared it with Stuxnet. They discovered a common section of code that proved conclusively that the two viruses were developed in tandem by the same organization. Because the story of Stuxnet leaked, we now know that the NSA is also responsible for Flame.
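Kaspersky’s actual forensic work compared the internals of the two programs’ modules; the precise technique is not described here. But the underlying intuition, that reused code leaves matching byte sequences behind in both binaries, can be illustrated with a toy sketch. The samples and similarity threshold below are invented for illustration, not drawn from either virus.

```python
# Toy illustration: estimate shared code between two binaries by
# comparing their sets of overlapping byte n-grams (Jaccard similarity).
# This is NOT Kaspersky's method; it only sketches the general idea
# that a reused module leaves identical byte sequences in both files.

def ngrams(data: bytes, n: int = 8) -> set:
    """Return the set of all length-n byte windows in `data`."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 8) -> float:
    """Jaccard similarity of the two binaries' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Two hypothetical samples that embed the same (made-up) code section:
shared = bytes(range(64)) * 4           # stand-in for a reused module
sample_a = b"\x90" * 128 + shared       # padding + shared section
sample_b = shared + b"\xcc" * 128       # shared section + different padding
print(similarity(sample_a, sample_b))   # well above zero: shared code
```

A high overlap between otherwise unrelated binaries is what prompts an analyst to look closer; real attribution work then examines whether the matching regions are meaningful code rather than common library boilerplate.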
This makes the work of our cyber warfare group more difficult, because any future cyber weapons will now have to be engineered from scratch in order to allow for deniability, which is essential to covert operations. In the coming years, this will become a serious problem. In the real world, it would be like having to reinvent the sniper rifle every time we have to quietly kill someone.
∗Marc Ambinder and Mark Boal met once to exchange details and thoughts on Neptune’s Spear. Boal did not tell Ambinder who his sources were.
†The Special Operations Command asked the authors to avoid revealing his real name.
Notes
1. CNN, The Situation Room, Transcript, June 6, 2012, http://www.cnn.com/TRANSCRIPTS/1206/06/sitroom.02.html.
2. Dylan Byers, “Kerry Questions NYT Decision to Run Stories,” Politico, June 12, 2012, http://www.politico.com/blogs/media/2012/06/kerry-questions-nyt-decision-to-run-stories-125498.html.
3. Michael Joseph Gross, “A Declaration of Cyber-War,” Vanity Fair, April 2011, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104.
4. Kim Zetter, “How Digital Detectives Deciphered Stuxnet, the Most Menacing Malware in History,” Wired, July 11, 2011, http://www.wired.com/threatlevel/2011/07/how-digital-detectives-deciphered-stuxnet/all/1.
5. “Cyber Attacks Linked: Security Experts,” Daily Telegraph, June 12, 2012, http://www.windsorstar.com/technology/Cyber+attacks+linked+Security+experts/6767097/story.html.
6. Portions of this chapter were first published online in a blog post for the Atlantic’s website: Marc Ambinder, “Did America’s Cyber Attack on Iran Make Us More Vulnerable?,” June 5, 2012, http://www.theatlantic.com/national/archive/2012/06/did-americas-cyber-attack-on-iran-make-us-more-vulnerable/258120/.
7. Richard Helms and William Hood, A Look over My Shoulder: A Life in the Central Intelligence Agency (New York: Random House, 2003), 184.
8. Marcus Eyth, “The CIA and Covert Operations: To Disclose or Not to Disclose—That Is the Question,” Brigham Young University Law School, http://www.law2.byu.edu/jpl/Vol%2017.1/Eyth-pdf.pdf.
9. Tabassum Zakaria, “CIA Gets ‘New Leeway’ to Destroy bin Laden Covertly,” Middle East Times, October 26, 2001, http://www.metimes.com/2K1/issue2001-43/reg/cia_gets_new.htm.
CHAPTER 19
The Next Battlespace
On April 30, 2009, during a national security symposium at the Ritz-Carlton in Tysons Corner, Virginia, Melissa Hathaway, then acting senior director for cyber-security policy at the National Security Council, enthused about the “unprecedented transparency” of her soon-to-be-unveiled review of federal cyber policies. President Barack Obama had promised to elevate the issue within the bureaucracy and had suggested a new age of open discussion about the technological and security challenges posed by the age of ubiquitous, instantaneous communication. Hathaway said that the administration would even release a legal appendix to the report that laid out the complex web of authorities governing cyber law, as well as the gaps that Congress had to address.
But when an unclassified version of Hathaway’s report was released several months later, there was no legal white paper. A footnote in the appendix of the main report notes that the legal analysis was not intended to be of the type that would or could influence policy, and the report itself calls for a new interagency legal review team—a team that would produce products for internal, executive-branch-only deliberation. A senior administration official explained that although the cyber policy questions that the lawyers debated were obvious and common, the “mere fact that we recognize them could be of use to the enemy.” In other words, merely because the review sought the formal opinion of lawyers from the Department of Defense, the CIA, Homeland Security, the Justice Department, and the National Security Agency, releasing it might somehow provide those with nefarious intentions a guidebook to exploit the gaps in U.S. law. (It was also true, as another official later explained, that the lawyer responsible for clearing the paper for publication was tied up with other matters—he was also the chief NSC attorney in charge of approving covert action—and simply let the cyber issue slide.)
Hathaway had left the government by then, but her successor, Howard Schmidt, did not understand why the review had to be classified at all. He told a colleague that there was nothing in there that the government hadn’t already acknowledged. Hathaway made it very clear that the White House overruled her decision to release the legal annex. Administration officials dispute the idea that it was her decision to make in the first place.
The partially finished classified legal annex—a copy of which was obtained and read to us by a consultant outside of government—was written for public consumption. It makes scant reference to controversies about whether the government has the authority to, for example, unilaterally shut down a piece of critical cyber infrastructure during a major cyber attack, or what the rules of engagement should be if a nation-state uses a cyber weapon to attack the United States.
The classified review very closely tracks a PowerPoint presentation delivered at a Texas Law Review symposium in 2010 by Sean Kanuck, a CIA consultant who would later become the first national intelligence officer for cyberspace. Kanuck’s presentation had to be cleared for release by the CIA. It notes the various declarations of major countries on cyber aggression, as when President Obama declared critical cyber infrastructure to be a national security asset. The presentation notes that if country A attacks country B, the laws of country B will determine, absent an international consensus, what the proper response should be. Kanuck’s unclassified presentation makes a point that the classified review finds too secret to be released: current technology is not sufficient to allow governments to verify, much less monitor, the activities of nation-states in the way they do for arms control treaties. Another obvious and unclassified point that Kanuck makes—another government secret in the White House review—is that the risk of cyber escalation is grave, because a country will be tempted to respond if it thinks another country is behind an attack, and such escalation could easily be premised on false assumptions. It is not easy to pinpoint the source of an attack without first gathering intelligence. Assuming it’s easy (and, to be clear, it’s not) to attribute the cyber penetration of an American defense contractor to one of China’s hacker schools, it is more difficult by orders of magnitude to prove that the Politburo in Beijing sanctioned the attack.∗
With the exception of cyber warfare capabilities like OLYMPIC GAMES and the location of the central servers through which U.S. government traffic is screened, there aren’t very many secrets associated with cyber security, and certainly not enough to justify the intense secrecy associated with federal cyber-security policy. Serious national security harm could come from the disclosure of particular government vulnerabilities or by revealing, for example, how the U.S. intelligence community tracks and archives jihadist websites, or precisely how it engages in offensive cyber warfare against enemies of the state. But that activity comprises a tiny fraction of what cyber-security policy covers. And America’s strategic adversaries in the cyber domain—China, Russia, and occasionally Israel—know about these practices in detail, because they engage in the same ones. The U.S. government might as well quarantine anyone it identifies as a hacker, so obvious are its cyber secrets to people who spend their days coding for fun and malice. Mike McConnell, the former director of national intelligence and now a senior vice president for the Booz Allen Hamilton consulting firm, wants to declassify almost everything cyber-related. He believes that secrecy significantly distorts the way the public comprehends the cyber problem and provides the wrong types of incentives to Congress. At a time when budgets are crunched, he wants more resources devoted to the cyber threat, which he believes at this point is primarily economic.
The overwhelming bulk of U.S. Internet traffic is commercial. The secrecy associated with cyberspace seeps into the public debate, engenders mistrust of government, and often blocks an honest discussion of what’s at stake. On top of the formal secrecy associated with cyber policy debates, there is an informal, but perhaps more toxic, conspiracy of silence between the government and private industry when it comes to detecting, deterring, and responding to cyber attacks against the stuff that regular citizens rely on. Until a recent spate of state laws began requiring companies to disclose when they’ve been penetrated by hackers, companies were extremely reluctant to acknowledge that their Internet infrastructure had been compromised. That reluctance makes sense for public companies with fiduciary duties to shareholders, or for private companies with images to protect. Similarly, no bank would voluntarily disclose that it had been robbed. But when banks are robbed, customers find out about it because police investigations become part of the public record. Because the Federal Deposit Insurance Corporation insures accounts up to $250,000, customers don’t lose money. Smaller banks have every incentive to spend more money on security up front to prevent or deter robberies in the first place, while large banks are able to spread the losses from a single robbery across other branches.
In the cyber realm, the incentives differ. The Secret Service and the FBI, which investigate most large cyber crimes, don’t disclose their investigations. Companies don’t have to disclose cyber attacks unless data they retain on private citizens is breached. (McConnell’s own Booz Allen Hamilton, which is synonymous in government circles with cyber-security consulting, was conspicuously silent when some of its front-end servers were attacked in 2011. In 2009, Lockheed Martin tried to keep secret a penetration of data banks holding information about the F-35 Lightning II, the most expensive acquisition project in the history of the Air Force. In 2011, it bragged about detecting and defeating another attempt.)1 An obvious consequence of this is that when the press discovers a cyber penetration, the company that didn’t initially disclose it looks as though it had something to hide. Trust atrophies.
For the most part, the public cyber debate stalls because of secrecy. Civil libertarians worry about a so-called Internet kill switch—that is, whether the president can shut down parts of the Internet if it becomes infected in such a way that seriously compromises national security. They’d like legislation to address this. The White House doesn’t think anything else needs to be said about it. Does the president have that authority? Of course he does—he’s had it for seventy-five years, since the Communications Act of 1934, and well before the Defense Department even conceived of such a thing as the Internet. (Indeed, before the United States conceived of a Defense Department.) But the administration won’t admit this—it’s a secret—and so they only have themselves to blame if cyber legislation gets hung up on issues they’re afraid to debate.