by Kim Zetter
Something began to change in 2000, however, when the Pentagon’s network defense task force was suddenly told to add offensive operations to its mission and to develop a doctrine for their use. The change in focus also led to a name change. Instead of Joint Task Force–Computer Network Defense, it was now to be called Joint Task Force–Computer Network Operations. The change was kept subtle to avoid attracting attention, Sachs says, but internally it signaled the military’s readiness to begin seriously planning offensive operations.
The questions the task force now had to ponder were many. Was an offensive network attack a military action or a covert operation? What were the parameters for conducting such attacks? Taking out computerized communication systems seemed like an obvious mission for an offensive operation, but what about sabotaging the computer controls of a weapons system to misdirect its aim or cause it to misfire?21 And who should be responsible for conducting such operations? Until then, if the Air Force needed an enemy’s radar system taken out, it worked jointly with the NSA’s electronic warfare team. But the NSA was an intelligence outfit whose primary job was intercepting communications. Taking out the computers that controlled an artillery system seemed more the territory of combat units.
With the addition of the offensive mission to the task force, Maj. Gen. James D. Bryan became the task force’s new commander. But Deputy Defense Secretary Hamre made it clear that defense was still the group’s priority, and that offensive operations were to be mere accessories to conventional military operations, not a replacement for them.
That is, until the terrorist attacks on 9/11, which Bryan recalled, “changed the dynamics for us.” Offensive operations suddenly took on more importance, and for the first time, the group began to approach offensive cyberattacks the way they approached kinetic ones—as a means of taking out targets, not just exploiting computers for intelligence-gathering purposes or to retard their performance. “We actually went out into the combatant commands and asked them for their target list,” he later recalled. “And we actually went through the drill of weighting them and analyzing them and prioritizing them on a global scale.”22
US offensive operations advanced further in 2003 when the Pentagon prepared a secret “Information Operations Roadmap” aimed at turning information warfare into a core military competency on par with air, ground, maritime, and special operations.23 The classified report, released with redactions a few years later, noted that a comprehensive process was already under way to evaluate the capabilities of cyberweapons and spy tools and develop a policy for their use. The latter included trying to determine what level of data or systems manipulation constituted an attack or use of force and what qualified as mere intelligence gathering. What actions could be legally undertaken in self-defense, and what level of attribution was needed before the United States could attack back? Also, could the United States use “unwitting hosts” to launch an attack—that is, transit through or control another system to attack an adversary—if the unwitting host faced retribution as a result?
In 2004, to accommodate this increased focus on offensive operations, the Defense Department split its offensive and defensive cyber operations into two divisions, a move that signaled for many the beginning of the militarization of cyberspace. The defensive division became known as Joint Task Force–Global Network Operations, while the offensive division was called the Joint Functional Component Command–Network Warfare. The latter was housed at Fort Meade, home of the NSA, but placed under the US Strategic Command and the leadership of Marine Corps Gen. James E. Cartwright. But the following year, some say, is when the “cult of offense” really began—when Gen. Keith Alexander took over as director of the NSA from Gen. Michael Hayden, and the focus on developing cyberweapons for warfare ramped up. It was during this period that Operation Olympic Games and Stuxnet were hatched.
Six years later, in May 2010, as Stuxnet was spreading wildly on computers around the world and was about to be exposed, the Pentagon recombined its defensive and offensive cyber operations under the newly formed US Cyber Command. The new division was still part of the US Strategic Command but was under the command of NSA director Alexander, giving the spy chief unprecedented authority over intelligence operations and military cyber operations alike. Three months after the US Cyber Command was formed, the Pentagon formally recognized cyberspace as the “fifth domain” of warfare after air, land, sea, and space.
This was all just formal recognition, however, of activity that had already been occurring in varying degrees for a decade. But due to the classified nature of offensive operations, the public has had only minor hints of these activities as they have leaked out over the years.
In the late ’90s in Kosovo, for example, NATO forces may have used certain cyber techniques “to distort the images that the Serbian integrated air defense systems were generating,” according to John Arquilla, who worked for US Strategic Command at the time.24 President Clinton also reportedly approved a covert cyber operation to target the financial assets of Yugoslavian president Slobodan Milošević in European banks, though there are conflicting reports about whether the operation actually occurred.25 In 2003, when a similar cyberattack was proposed to freeze the financial assets of Saddam Hussein, however, it was nixed by the secretary of the US Treasury out of concern that an attack like this could have cascading effects on other financial accounts in the Middle East, Europe, and the United States.26
In 2007, the US reportedly assisted Israel with a cyberattack that accompanied its bombing of the Al Kibar complex in Syria by providing intelligence about potential vulnerabilities in the Syrian defense systems. As previously noted, before Israeli pilots reached the facility, they took out a Syrian radar station near the Turkish border using a combination of electronic jamming and precision bombs. But the Israelis also reportedly hacked Syria’s air-defense system using on-board technology for an “air-to-ground electronic attack” and then further penetrated the system through computer-to-computer links, according to US intelligence analysts.27 A recent report from the US Government Accountability Office describes air-to-ground attacks as useful for reaching “otherwise inaccessible networks” that can’t be reached through a wired connection.28
In 2011, during the civilian uprising in Libya, there had also been talk of using cyberattacks to sever that country’s military communications links and prevent early-warning systems from detecting the arrival of NATO warplanes. The plan was nixed, however, because there wasn’t enough time to prepare the attack. The need for a longer lead time is one of the primary drawbacks of digital operations—designing an attack that won’t cascade to nontargeted civilian systems requires advance reconnaissance and planning, making opportunistic attacks difficult.29
More recently, leaks from former NSA systems administrator Edward Snowden have provided some of the most extensive views yet of the government’s shadowy cyber operations in its asymmetric war on terror. The documents describe NSA elite hacker forces at Fort Meade and at regional centers in Georgia, Texas, Colorado, and Hawaii, who provide US Cyber Command with the attack tools and techniques it needs for counterterrorism operations. But the government cyberwarriors have also worked with the FBI and CIA on digital spy operations, including assisting the CIA in tracking targets for its drone assassination campaign.
To track Hassan Ghul, an associate of Osama bin Laden who was killed in a drone strike in 2012, the NSA deployed “an arsenal of cyber-espionage tools” to seize control of laptops, siphon audio files, and track radio transmissions—all to determine where Ghul might “bed down” at night, according to Snowden documents obtained by the Washington Post.30 And since 2001, the NSA has also penetrated a vast array of systems used by al-Qaeda associates in Yemen, Africa, and elsewhere to collect intelligence it can’t otherwise obtain through bulk-data collection programs from internet companies like Google and Yahoo or from taps of undersea cables and internet nodes.
Terrorism suspects aren’t the NSA’s only targets, however. Operations against nation-state adversaries have exploded in recent years as well. In 2011, the NSA mounted 231 offensive cyber operations against other countries, according to the documents, three-fourths of which focused on “top-priority” targets like Iran, Russia, China, and North Korea. Under a $652-million clandestine program code-named GENIE, the NSA, CIA, and special military operatives have planted covert digital bugs in tens of thousands of computers, routers, and firewalls around the world to conduct computer network exploitation, or CNE. Some are planted remotely, but others require physical access to install through so-called interdiction—the CIA or FBI intercepts shipments of hardware from manufacturers and retailers in order to plant malware in them or install doctored chips before they reach the customer. The bugs or implants operate as “sleeper cells” that can then be turned on and off remotely to initiate spying at will.31

Most of the implants are created by the NSA’s Tailored Access Operations (TAO) division and given code names like UNITEDRAKE and VALIDATOR. They’re designed to open a back door through which NSA hackers can remotely explore the infected systems, and anything else connected to them, and install additional tools to extract vast amounts of data from them. The implants are said to be planted in such a way that they can survive on systems undetected for years, lasting through software and equipment upgrades that would normally eradicate them.32

In 2008, the NSA had 22,252 implants installed on systems around the world. By 2011, the number had ballooned to 68,975, and in 2013 the agency expected to have 85,000 implants installed, with plans to expand this to millions. But this embarrassment of riches has created a problem for the NSA. With so many implants lurking on systems around the world, the spy agency has been unable in the past to take advantage of all the machines under its control.
In 2011, for example, NSA spies were able to make full use of only 10 percent of the machines they had compromised, according to one Snowden document. To remedy this, the agency planned to automate the process with a new system code-named TURBINE, said to be capable of managing millions of implants simultaneously.33
All of these operations, however—from Kosovo to Syria to Libya, and the ones exposed in the Snowden documents—have focused on stealing or distorting data or using cyber methods to help deliver physical bombs to a target. None involved a digital attack as replacement for a conventional bomb. This is what made Stuxnet so fundamentally different and new.
Stuxnet stands alone as the only known cyberattack to have caused physical destruction to a system. But there are hints that the United States has been preparing for others. In October 2012, President Obama ordered senior national security and intelligence officials to produce a list of foreign targets—“systems, processes and infrastructures”—for possible cyberattack, according to a top-secret Presidential Directive leaked by Snowden.34 Whether the United States actually intends to attack them or just wants to have plans in place in case a situation arises is unclear. But such operations, the directive noted, could provide “unique and unconventional” opportunities “to advance US national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging.”
The surge in offensive operations and the planning for them has been matched by an equal surge in the demand for skilled hackers and attack tools needed by the NSA to conduct these operations. Although most of the implants used by the NSA are designed in-house by the agency’s TAO division, the NSA also budgeted $25.1 million in 2013 for “covert purchases of software vulnerabilities” from private vendors—that is, the boutique firms and large defense contractors who compose the new industrial war complex that feeds the zero-day gray market.35 This trend in government outsourcing of offensive cyber operations is visible in the job announcements that have sprung up from defense contractors in recent years seeking, for example, Windows “attack developers” or someone skilled at “analyzing software for vulnerabilities and developing exploit code.” One listing for defense contractor Northrop Grumman boldly described an “exciting and fast-paced Research and Development project” for an “Offensive Cyberspace Operation (OCO),” leaving little ambiguity about the nature of the work. Others are more subtle about their intentions, such as a listing for Booz Allen Hamilton, the contractor Snowden worked for while at the NSA, seeking a “Target Digital Network Analyst” to develop exploits “for personal computer and mobile device operating systems, including Android, BlackBerry, iPhone and iPad.” Many of the job listings cite both CND (computer network defense) and CNA (computer network attack) among the skills and expertise sought, underscoring the double duty that vulnerability and exploit research can perform in both making systems secure and attacking them.
Who are the people filling these jobs? Sometimes they’re people like Charlie Miller, the mathematician mentioned in chapter 7 who was recruited by the NSA for code and computer cracking. And sometimes they’re former hackers, as wanted by law enforcement for breaking into US government systems as they are coveted by spy agencies for their ability to do the same against an adversary. A shortage of highly skilled candidates in the professional ranks who can fill the demand for elite cyberwarriors has led the military and intelligence agencies to recruit at hacker conferences like Def Con, where they may have to forgive a hacker’s past transgressions or lower their expectations about office attire and body piercings to attract the choicest candidates. One code warrior employed by a government contractor told an interviewer that he worried that his history of hacking US government systems would preclude him from working with the feds, but the staffing company that hired him “didn’t seem to care that I had hacked our own government years ago or that I smoked pot.”36
He described a bit of the work he did as part of a team of five thousand who labored out of an unmarked building in a nondescript office park in Virginia. Workers were prohibited from bringing mobile phones or other electronics into the building or even leaving them in their car.
As soon as he was hired, the company gave him a list of software programs they wanted him to hack, and he quickly found basic security holes in all of them. His group, he said, had a huge repository of zero-day vulnerabilities at their disposal—“tens of thousands of ready-to-use bugs” in software applications and operating systems for any given attack. “Literally, if you can name the software or the controller, we have ways to exploit it,” he said. Patched holes didn’t worry them, because for every vulnerability a vendor fixed, they had others to replace it. “We are the new army,” he said. “You may not like what the army does, but you still want an army.”37
This expansion in government bug-hunting operations highlights an important issue that got little consideration when the DoD task force was first developing its offensive doctrine a decade ago, and that even today has received little public attention and no debate at all in Congress—that is, the ethical and security issues around stockpiling zero-day vulnerabilities and exploits in the service of offensive operations. In amassing zero-day exploits for the government to use in attacks, instead of passing the information about holes to vendors to be fixed, the government has put critical-infrastructure owners and computer users in the United States at risk of attack from criminal hackers, corporate spies, and foreign intelligence agencies who no doubt will discover and use the same vulnerabilities for their own operations.
As noted previously, when researchers uncover vulnerabilities, they generally disclose them to the public or privately to the vendor in question so that patches can be distributed to computer users. But when military and intelligence agencies need a zero-day vulnerability for offensive operations, the last thing they want to do is have it patched. Instead, they keep fingers crossed that no one else will discover and disclose it before they’ve finished exploiting it. “If you’ve built a whole operational capability based on the existence of that vulnerability, man, you’ve just lost a system that you may have invested millions of dollars and thousands of man hours in creating,” Andy Pennington, a cybersecurity consultant for K2Share, said at a conference in 2011. Pennington is a former weapons-systems officer in the Air Force whose job before retiring in 1999 was to review new cyberspace technologies and engineer next-generation weapons for the Air Force.38 “You are not going to hire teams of researchers to go out and find a vulnerability and then put it on the web for everybody to see if you’re trying to develop [an attack for it],” he later said in an interview.39 “We’re putting millions of dollars into identifying vulnerabilities so that we can use them and keep our tactical advantage.”
But it’s a government model that relies on keeping everyone vulnerable so that a targeted few can be attacked—the equivalent of withholding a vaccination from an entire population so that a select few can be infected with a virus.
Odds are that while Stuxnet was exploiting four zero-day vulnerabilities to attack systems in Iran, a hacker or nation-state cyberwarrior from another country was exploiting them too. “It’s pretty naïve to believe that with a newly discovered zero-day, you are the only one in the world that’s discovered it,” Howard Schmidt, former cybersecurity coordinator for the White House and former executive with Microsoft, has said. “Whether it’s another government, a researcher or someone else who sells exploits, you may have it by yourself for a few hours or for a few days, but you sure are not going to have it alone for long.”40