Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon

by Kim Zetter


  The criminal underground will benefit from the wealth of government-funded research and development put into digital weapons and spy tools, as it already has from Stuxnet and the arsenal of tools used in conjunction with it. After Duqu was discovered in 2011, for example, exploits attacking the same font-rendering vulnerability it had used showed up in various ready-made toolkits sold in the criminal underground. Within a year, it had become the vulnerability criminals most commonly targeted to surreptitiously install banking Trojans and other malware on machines.35 But even when non-state hackers can’t replicate a sophisticated government attack bit for bit, they can still learn and benefit from it, as shown by Microsoft’s discovery that it would have taken criminal hackers just three days to pull off a low-rent version of the Windows Update hijack that Flame performed.

  Brad Arkin, senior director of product security and privacy for Adobe, has said that his company’s primary security concern these days is not the criminal hacker but the high-level, state-sponsored hacker, who comes flush with wealth and a suitcase full of zero-day exploits to attack Adobe’s software. “In the last eighteen months, the only [zero-day holes] found in our software have been found by … carrier-class adversaries,” he said at a conference in 2011. “These are the groups that have enough money to build an aircraft carrier. Those are our adversaries.”36 The exploits used against Adobe products are “very, very expensive and difficult to build,” Arkin said, and once they’re designed and used by nation-state hackers, they trickle down to the crimeware tools.

  The nation’s chief cyberwarrior, NSA’s General Alexander, acknowledged this trend to a Senate committee in 2013. “We believe it is only a matter of time before the sort of sophisticated tools developed by well-funded state actors find their way to groups or even individuals who in their zeal to make some political statement do not know or do not care about the collateral damage they inflict on bystanders and critical infrastructure,” he said.37 Alexander was referring to the well-funded tools that countries like China create to attack the United States, but no one on the committee asked him about the contributions his own agency was making to the pool of tools and techniques that criminal hackers and hacktivists would adopt. Nor did they ask about the ethics and consequences of stockpiling zero-day exploits and withholding information about security vulnerabilities from US system owners so the government can use them to attack the systems of adversaries.

  Michael Hayden notes that there have always been strategic tradeoffs between building offensive capabilities and strengthening defenses. One of the core concepts the government has traditionally used in making tradeoffs in the kinetic realm—one that also applies to the cyber realm—is something known as NOBUS, or Nobody But Us.

  “Nobody but us knows it, nobody but us can exploit it,” he told me. “How unique is our knowledge of this or our ability to exploit this compared to others?… Yeah it’s a weakness, but if you have to own an acre and a half of Cray [supercomputers] to exploit it.…” If it was NOBUS, he said, officials might “let it ride” and take advantage of the vulnerability for a while, at the same time knowing full well “that the longer this goes, the more other people might actually be able to exploit it.”38

  But given the state of computer security today, and the amount of hammering the United States is taking from cyberattacks, Hayden said he was prepared to acknowledge that it might be time to reevaluate this process.

  “If the habits of an agency that were built up in a pre-digital, analog age … are the habits of an agency [that is] culturally tilted a little too much toward the offense in a world in which everybody now is vulnerable,” he said, then the government might want to reassess.

  In a report issued by a surveillance reform board convened by the White House in the wake of the Edward Snowden leaks, board members specifically addressed this issue and recommended that the National Security Council establish a process for reviewing the government’s use of zero days. “US policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on US Government and other networks,” the review board wrote, noting that only “in rare instances, US policy may briefly authorize using a Zero Day for high priority intelligence collection, following senior, interagency review involving all appropriate departments.”39 In almost all instances, they wrote, it is “in the national interest to eliminate software vulnerabilities rather than to use them for US intelligence collection.” The group also recommended that cyber operations conducted by the US Cyber Command and NSA be reviewed by Congress in the same way the CIA’s covert operations are reviewed to provide more accountability and oversight.

  Richard Clarke, former cybersecurity czar under the Bush administration and a member of the panel, later explained the rationale for highlighting the use of zero days in their report. “If the US government finds a zero-day vulnerability, its first obligation is to tell the American people so that they can patch it, not to run off [and use it] to break into the Beijing telephone system,” he said at a security conference. “The first obligation of government is to defend.”40

  In a speech addressing the review board’s report, President Obama ignored both of the panel’s recommendations for handling zero days and for conducting oversight. But during a confirmation hearing for Vice Adm. Michael Rogers in March 2014 to replace the retiring General Alexander as head of the NSA and US Cyber Command, Rogers told a Senate committee that the spy agency already had a mature equities process for handling zero-day vulnerabilities discovered in commercial products and systems and was in the process of working with the White House to develop a new interagency process for dealing with these vulnerabilities. He said it was NSA policy to fully document each vulnerability, to determine options for mitigating it, and to produce a proposal for how to disclose it.41 In dealing with zero days, he said, it was important that the “balance must be tipped toward mitigating any serious risks posed to the US and allied networks.” And in cases where the NSA opts to exploit a zero day rather than disclose it, he said the agency attempts to find other ways to mitigate the risks to US systems by working with DHS and other agencies.

  A month later, news reports indicated that President Obama had quietly issued a new government policy on zero-day vulnerabilities in the wake of the Snowden revelations and the review board’s report.42 Under the new policy, any time the NSA discovers a major flaw in software, it must disclose the vulnerability to vendors and others so the flaw can be patched. But the policy falls far short of what the review board had recommended and contains loopholes.43 It applies only to flaws discovered by the NSA, without mentioning ones found by government contractors, and any flaw that has “a clear national security or law enforcement” use can still be kept secret by the government and exploited. The review board had said exploits should be used only on a temporary basis and only for “high priority intelligence collection” before being disclosed. Obama’s policy, however, gives the government leeway to remain silent about any number of critical flaws as long as it can justify their use. There is also no mention in the policy of what the government plans to do with zero-day vulnerabilities and exploits already in its arsenal of digital weapons.

  ONE ISSUE EVEN the review board didn’t address, however, was the implications of subverting trust in digital certificates and the Windows Update system to further offensive goals, as Stuxnet and Flame did.

  The ACLU’s Christopher Soghoian has likened the Windows Update hijack to the CIA subverting the trusted immunization system to kill Osama bin Laden. In that case, the spy agency reportedly recruited a doctor in Pakistan to distribute immunization shots to residents in a certain neighborhood so the doctor could surreptitiously collect DNA samples from people living in a walled compound where bin Laden was believed to reside.

  In a similar way, the Windows Update hijack, and other attacks like it, undermine trusted systems and have the potential to create a crisis of confidence that could lead users to reject systems meant to protect them.

  “Automatic security updates are a good thing. They keep us safe. They keep everyone safe,” Soghoian told attendees at a conference after Flame’s discovery.44 “Whatever the short-term advantage of hijacking the Windows Update process, it simply isn’t worth it.”
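
  To make Soghoian’s point concrete: the safety of automatic updates rests on the client verifying, before installing anything, that each package was signed by a key it already trusts. The sketch below is a generic, minimal illustration of that check in Python using the third-party cryptography library—it is not Microsoft’s actual update protocol, and the function name and signing scheme (RSA with SHA-256 over the raw package bytes) are assumptions made for the example.

```python
# A generic sketch of pinned-key update verification -- NOT the real
# Windows Update protocol. Assumes the publisher signs the package
# bytes with RSA/SHA-256 and the client ships with the public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_update(package: bytes, signature: bytes, pinned_pub_pem: bytes) -> bool:
    """Accept an update only if the pinned publisher key signed it."""
    pub_key = serialization.load_pem_public_key(pinned_pub_pem)
    try:
        pub_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

  The whole scheme collapses if an attacker can produce a signature the trusted key appears to vouch for—which is exactly what Flame’s forged Microsoft certificate achieved.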

  But Hayden says that sometimes undermining a trusted system is worth it. He says he would have made the same decision CIA director Leon Panetta made to subvert the immunization system to locate bin Laden. “What I’m telling you is, that [kind of decision-making] happened all the time,” he says. Though he acknowledges that “[sometimes] we can get it wrong.”45

  If the United States was responsible for the Windows Update hijack in Flame, as reports indicate, there are questions about whether the hijack should have required some kind of notification and consent from Microsoft before it was done. US intelligence agencies can’t do things that might put US businesses at risk unless they have high-level legal authorities sign off on the operation and the company consents. They can’t, for example, make IBM an unwitting CIA accomplice by having an agent pose as an IBM employee without informing someone at the company who has fiduciary responsibilities. “The CIA can do it,” says Catherine Lotrionte, a law professor at Georgetown University and a former attorney in the CIA’s Office of General Counsel, “but [the agency has] to notify the CEO, because he or she has fiduciary duties owed to the [company’s] board.”46

  If the use of Microsoft’s digitally signed certificate was deemed an “operational use” of a US company—because it involved using a legitimate Microsoft credential to pass off a rogue file as a legitimate Microsoft file—then Microsoft might have needed to be put on notice. “It depends what is operational use in the technical world,” Lotrionte says. “We know what it looks like when it’s a human—[but] that technical business, that’s a hard one.”

  When the malware was first exposed, some researchers wondered whether Microsoft officials might have known about the Windows Update attack beforehand; but others noted that if Microsoft had approved the operation, the attackers wouldn’t have needed to go to the trouble of performing an MD5 hash collision to obtain the certificate—unless the MD5 hash gave Microsoft plausible deniability of cooperation.
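
  The collision attack matters because a digital signature is computed over a hash of a document rather than the document itself: if two different files share an MD5 digest, a signature issued for one is equally valid for the other. The following minimal Python sketch shows that check; the file names are hypothetical stand-ins for a benign signing request and the rogue certificate derived from it.

```python
import hashlib


def md5_hex(path: str) -> str:
    """Compute a file's MD5 digest as a hex string, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical file names, for illustration only. If the two digests
# match, a signature computed over the first file's MD5 hash verifies
# the second file as well -- the signer vouches for bytes it never saw.
if md5_hex("benign_request.bin") == md5_hex("rogue_cert.bin"):
    print("MD5 collision: one signature covers both files")
```

  Cryptographers who analyzed Flame concluded its authors used a previously unseen variant of a chosen-prefix collision to forge their certificate, but the underlying weakness is the same: MD5 can no longer bind a signature to a single document.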

  “The question is, would Microsoft have allowed this?” Lotrionte asks. “That’s what would concern me. The intelligence community will try everything, and I often wonder why companies put themselves at risk. I’m thinking if it was operational use and if they were put on notice, that’s interesting.”

  Sources knowledgeable about the situation say that Microsoft was not notified and did not provide permission for the operation. “If that happened, it would be the end of the company,” one said. “That’s a gamble nobody [at the company] would take.” He called government subversion of Microsoft’s certification process “irresponsible” and “beyond shocking.”

  “It’s very tricky waters we’ve sailed into,” he said. “Guys who do this type of thing are going to create challenges for the private sector that I just don’t think they’ve thought about.”

  But hijacking the trusted Microsoft system didn’t just undermine the relationship Microsoft had with its customers; it also contradicted the government’s stated commitment to strengthening computer security in the United States.

  In 2011, the White House published its International Strategy for Cyberspace, a comprehensive document laying out the president’s vision for the internet, which emphasized the government’s responsibility to help make networks and systems more secure and resilient. It aimed to do this in part by establishing responsible norms of conduct and creating a system for sharing vulnerability information between public and private sectors to shore up systems. But Jason Healey says the government’s actions call its sincerity into question.

  “If you come out with a policy that subverts Microsoft certificates, subverts Windows Updates to spread malware, it’s difficult to get yourself to a position where cyberspace is safer, more secure and resilient,” he says. “In some ways I feel like the Fort Meade crowd are the Israeli settlers of cyberspace—it doesn’t matter what the official policy is, they can go out and they can grab these hills, and they’re changing the facts on the ground.… If we’re ever going to get defense better than offense, some things should be more sacrosanct than others.…[But] if we have a norm that it’s OK to go after these things, if we’re creating this crisis of confidence … that’s just going to bounce back at us.”

  Healey says a cavalier approach to offensive operations that erodes security and trust in critical systems creates the potential for the information highway to become dense with street skirmishes and guerrilla warfare. “We can think about attacks getting not just better, but way better. Where cyberspace isn’t just Wild West, it’s Somalia.”

  Not everyone would agree with Healey and Soghoian that some systems should be off-limits. There are parallels in the analog world, for example, where the CIA exploits vulnerabilities in door locks, safes, and building security systems to gain access and collect intelligence. No one has ever suggested that the CIA disclose these vulnerabilities to vendors so the flaws can be fixed.

  But without lawmakers or an independent body asking the right questions to protect the long-term interests of security and trust on the internet, discussions about the nation’s offensive operations occur only among insiders whose interests lie in advancing capabilities, not in curbing them, and in constantly pushing the limits of what is possible. “It’s all people that have high-level security clearances [who are making these decisions], and there are probably few people [among them] that have ever worked a day in the real private sector where they had to really defend America’s critical infrastructure,” Healey says. “So it’s very easy for them to make these decisions to keep going farther and farther … because the government accrues all the benefit. If we use a zero-day for Flame, the government gets the benefit of that. It’s the private sector that’s going to get the counterattacks and that’s going to suffer from the norms the US is now creating that says it’s OK to attack.”

  If the White House and Capitol Hill aren’t concerned about how the government’s actions undermine the security of computer systems, they might be concerned about another consequence of the government’s offensive actions. As Stephen Cobb, a senior security researcher with security firm ESET, noted, “When our own government adds to the malware threat it adds to an erosion of trust that undermines the digital economy.”47

  BECAUSE THE GOVERNMENT’S cyber operations are so heavily classified, it’s not clear what kind of oversight—by the military or by lawmakers—currently occurs to prevent mishaps, or what kinds of investigations, if any, are conducted after mishaps occur.

  Hayden says the oversight is extensive. “When I was in government, cyberweapons were so over-watched, it was my view it would be a miracle if we ever used one.… It was actually an impediment getting in the way of the appropriate and proper use of a new class of weapons, it was so hard to get consensus.”

  But in 2009, long after Stuxnet had already been launched against systems in Iran, the National Academy of Sciences wrote that the “policy and legal framework for guiding and regulating US cyberattack capabilities was ill-formed, undeveloped, and highly uncertain.”48 Despite a decade of cyberoffensive planning and activity, little had been resolved regarding the rules of engagement for digital warfare since the first task force had been created in 1998.

  The Pentagon and White House finally took steps to address this in 2011—more than three years after Stuxnet was first launched—when the Defense Department reportedly compiled a classified list of all the cyberweapons and tools at its disposal and began to establish a long-overdue framework for how and when they could be used.49 The military regularly compiled a list of approved conventional weapons, but this was the first time cyberweapons were included on the list, a senior military official told the Washington Post, calling it the most significant development in military cyber doctrine in years.

  Then in 2012, the president signed a secret directive establishing some policies for computer network attacks, the details of which we know about only because Edward Snowden leaked the classified document.50 Under the directive, the use of a cyberweapon outside a declaration of war requires presidential approval, but in times of war, military leaders have advance approval to take quick action at their discretion. Digital attacks have to be proportional to the threat, as well as limit collateral damage and avoid civilian casualties—parameters that still leave the military a lot of discretion.51 Any digital operation that could disrupt, destroy, or manipulate computers or is “reasonably likely to result in significant consequences” also requires presidential approval. Significant consequences include loss of life, damage to property, and serious economic impact, as well as possible retaliation against the United States or adverse effects on foreign policy.

  Presidential authorization is also required to plant a logic bomb in a foreign system or a beacon marking it for later attack. But it is not needed for espionage operations conducted for the sake of simply collecting data or mapping a network, unless the operation involves a worm or other malware that could spread. Notably, before taking action, the military has to weigh the possible effects an operation might have on the stability and security of the internet, and whether it would establish unwelcome norms of international behavior. Though some might argue that Stuxnet and Flame had already violated this guideline and established unwelcome norms of behavior, Herbert Lin, a cybersecurity expert with the National Research Council, points out that all the directive says is that military leaders have to ask questions about whether an operation might establish unwelcome norms, not that they can’t proceed with it anyway. “Establishing an undesirable norm may in fact have been a price they were willing to pay to set back the Iranian nuclear program,” he says of Stuxnet and Flame.52

 
