
Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  Critical infrastructure has always been a potential target in times of war. But civilian infrastructure in the United States has long enjoyed special protection due to the country’s geographical distance from adversaries and battlefields. That advantage is lost, however, when the battlefield is cyberspace. In a world of networked computers, every system is potentially a front line. There are “no ‘protected zones’ or ‘rear areas’; all are equally vulnerable,” Gen. Kevin Chilton, commander of the US Strategic Command, told Congress.15

  The laws of war prohibit direct attacks on hospitals and other civilian infrastructure unless deemed a necessity of war, with military leaders subject to war crimes charges should they violate this. But the protections provided by law crumble when attribution is a blur. Since a hack from a cyber army in Tehran or Beijing can be easily designed to look like a hack from Ohio, it will be difficult to distinguish between a nation-state attack launched by Iran and one launched by a group of hackers simply bent on random mayhem or civil protest. Stuxnet was sophisticated and came with all the hallmarks of a nation-state attack, but not every attack would be so distinguishable.16

  Some have argued that nation-state attacks would be easy to spot because they would occur in the midst of existing tension between nations, making the identity of the aggressor clear—such as the volley of denial-of-service attacks that disabled government websites in Georgia in 2008 in advance of a Russian invasion of South Ossetia. But even then it would be easy for a third party to exploit existing tension between two nations and launch an anonymous attack against one that appeared to come from the other in order to ignite a combustible situation.17

  In November 2013, Israel held a simulated exercise at Tel Aviv University that illustrated the difficulties of identifying an attacker, particularly when third parties enter a conflict with the intention of escalating hostilities between others. Using what were described as extreme but realistic scenarios, the war game pitted Iran and Iran-backed Hezbollah in Lebanon and Syria against Israel, and began with a series of simulated physical skirmishes against Israel that escalated into cyberattacks that threatened to pull the United States and Russia into the conflict to defend their allies.

  The simulation began with an explosion at an offshore drilling platform, with rockets lobbed over the border from Lebanon into Northern Israel and blasts in Tel Aviv, and was followed by network disruptions that paralyzed a hospital in Israel. The cyberattacks were traced to an Iranian server, but Iran denied responsibility, insisting the Israelis were trying to put the blame on it in order to generate Western support for a strike against Tehran. Then the network attacks spread to the United States, forcing Wall Street trading to halt and shutting down air traffic control at JFK Airport. The White House declared a state of emergency after two planes crash-landed and killed 700 people. This time the attacks were traced first to a server in California, but then, puzzlingly, to Israel.

  When the game ended, Israel was preparing to launch physical attacks against Hezbollah in Syria and Lebanon—over the cyberattacks attributed to them and Iran—and tensions between the United States and Israel had risen to a dangerous boil over questions about who was responsible for the cyberattacks against the United States.18 “If we hadn’t stopped when we did, the entire region could have been engulfed in flames,” said Haim Assa, the game-theory expert who designed the exercise.

  The simulation was instructive to participants on a number of levels. The United States “realized how difficult if not impossible it is to ascertain the source of attack,” retired US Army Gen. Wesley Clark, who participated in the exercise, said. And an Israeli official noted “how quickly localized cyber events can turn dangerously kinetic when leaders are ill-prepared to deal in the cyber domain.” To this end, they learned that the best defense in the digital realm is not a good offense but a good defense, because without a properly defended critical infrastructure, leaders were left with little room to maneuver in their decision making when an attack occurred. When civilian systems were struck and citizens were killed, leaders were under pressure to make quick decisions, often based on faulty and incomplete conclusions.19

  IT’S EASY TO see why militaries and governments are embracing cyberweapons. Aside from offering anonymity and a perceived reduction in collateral damage, cyberweapons are faster than missiles, with the ability to arrive at their destination in seconds, and can be tweaked on the fly to combat counterdefenses. If a zero-day vulnerability gets patched, attackers can draw from a reserve of alternative exploits—as Stuxnet’s developers did—or change and recompile code to alter its signatures and thwart detection.

  “Cyber, in my modest opinion, will soon be revealed to be the biggest revolution in warfare, more than gunpowder and the utilization of air power in the last century,” Israeli Maj. Gen. Aviv Kochavi has said.20

  But cyberweapons have limited use. If tightly configured to avoid collateral damage in the way Stuxnet was, each one can be deployed only against a small set of targets without being reengineered. And unlike a bunker-busting bomb or stealth missile, a cyberweapon can instantly become obsolete if the configuration of a target system or network changes. “I am not aware of any other weapons systems in the history of warfare that can be disabled by their targets with a click of a mouse button,” Marcus Ranum notes.21 And any time a cyberweapon gets exposed, it isn’t just that weapon that gets burned, but any other weapons that use the same novel techniques and methods it employed. “At this point, we can be sure that anyone who builds a gas centrifuge cascade is going to be a little bit more careful about their software than usual,” said Thomas Rid, a war studies scholar at King’s College, London.22

  But another problem with digital weapons is that they can be difficult to control. A good cyberweapon should operate in a predictable manner so that it has a controlled impact and produces expected results each time it’s deployed, causing little or no collateral damage. It needs precision design so that it executes only on command or automatically once it finds its target; and it should be recallable or have a self-destruct mechanism in case conditions change and a mission needs to be aborted. Andy Pennington, the former Air Force weapons system officer cited in an earlier chapter, likens an uncontrollable cyberweapon to a biological agent out of control. “If you don’t have positive control over the weapon … you don’t have a weapon, you’ve got a loose cannon. We created conventions and said we’re not going to use biological and chemical warfare weapons, because we do not have accurate targeting, we do not have access control, they’re not recallable and they’re not self-destruct-capable.”23

  Stuxnet had some controls built into it, but lacked others. It was a targeted, precision weapon that unleashed its payload only on the specific systems it was designed to attack. And it had a time-release mechanism so that it initiated its sabotage only when certain conditions on the target machines were met. But once unleashed, Stuxnet couldn’t be recalled, and it had no self-destruct mechanism—it had only an infection kill date that stopped it from spreading after a set date three years in the future. And although the earliest versions of Stuxnet had limited spreading capabilities, the March 2010 version was clearly a “loose cannon,” albeit a defused one: although it spread uncontrollably to thousands of machines that weren’t its target, it didn’t sabotage them.

  Would other digital weapons be as well designed or as lucky, though? Collateral damage in cyberspace has a longer reach than in the physical realm. A bomb dropped on a target might cause collateral damage, but it would be local. Computer networks, however, are complex mazes of interconnectivity, and a cyberweapon’s path and impact once unleashed aren’t always predictable. “We do not yet have the ability to scope collateral damage for all cyberattacks,” Jim Lewis of the Center for Strategic and International Studies has noted. “For attacks that disable networks, there could be unpredictable damage not only to the target, but also to noncombatants, neutrals or even the attacker, depending upon the interconnections of the target network or machine. This makes the political risk of unintended consequences unpredictable (an attack on a Serbian network, for example, damages NATO allies’ commercial activities) and carries with it the risk of escalating a conflict (an attack on North Korea damages services in China).”24

  DESPITE THE APPARENT march toward digital warfare that Stuxnet initiated, it’s fair to ask what the likelihood is that a catastrophic digital event will ever occur. Defense Secretary Leon Panetta has said the United States is in a “pre-9/11 moment,” with adversaries plotting and preparing for the right opportunity to launch destructive cyberattacks on its systems. But Thomas Rid has called cyberwarfare “more hype than hazard”—the “shiny new thing” that has caught the attention of militaries like a gleaming new train set opened on Christmas morning. In reality, he thinks, it will have much less impact than people imagine.25 Any future use of digital weapons will likely be as an enhancement to conventional battle, not as a replacement for it. Critics of digital doomsayers also point to the fact that no catastrophic attack has occurred to date as evidence that the warnings are overblown.

  But others argue that no passenger jets had been flown into skyscrapers, either, before 9/11. “I think to … say it’s not possible, it’s not likely, is really way too early. All sorts of things could happen over the next couple of years,” says Jason Healey, head of the Cyber Statecraft Initiative at the Atlantic Council in Washington, DC, who was an original member of the military’s first cyber taskforce. “As more systems get connected to the internet, and cyberattacks progress from simply disrupting ones and zeros to disrupting things made of concrete and steel, things will change, and the days when no one has died from a cyberattack or the effects of a cyberattack will be over.”26

  Some think the threat is overblown because most actors capable of pulling off an attack would be dissuaded by the risk of a counterstrike. In fact, some wondered after Stuxnet was discovered if it had been intentionally burned by Israel or the United States to send a message to Iran and other countries about the digital attack capabilities of these two countries. The fact that it had remained undetected for so long and was only discovered by an obscure antivirus firm in Belarus led some to believe that Stuxnet had not been discovered so much as disclosed. Gen. James Cartwright, former vice chairman of the Joint Chiefs of Staff—the man said to have played a large role in the Olympic Games operation in the United States—was in fact an advocate of making declarations about US cyber capabilities in the service of deterrence.

  “For cyber deterrence to work,” Cartwright said in 2012, “you have to believe a few things: One, that we have the intent; two, that we have the capability; and three, that we practice—and people know that we practice.”27 Cartwright has since been investigated by the Justice Department for suspicion of leaking classified information about Stuxnet to the New York Times, though as of this writing he has not been charged with any wrongdoing and has denied the allegations.

  But while deterrence of this sort might work for some nations—as long as they believe an attack could be attributed to them—irrational actors, such as rogue states and terrorist groups, aren’t deterred by the same things that deter others. “The day a terrorist group gets cyberattack capabilities, they will use them,” Jim Lewis told Congress in 2012.28

  Lewis expects that in the future, limited digital conflicts that disrupt military command-and-control systems may arise between the United States and Russia or China, but these countries likely will not attack critical infrastructure, “because of the risk of escalation.” But once countries like Iran and North Korea acquire cyberattack capabilities, a strike against civilian targets in the United States will be more likely. As US forces strike targets in their countries, they will feel “little or [no] constraint against attacking targets in ours,” he wrote in a 2010 paper.29 And threats of retaliation made by the United States to deter such attacks would have little effect on such groups since “their calculus for deciding upon an attack is based on a different perception of risks and rewards,” he noted. Likewise, as smaller countries and non-state insurgents acquire the digital means to strike distant targets, “disruptions for political purposes and even cyber attacks intended to damage or destroy could become routine,” he says. The Taliban in Afghanistan or Al-Shabaab in Somalia have little chance of launching a conventional retaliatory strike against the US homeland, but when they eventually acquire, or hire, the ability to launch effective cyberstrikes, this will change. “These strikes will be appealing to them as it creates the possibility to bring the war to the U.S. homeland,” Lewis notes. Although they may not acquire the ability to pull off massive attacks, “harassment attacks” aimed at specific targets like Washington, DC, Lewis says, will certainly be within their means, and depending on the severity of the attack or its cascading effects, essential systems and services could be lost for extended periods of time.

  With cyberweapons in the hands of others, the United States may also find itself having to recalculate the risk of blowback when planning conventional attacks, Lewis notes. In 2003, US forces invading Iraq met with little resistance, but what if Iraq had possessed cyberweapons that it launched in retaliation? “These would not have changed the outcome of the invasion but would have provided a degree of vengeance [for Iraq],” he says.30

  If there are disagreements about the likelihood of digital attacks against critical infrastructure occurring, there are also disagreements over the level of damage that such attacks can cause. Leon Panetta and others have warned about digital Pearl Harbors and cyber 9/11s that will strike fear throughout the land. But others note that the kind of digital destruction envisioned by the doomsayers isn’t as easy to pull off as it seems. Conducting a disruptive attack that has long-lasting effects “is a considerably more complex undertaking than flying an airplane into a building or setting off a truck full of explosives in a crowded street,” notes W. Earl Boebert, a former cybersecurity expert at Sandia National Laboratories, whose job in part was to research such scenarios. Networks and systems can be brought down, but they can also be brought back up relatively quickly. “Considerable planning is required to raise the probability of success to a point where a rational decision to proceed can be made,” he writes.31 Though one can argue that the 9/11 attacks required at least as much planning and coordination as a destructive cyberattack would require, a well-planned digital assault—even a physically destructive one—would likely never match the visual impact or frightening emotional effect that jets flying into the Twin Towers had.

  DESPITE THE RISKS and consequences of using digital weapons, there has been almost no public discussion about the issues raised by the government’s offensive operations. Critics have pointed out that the Obama administration has been more open about discussing the assassination of Osama bin Laden than discussing the country’s offensive cyberstrategy and operations. When questions about the rules of engagement for digital attacks were raised during the confirmation hearing for Gen. Keith Alexander to be made head of US Cyber Command in 2010, Alexander refused to address them in public and said he would only answer in a closed session.32 And although there are numerous doctrinal manuals in the public domain that cover conventional warfare, the same is not true for digital warfare. Even some who have built their careers on secrecy have noticed the extreme secrecy around this issue. “This may come as a surprise, given my background at the NSA and CIA and so on, but I think that this information is horribly over-classified,” former CIA and NSA director Gen. Michael Hayden has said. “The roots to American cyberpower are in the American intelligence community, and we frankly are quite accustomed to working in a world that’s classified. I’m afraid that that culture has bled over into how we treat all cyber questions.”33

  Without more transparency, without the willingness to engage in debate about offensive operations, there is little opportunity for parties who don’t have a direct interest in perpetuating operations to gauge their success, failure, and risks.

  “Stuxnet let the genie out of the lamp in terms of how you could do this kind of attack. You can now target all kinds of other devices,” says one former government worker. “Where does it end? It doesn’t seem like there’s any oversight of these programs. Sadly, the scientists are not pulling back the reins. They’re excited that someone is giving them money to do this research. I don’t think I ever saw anyone question what was being done. I don’t think there was a lot of consciousness about it.”

  There have been no public discussions about the repercussions of the digital arms race launched by Stuxnet, or about the consequences of releasing weapons that can be unpredictable and can be turned back against the United States.

  In a report to Congress in 2011, the intelligence community noted that the defenders of computer networks in the United States are perpetually outgunned by attackers and can’t keep pace with the changing tactics they deploy. Exploits and exploitation methods evolve too quickly for detection methods and countermeasures to keep up, a problem that will only grow worse as nations develop and deploy increasingly sophisticated attack methods. Until now, the evolution of computer attacks has been driven by innovations in the criminal underground, but this will change as nation-state attacks like Stuxnet and Flame begin to drive future advancements. Instead of government hackers learning novel techniques from the underground, the underground will learn from governments. And as countermeasures for digital weapons are developed, the need to produce even more advanced weapons will grow, pushing further innovations in weaponry. One US official has referred to Stuxnet as a first-generation weapon, on par with “Edison’s initial light bulbs, or the Apple II,” suggesting that more sophisticated designs have already replaced it.34

 
