Lights Out


by Ted Koppel


  It is difficult to focus the attention of power industry executives on speculative threats when there are so many existing problems to deal with. It can be almost impossible to convince investors or boards of directors to siphon off limited profits to prevent a crisis that, in their opinion, may never happen. However, industry leaders are at least pragmatic enough to consider the possibility of a successful cyberattack against the grid. Scott Aaronson acknowledged that “the mandate for perfect security implemented by imperfect network operators is a recipe for disaster,” but he was not prepared to concede that a cyberattack could take down an entire electric grid. Industry representatives won’t predict it. They don’t expect it, they insist. But “I have been conditioned,” said Aaronson, “to say nothing is impossible.”

  Doubts over the security of the power grid cover a broad range, from that cautious “nothing is impossible” to the full-throated warning of “not if, but when.” Uncertainty has given rise to a vast cybersecurity industry, with more than its share of distinguished former government employees. It is a long-established custom in Washington for men and women who have devoted some of their best years to the military, to service in one or more administrations, to Congress, or to an intelligence agency to capitalize on their experience once they leave public service; these may be the people Aaronson had in mind when he followed up his “nothing is impossible” comment with this additional observation: “But I am suggesting that [taking down a grid] is not nearly as simple as I think some people, who may have services they’d like to sell, would have people believe.”

  Among the experts I’ve consulted who now provide advice on cybersecurity in the commercial marketplace are two former secretaries of homeland security, two former White House advisors on cybersecurity, one former director of operations at Cyber Command, and one former director of the National Security Agency. None of these people has suggested that there is anything simple about sabotaging a grid; they say only that the capability exists, an opinion shared by specialists who have no financial interest in the field. When I asked Janet Napolitano, who is now president of the University of California, what she thinks the chances are that a nation-state or independent actor could knock out one of our power grids, she replied, “Very high—80 percent, 90 percent. You know, very, very high.”

  Still, the implication that the threat of a cyberattack against the grid has been exaggerated by people with “services they’d like to sell” was enough to make Richard Clarke bristle: “I don’t sell to the electric power companies.” As for the companies that do engage his services, “[they] call me after they’ve been breached. We get called up by people who have already had a problem, and that’s every industry. It’s big companies. It’s banks that have spent hundreds of millions of dollars a year on it; still can’t get it done. The idea that people like Keith [Alexander] and I are conjuring up cyberattacks which don’t exist is laughable. I mean, every company that you can talk to has been breached. Just ask them.”

  I have tried, wherever possible, to keep all sources in this book on the record. It was difficult, however, to convince a top executive from one of the larger electric companies to publicly discuss the likelihood of a successful cyberattack on a power grid under any circumstances. When one finally did, I agreed to his condition that he be identified only as a senior executive for one of the nation’s major power companies; let me add only that he speaks for the industry with considerable authority. He, too, was willing to concede that hackers can get into the system. “But,” he added, “I almost guarantee you they’re not going to be able to create widespread damage. In order to create widespread damage—I mean, when you say ‘take down the grid,’ you know, Long Island could go out. It could go black. But the rest of the United States wouldn’t.”

  As noted in an earlier chapter, local distributors of electricity, even those in major urban areas, are not governed by any national regulatory standards, leading to the bizarre consequence that distribution of electricity—even to national security establishments such as the NSA—is regulated only on a state-by-state basis. With no federal standards and fewer resources than the investor-owned corporations that generate and transmit power (the so-called bulk electric systems), those local companies tend to be more vulnerable. If enemies wanted to launch a truly devastating cyberattack, couldn’t they go after the local distribution system?

  It’s a question I posed to Jim Fama, vice president of energy delivery at Edison Electric. Fama acknowledged that the distribution end has become more vulnerable as the entire system—from generating to distributing electricity—is increasingly digitized, but he argued that it was an unlikely target for a massive attack. “If you wanted to attack control mechanisms, utility control mechanisms,” he said, “you would get a much bigger bang for the buck if you were to go after the bulk power system.”

  But that’s precisely the point that cybersecurity experts such as Richard Clarke are making. Every time that Homeland Security or the Federal Energy Regulatory Commission has hired a forensic expert, he told me, they have found connections linking publicly accessible webpages to a given power company’s administrative network, and through this to its operational network. “A lot of our companies,” Fama insisted, “have isolated their SCADA systems and their EMS [energy management systems]. They’re not connected to the billing and the customer information data. They’re segregated.” Note that Jim Fama’s claim that operational networks are fully isolated applies not to the entire industry, or even to his entire trade association, but only to “a lot of our companies.”

  The senior power company executive I spoke with heads one of the largest and, he insists, best-protected electric power companies in the country, if not the world. In our initial conversation he steered me away from the vulnerability of those SCADA systems. Where security matters most in the generation and transmission of power, he said—that is, in the bulk electric system—companies have upgraded their EMS. I mentioned that a number of top intelligence specialists believe that the Russians, the Chinese, and probably the Iranians are already inside the grid (as we’ll examine in a later chapter). The executive rejected this notion out of hand. He did not believe, he told me, that anyone had cracked the EMS.

  “If they have,” he argued, “they would have done something. Especially Russia.”

  “Why?”

  “Because they would want to disrupt commerce. They would want to disrupt American life. Do I think they’re inside EMS? No, I don’t. We go through extraordinary security there. We have air[-gap] protection. We regularly wipe servers.”

  Did I remember those Russian matryoshka or nesting dolls? he wondered. He was analogizing to a system in which no two operational parts are physically connected—while they sit one inside the other, they never touch. “One of the most valuable defenses is air. And that means that in order to go from A to B, there is no network. There is no physical connection to get through, and you have absolute security.”

  That, I pointed out, is just what the Iranian nuclear technicians believed as they watched television monitors showing normal operations, while in reality thousands of their centrifuges were spinning out of control. Air-gapping works unless and until an employee infects the system by bringing in a personal device—a thumb drive, say—from the outside.

  “Oh, right. That’s scary stuff,” the executive conceded. He assured me, though, that there are “a lot” of standard tests, “both manual and automatic,” to ensure that actual operations mirror system readouts. “The nature of the grid is such that, number one, its interconnectedness gives it more resiliency, and number two, we have automatic controls and protection regimes so that if problems develop we can actually isolate the problem.”

  Here was a top executive from one of the nation’s largest power companies arguing that the grid’s interconnectedness gives it more resiliency, even while government officials such as Craig Fugate, the FEMA administrator, had taken the diametrically opposite position. Fugate, remember, said that the electric power industry has sacrificed resiliency in the name of efficiency.

  My conversation with the senior power company executive ended on a perfectly amicable note, but with minds essentially unchanged. “I don’t mean to convey 100 percent confidence,” he told me. “I’m just giving you what I believe.” It was a bit of a surprise, then, when a few weeks after our conversation I received a call from that executive. Following our talk, he had chaired a meeting of the Electric Sector Coordinating Council. That’s the group of power company CEOs who meet periodically with senior White House officials. The administration, he said, was particularly concerned about the vulnerability of the SCADA systems. “And I said, ‘Yeah, I get SCADA’—just the same thing I said to you. ‘I’d be much more concerned about EMS than I would SCADA,’ and we got into a discussion on that.” EMS, after all, is the system that directly controls the generation and transmission of electricity. “The meeting breaks up, and one guy, who I respect, came over to me and said, ‘You know, SCADA could be a problem. We shouldn’t dismiss SCADA.’ ”

  Then the administration official asked the executive, “You remember Aurora?” He was referring to an incident in 2007 when the Idaho National Laboratory conducted something called the Aurora Generator Test. Staffers acting as hackers caused the circuit breakers on a giant diesel generator to open and close rapidly out of phase, in a process called “pinging,” until the generator actually tore itself apart.

  “He was afraid that if the bad guys got in [to a SCADA system] and did some sort of mass pinging attack, that you could create some problems, even before you knew you had a problem, and that that could provide imbalances in the big [EMS] system.” It was an important proof of how much physical damage can be done through network hacking, and it raised the specter of how much greater the destruction could be if the network was penetrated more widely. For a cyberattack to inflict lasting and widespread damage, “it would have to be something massive,” the senior power company executive maintained, “hitting lots of critical infrastructure somehow.” But a high-ranking official—“somebody that I respect”—had just impressed upon him the reality of SCADA’s vulnerability, so much so that he felt he “owed it” to me to acknowledge the seriousness of the threat. It was essentially the same thing that experts such as Richard Clarke had told me, but this was coming from the CEO of one of the nation’s largest electric power companies.

  It was an honorable admission to make. It also highlighted the fact that there is anything but unanimity within the industry on the issue of risk assessment.

  6

  What Are the Odds?

  The industry is equipped to lose as much as two hundred billion dollars. It would not be a great day, but life would go on.

  — AJIT JAIN, BERKSHIRE HATHAWAY

  More than fifty years as a reporter, all too often as a war correspondent, have ingrained in me a healthy respect for the law of unintended consequences: When the United States first sent troops into Vietnam in the early 1960s, or Afghanistan following 9/11, or Iraq in 2003, there was little expectation that those “limited incursions” would expand into such life- and treasure-sapping wars. Each conflict conjures its own images of unforeseen devastation. There is now ample evidence that the law of unintended consequences applies as remorselessly in the realm of cyberspace as it does anywhere else.

  We live in a world of fine print and mellow-voiced warnings gently bathed in soft music and issued over soothing, totally unrelated visual images. When the Internet first nuzzled its way into our lives, we came to know it as “the Web”—an evocative concept, carrying the promise of a free exchange of ideas without the encumbrances of time or space. But a web can also trap, limit, and smother. It can be liberating and dangerous at one and the same time. And, of course, it has evolved.

  The Internet was never designed to help criminals steal credit card data from the files of 110 million Target customers. But in 2014 it was used to do just that.

  It wasn’t designed to unleash a computer virus on Aramco’s corporate PCs. But that’s how it was employed back in August 2012, erasing spreadsheets and emails throughout the enormous Saudi company and replacing them, as then Defense Secretary Leon Panetta told an audience of security executives, with “an image of a burning American flag.” The attack, Panetta reported, involved a complex virus called Shamoon, which rendered more than thirty thousand computers wholly useless. “In effect,” one U.S. Army cyber specialist told me, “it turned those computers into thirty thousand bricks.” It wasn’t entirely clear at the time, but the National Security Agency has since concluded that the attack was Iran’s answer to Stuxnet.

  The architects of the Internet are not likely to have envisioned cyber criminals gaining surreptitious entry to the private files of a major bank, but JPMorgan’s chief executive, Jamie Dimon, told shareholders in his annual letter of 2014 that “cybersecurity attacks are becoming increasingly complex and more dangerous.” The bank, he noted, would be spending $250 million on cybersecurity that year.

  In the spring of 2013 Dimon told me that JPMorgan had already spent more than $600 million on cybersecurity. Notwithstanding that vast and ongoing expenditure, Dimon’s warning proved all too prescient: in the summer of 2014 hackers compromised eighty-three million JPMorgan files.

  These and many other businesses have concluded that the advantages of the Internet are nevertheless worth whatever vulnerabilities may emerge as by-products. The electric power industry has made the same calculation. However dangerous the consequences of conducting our businesses and operating our infrastructure on the Internet, we are simply incapable of functioning without it. Quoted in a comprehensive Washington Post article titled “A Flaw in the Design,” computer-science pioneer Peter G. Neumann neatly summarizes the problem of security on the Internet. “People always say we can add it on later. [But] you can’t add security to something that wasn’t designed to be secure.”

  There is precedent. It is hardly the first time that society has embraced a new technology without understanding its shortcomings. More than a century ago, who could have imagined an America teeming with more than 250 million cars and trucks, as it is today? How could our forebears have anticipated traffic jams and air pollution? More to the point, perhaps, would our national leaders, state governors, city mayors, and town councils have permitted the development of the car if they had known that by the early 1970s more than fifty thousand of us would die on our roads in a single year? By the time that statistic became a reality, the bargain had been struck, sealed, and so interwoven into our culture, our daily lives, that no one would seriously propose eliminating the automobile. What we have since done is to reduce as best we can its potential for harm. It took a very long time before the industry acceded to the need for reduced speed limits, seat belts, air bags, and crumple zones. It took a coalition of enraged mothers whose children had died at the hands of drunk drivers to overcome the efforts of the alcohol industry and increase the penalties for driving while under the influence. We now endure a more modest twenty-five thousand highway fatalities annually. Still, we, as a nation, are prepared to pay a terrible, ongoing price in destroyed and damaged lives so that the rest of us may enjoy the advantages of progress.

  There are isolated reports, in the wake of Edward Snowden’s revelations about the global intrusiveness of the National Security Agency, that some parliamentarians in Germany and a branch of Russian intelligence are considering a return to typewriters and paper files. I read such stories wistfully but without any expectation that the movement will spread. The world is locked into a state of cyber dependency.

  According to one military cyber specialist, “We think we’re in total control of the computer, all the time. It does what I tell it and it’s controlling all my pay, billing accounts; it’s controlling hot water flow or the fuel flow. We have no visualization of what harm could be done by someone who has intruded into that system.” He also pointed to a generational “security consciousness gap,” noting that our children take the safety of the digital landscape even more for granted. “They expect immediate access to information, and they don’t fathom that there’s anything risky about it. So as the adversary’s threat is going up, our security consciousness is going down.”

  The Internet, as we now know it, carries a trove of inherent dangers. Those dangers are just beginning to reveal themselves, and their scale and scope may someday call into question our easy acceptance of its benefits.

  More than a hundred years ago, long before the power of the Internet gave it the force of commonplace reality, Mark Twain commented on the uneven nature of any competition between rumor and fact, gossip and reality, observing that “a lie can travel halfway around the world while the truth is putting on its shoes.” Twain was speaking figuratively, but there are today numerous online sites displaying what are called “digital attack maps” on which you can, literally, watch “lies” traveling halfway around the world in a microsecond. These maps show, in real time, what are known as distributed denial-of-service (DDoS) attacks. Perhaps because the DDoS is the easiest form of cyberattack to mount, it is also the most pervasive. Here’s how Keith Alexander, former NSA director, explained a DDoS attack to me: “You remember your kids in the backseat yelling so you and your wife couldn’t talk? That’s a distributed denial-of-service attack. That can be done with them not knowing much about your facility, only throwing packets of data at you. Overloading the system so that you can’t conduct business. If you’re a stockbroker or a company that makes its living on the network, that’s a huge problem.”

 
