
Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  Centrifuges at Natanz each had three valves that controlled the movement of gas in and out of them, plus auxiliary valves that controlled the movement of gas in and out of the cascade and between rows of centrifuges in a cascade. Albright and his staff ran through various scenarios to determine what would happen if certain valves were opened or closed with malicious intent for extended periods of time, and in each scenario the outcome was likely damaged or destroyed centrifuges.

  It was clear to Albright that they had finally found the answer to the puzzling numbers they had seen in the IAEA reports. In statements made to the press, Ahmadinejad had insisted that the damage done to centrifuges by the virus sent by the West was limited. But to Albright, the numbers that appeared in IAEA reports around the time that Iran said the virus had struck appeared to indicate that at least 1,000 centrifuges might have been damaged or replaced during that period.

  Albright published a paper laying out his conclusions, which appeared to resolve the Natanz question once and for all. Then, shortly after he did, the New York Times came out with a story that seemed to resolve Stuxnet’s most enduring mystery—who had created and launched it. Its findings surprised no one. The paper reported that Stuxnet was a joint operation between Israel and the United States, with a little bit of assistance, witting or otherwise, from the Germans and the British.18

  According to the story, which relied on anonymous sources, the worm had been written by US and Israeli coders and tested at Israel’s Dimona complex in the Negev Desert—the site where Israel had developed its own illicit nuclear weapons program in the 1960s. Dimona was enlisted to set up a test-bed of Siemens controllers and centrifuges identical to the IR-1s at Natanz to measure the effectiveness of the worm at destroying the spinning devices. But a US lab also played a role in the tests. In 2004, the Oak Ridge National Laboratory in Tennessee had obtained some P-1 centrifuges, the type that Iran’s IR-1s were modeled on, and the British, who were partners in the Urenco consortium that had created the original centrifuge designs, may have played a role as well. When testing was completed, the United States and Israel worked together to target the machines in Iran.

  When asked about the role the United States might have played in Stuxnet, Gary Samore, Obama’s chief adviser on weapons of mass destruction and arms control, simply smiled at a Times reporter and said, “I’m glad to hear they are having troubles with their centrifuge machines, and the US and its allies are doing everything we can to make it more complicated.”19

  The news of US involvement in developing and releasing the digital weapon should have created a stir in Washington and in other government circles beyond. But it was largely met with silence, despite the fact that it raised a number of troubling questions—not only about the risks it created for US critical infrastructures that were vulnerable to the same kind of attack, but about the ethical and legal considerations of unleashing a destructive digital attack that was essentially an act of war. Ralph Langner had been right in signing off his original post about Stuxnet the way he did. With confirmation, albeit unofficial, that Israel and the United States were behind the attack, the world had now formally entered the age of cyberwarfare.

  * * *

  1 This and all quotes from Chien are from author interviews in 2010 and 2011.

  2 “STL” stands for Statement List programming language.

  3 Chien had no idea why Siemens wasn’t more responsive. It was possible the company didn’t consider the issue an urgent one, since only about a dozen Siemens customers reported being infected by Stuxnet. It was also possible Siemens wasn’t used to dealing with in-depth questions about its software. The Symantec researchers weren’t asking questions that could be answered easily by product reps; they were asking fundamental engineering questions about how the Siemens code worked. This required the company to track down programmers who’d worked on the Step 7 system. But it’s also possible that Siemens was relatively quiet on Stuxnet because the company didn’t want to stir up discussions about its business in Iran. The company had recently found itself in hot water after a shipment of its controllers was seized in Dubai on its way to Iran for the uranium enrichment program. Another shipment of Siemens turbo processors was intercepted in Hamburg by export authorities as it was on its way to Iran. Both of these shipments violated European Union export controls prohibiting the sale of dual-use equipment to Iran without a permit. Siemens claimed it didn’t know the shipments were headed to Iran, but the incidents eventually forced the company’s CEO to announce in January 2010 that Siemens would not initiate any new business with Iran after mid-2010. When Stuxnet was discovered in Iran a few months later, Siemens’s relative silence about the code may have been in part an effort not to stir up a discussion about how its controllers got to be at the uranium enrichment plant in the first place. There were Siemens workers who urged the company to take a more active role in examining Stuxnet, but they were silenced. Siemens in effect wanted the issue to go away and had hoped that Symantec and other researchers would give up.

  4 Because Iran had been the victim of sabotage in 2006 when parts purchased from Turkey for its nuclear program were reportedly sabotaged (see this page), Iranian officials may have decided they needed to manufacture their own frequency converters to avoid saboteurs who were targeting the supply chain and manipulating ones they bought abroad.

  5 See this page.

  6 Eric Chien, “Stuxnet: A Breakthrough,” Symantec blog, November 12, 2010, available at symantec.com/connect/blogs/stuxnet-breakthrough.

  7 “Iranian Nuclear Scientist Killed in Motorbike Attack,” BBC, November 29, 2010, available at bbc.co.uk/news/world-middle-east-11860928.

  8 William Yong and Robert F. Worth, “Bombings Hit Atomic Experts in Iran Streets,” New York Times, November 29, 2010.

  9 Ibid.

  10 Ibid.

  11 Dieter Bednarz and Ronen Bergman, “Israel’s Shadowy War on Iran: Mossad Zeros in on Tehran’s Nuclear Program,” Spiegel Online, January 17, 2011, available at spiegel.de/international/world/israel-s-shadowy-war-on-iran-mossad-zeros-in-on-tehran-s-nuclear-program-a-739883.html.

  12 “Iran’s Chief Nuclear Negotiator: ‘We Have to Be Constantly on Guard,’” Der Spiegel, January 18, 2011.

  13 Shahriari and Abbasi were not the first Iranian scientists targeted. In 2007, Ardeshire Hassanpour, a nuclear physicist working at the uranium conversion plant at Esfahan, died under mysterious circumstances, though his death was reported as an industrial accident. Then, ten months before Shahriari’s death, a colleague of his, Massoud Alimohammadi, was killed in a car bombing attack. Iran accused the Mossad of masterminding the attack on Alimohammadi, but questions arose later when news reports revealed he was not a nuclear scientist at all but a quantum field theorist. In December 2010, a twenty-six-year-old kickboxer named Majid Jamali Fashi was arrested for the crime and later told a bizarre story on Iranian TV of having been recruited and trained by the Mossad after visiting Turkey in 2007. He said he was paid $30,000 up front for the assassination and promised $20,000 more after the attack. Iranian news agencies reported that Fashi was executed by hanging in May 2012. In a 2014 interview, Alimohammadi’s widow said that her husband had indeed been secretly working on Iran’s nuclear program. See Scott Peterson, “Covert War Against Iran’s Nuclear Scientists: A Widow Remembers,” Christian Science Monitor, July 17, 2014.

  14 As a further intimidation tactic, an Iranian official revealed in a 2014 interview that the Mossad had once ordered a bouquet of flowers to be sent from an Iranian florist to the family of an Iranian nuclear engineer with a card expressing condolences over his death. The engineer was still alive and well, however. The spy agency, he said, also created videos of fake Iranian news broadcasts showing the images of murdered Iranian scientists and sent the videos to the still-living scientists as a warning. See “How West Infiltrated Iran’s Nuclear Program, Ex-Top Nuclear Official Explains,” Iran’s View, March 28, 2014, www.iransview.com/west-infiltrated-irans-nuclear-program-ex-top-nuclear-official-explains/1451.

  15 Yong and Worth, “Bombings Hit Atomic Experts in Iran Streets.”

  16 Dagan was reportedly pushed out by Prime Minister Netanyahu and Defense Minister Ehud Barak because he opposed an air strike against Iran.

  17 Yong and Worth, “Bombings Hit Atomic Experts in Iran Streets.”

  18 William J. Broad, John Markoff, and David E. Sanger, “Israeli Test on Worm Called Crucial in Iran Nuclear Delay,” New York Times, January 15, 2011.

  19 Ibid.

  CHAPTER 14

  SON OF STUXNET

  As spring arrived in 2011, the story of Stuxnet seemed to be winding down. Symantec had resolved the mystery of the devices the digital weapon attacked, Albright had made the final connection between Stuxnet and the centrifuges at Natanz, and although the US government still hadn’t made a formal admission of responsibility for the attack, the New York Times had confirmed what everyone suspected—that the United States and Israel were behind it.

  Symantec, for its part, was ready to move on. The researchers had spent half a year tearing apart the code and had produced a seventy-page dossier of all their findings. They were relieved to finally be done with it. But they hadn’t put the project aside for long when startling new evidence emerged in Europe—evidence suggesting that Stuxnet was just one in an arsenal of tools the attackers had used against Iran and other targets.

  BOLDIZSÁR BENCSÁTH TOOK a bite from his sandwich and stared at his computer screen. The software he was trying to install on his machine was taking forever to load, and he still had a dozen things to do before the fall 2011 semester began at the Budapest University of Technology and Economics, where he taught computer science. Despite the long to-do list, however, he was feeling happy and relaxed. It was the first day of September, one of those perfect late-summer afternoons when the warm air and clear skies made you forget that cold autumn weather was lurking around the corner.

  Bencsáth, known to his friends as Boldi, was sitting at his desk in the university’s Laboratory of Cryptography and System Security, aka CrySyS Lab, when the telephone interrupted his lunch. It was Jóska Bartos, CEO of a company for which the lab sometimes did consulting work.1

  “Boldi, do you have time to do something for us?” Bartos asked.

  “Is this related to what we talked about before?” Bencsáth said, referring to a previous discussion they’d had about testing new services the company planned to offer customers.

  “No, something else,” Bartos said. “Can you come now? It’s important. But don’t tell anyone where you’re going.”

  Bencsáth wolfed down the rest of his lunch and told his colleagues in the lab that he had a “red alert” and had to go. “Don’t ask,” he said as he ran out the door.

  A while later, he was at Bartos’s office, where a triage team had been assembled to address the problem they wanted to discuss. “We think we’ve been hacked,” Bartos said.

  They’d found a suspicious file on a developer’s machine that had been created late at night when no one was working. The file was encrypted and compressed, so they had no idea what was inside, but they suspected it was data the attackers had copied from the machine and planned to retrieve later. A search of the company’s network turned up a few more infected machines as well. The triage team felt confident they had contained the attack but wanted Bencsáth’s help determining how the intruders had broken in and what they were after. The company had all the right protections in place—firewalls, antivirus, intrusion-detection and -prevention systems—and still the attackers got in.

  Bencsáth was a teacher, not a malware hunter, and had never done such forensic work before. At the CrySyS Lab, where he was one of four advisers working with a handful of grad students, he did academic research for the European Union and occasional hands-on consulting work for other clients, but the latter was mostly run-of-the-mill cleanup work—mopping up and restoring systems after random virus infections. He’d never investigated a targeted hack before, let alone one that was still live, and was thrilled to have the chance. The only catch was, he couldn’t tell anyone what he was doing. Bartos’s company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients.

  The triage team had taken mirror images of the infected hard drives, so they and Bencsáth spent the rest of the afternoon poring over the images in search of anything suspicious. By the end of the day, they’d found what they were looking for—a combination keystroke logger/infostealer that was designed to record passwords and other keystrokes on infected machines, as well as steal documents and take screenshots. It also catalogued any devices or systems that were connected to the machines so the attackers could build a blueprint of the company’s network architecture. The malware didn’t immediately siphon the stolen data from infected machines but instead stored it on them in a temporary file, like the one the triage team had found. The file grew fatter each time the infostealer sucked up data, until at some point the attackers would reach out to the machine and siphon the file off to a command-and-control server in India.2

  By now it was the end of the day, so Bencsáth took the mirror images and the company’s system logs with him, after they had been scrubbed of any sensitive customer data, and over the next few days scoured them for more malicious files, all the while remaining coy with his colleagues back at the lab about what he was doing. The triage team worked in parallel, and after several more days they had uncovered three additional suspicious files—including a kernel-mode driver and another driver that was found on some infected systems but not others.

  When Bencsáth examined the kernel driver, his heart quickened—it was signed with a valid digital certificate from a company in Taiwan. Wait a minute, he thought. Stuxnet used a driver that was signed with a certificate from a company in Taiwan. That one came from RealTek Semiconductor, but this certificate belonged to a different company, C-Media Electronics. The driver had been signed with the certificate in August 2009, around the same time Stuxnet had been unleashed on machines in Iran.

  Could the two attacks be related? he wondered. He mulled it over for a minute, but then dismissed it. Anyone could have stolen C-Media’s signing key and certificate, he reasoned, not just the attackers behind Stuxnet.

  Then a member of the triage team noticed something else about the driver that seemed familiar—the way it injected code into a certain process on infected machines. “I know only one other attack that does this,” he told Bencsáth. He didn’t have to say the name; Bencsáth knew he was talking about Stuxnet. But Bencsáth dismissed this connection too, since he was pretty sure the technique wasn’t unique to Stuxnet.

  Twice more over the next few days, Bencsáth and the triage team found something in the attack code that reminded them of Stuxnet. But each time they convinced themselves it was just a coincidence. There was just no way lightning would strike twice, they reasoned. Besides, there was no sign that this new attack was targeting PLCs.

  After working on the project for a week, Bencsáth began wondering if anyone else had been infected with the files, so he decided to see if he could smoke out other victims, or the attackers themselves, with a sly test. On September 8, he posted hashes for the malicious files on his personal website, boldi.phishing.hu, along with a cryptic note: “Looking for friends [or] foes of 9749d38ae9b9ddd8ab50aad679ee87ec to speak about. You know what I mean. You know why.” His site, an odd compendium of fish recipes and culinary reviews of canned fish (the domain name, phishing, was a pun on the computer security term for malicious e-mail), was the perfect cover for posting the covert message, since the only way someone would find the hashes was if they specifically did a Google search for them—either another victim who had found the same files on their machine and was searching the internet for information about them, or the attackers themselves, who might want to see if any victims had found the files and were discussing them online. If someone did visit his site in search of the hashes, Bencsáth would be able to see their IP address.

  Unfortunately, he got no nibbles on his bait, so he deleted the hashes after a few days.

  By now the fall semester had begun, and Bencsáth got busy with other things. He had classes to teach and office hours with students to keep. He also had a research paper to deliver at a conference in Dubrovnik. But through it all, the attack nagged at the back of his mind. When he returned to Budapest after the conference, he and the triage team decided to compare the code of one of the drivers they had found on their machines with one of the drivers that had been used with Stuxnet—just to settle once and for all that the two attacks weren’t related. When they loaded the two drivers into a hexadecimal (hex) editor to examine them side by side, however, they got a big surprise. The only difference between them was the digital certificates used to sign them.

  Bencsáth immediately called Bartos, the company’s CEO, and told him he needed to bring the other members of the CrySyS Lab onto the investigation. This wasn’t a simple hack anymore; it looked like it might be a nation-state attack with national-security implications. Bartos agreed, but only on condition that Bencsáth not reveal the company’s name to any of his colleagues. Aside from Bencsáth, the only people who knew the company had been hacked were the members of the local government Computer Emergency Response Team, and they had been notified only because of the nature of the company’s business.3

  Bencsáth made plans to tell his colleagues the following Monday. Over the weekend, he collected all the technical literature he could find on Stuxnet—including the lengthy dossier Symantec had prepared—and reread it to refresh his memory. When he reached the part discussing the encryption routines that Stuxnet used to conceal its code, he pulled up the encryption routines for the new attack and got another surprise. They were nearly identical. The new attack code even used one of the same decryption keys that Stuxnet used.4

 
