Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  10 Its weakness has been known since at least 2004.

  11 Generally a certificate is generated and signed within seconds after a request is submitted to Microsoft’s servers. The attackers could have gauged how long it took Microsoft to issue signed certificates by submitting a number of certificate requests to the company and watching for a pattern. But one former Microsoft employee suggested to me that the attackers could also have been sitting on Microsoft’s internal network, watching requests come in to see exactly how long it took them to arrive from outside and be processed. There’s no evidence this is the case, however.

  12 In addition to all of this work, they also had to modify the certificate to install their malware on Windows Vista machines, since in its original form it would not have been accepted by any system running Vista or a later version of the Windows operating system. The modification involved neutralizing an extension on the certificate. They didn’t strip the extension out entirely, which might have caused the certificate to fail the computer’s code-signing check; instead, they “commented out” a bit on the certificate, surrounding it with markers so the machine would simply ignore the extension. This allowed it to work on Vista machines. Only 5 percent of the machines that Kaspersky saw infected with Flame had Windows Vista installed, however. Most of the machines were running Windows 7 or Windows XP.

  13 According to sources, Microsoft tried to investigate who had submitted the requests and how many requests for a certificate came in from this entity, but too much time had passed between when the certificate was issued—in February 2010—and when Flame was discovered in 2012. Microsoft’s logs get rewritten over time, and the logs for that time period were no longer available.

  14 Dutch cryptographer and academic Marc Stevens, who with colleague Benne de Weger developed one of the first practical MD5 hash collision attacks for research purposes in 2007, described the Flame attack as “world-class cryptanalysis” that broke new ground and went beyond the work they and others had done with collisions. Stevens and de Weger were part of a group of researchers, including Alexander Sotirov, who demonstrated a similar, though technically different, collision attack in 2008 at the Chaos Communication Congress, a hacker conference held annually in Germany. They used a cluster of two hundred PlayStation 3s to do the computational work of generating an identical hash for a certificate. Their certificate masqueraded as a different company, not as Microsoft. When they conducted their experiment, however, they kept guessing the wrong timestamp and had to generate a hash four times before they got it right. When the Flame attack was discovered in 2012, Sotirov estimated that it was ten to a hundred times more difficult to pull off than the attack he and his colleagues had done. Slides for the presentation by Sotirov and his colleagues can be found at events.ccc.de/congress/2008/Fahrplan/attachments/1251_md5-collisions-1.0.pdf.
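  To see why a colliding certificate is so valuable, consider that a certificate authority’s signature covers only the hash of the certificate body, never the body itself. The following is a minimal illustrative sketch in Python, with a keyed HMAC standing in for the CA’s RSA signature and placeholder bytes standing in for a real colliding pair (producing which is the hard cryptanalytic work described above):

```python
import hashlib
import hmac

def ca_sign(tbs: bytes, ca_key: bytes) -> bytes:
    # The CA signs only the MD5 digest of the to-be-signed (TBS)
    # certificate bytes, not the bytes themselves. (HMAC stands in
    # for RSA purely for illustration.)
    digest = hashlib.md5(tbs).digest()
    return hmac.new(ca_key, digest, hashlib.sha256).digest()

def verify(tbs: bytes, signature: bytes, ca_key: bytes) -> bool:
    # Verification likewise sees only the digest of the TBS bytes.
    return hmac.compare_digest(ca_sign(tbs, ca_key), signature)

benign_tbs = b"CN=harmless licensing certificate..."
sig = ca_sign(benign_tbs, ca_key=b"demo-key")

# A collision attack crafts DIFFERENT tbs bytes whose MD5 digest equals
# md5(benign_tbs). Because verification depends only on that digest, the
# CA's signature transfers to the rogue certificate unchanged.
rogue_tbs = benign_tbs  # placeholder standing in for a colliding body
assert hashlib.md5(rogue_tbs).digest() == hashlib.md5(benign_tbs).digest()
print("rogue certificate verifies:", verify(rogue_tbs, sig, ca_key=b"demo-key"))
```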

  15 It should be noted that after going to all of this trouble to obtain their rogue certificate, the attackers still should not have been able to use it to sign their malicious code. But they were able to do so because Microsoft had failed to implement restrictions that would have designated the certificates it issued for TS Licensing for “software licensing” purposes only.
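  The missing restriction is an Extended Key Usage (EKU) check. As a rough sketch of what a stricter verifier would do, assuming Python’s cryptography package (the function name is hypothetical), a certificate would be accepted for code signing only if its EKU extension explicitly allows it:

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def may_sign_code(pem_data: bytes) -> bool:
    """Accept a certificate for code signing only if its Extended Key
    Usage extension explicitly permits it (OID 1.3.6.1.5.5.7.3.3)."""
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        # No EKU at all: refuse to treat it as a code-signing cert,
        # rather than allowing it to sign anything.
        return False
    return ExtendedKeyUsageOID.CODE_SIGNING in eku
```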

  16 This low-rent certificate would allow the malware to at least slip past Windows XP machines, though not Windows Vista machines, which had stronger security.

  17 Some would say, however, that this attack was even worse than subverting the Microsoft Windows Update servers to deliver malicious software. Attackers who subverted those servers could send malicious software to customers from Microsoft’s own servers, but customer machines would still reject the code if it wasn’t also signed by Microsoft. By undermining Microsoft’s certificate process to sign their malicious code, the attackers didn’t need Microsoft’s Update servers at all: they could deliver their malware to machines from any server and pass it off as legitimate Microsoft code.

  18 Ellen Nakashima, “U.S., Israel Developed Flame Computer Virus to Slow Iranian Nuclear Efforts, Officials Say,” Washington Post, June 19, 2012.

  19 With Duqu, the attackers had launched their cleanup operation after news of the malware broke; the team behind Flame launched theirs about ten days before news of Flame broke, suggesting they had known in advance that their cover was about to be blown. The Kaspersky researchers had likely tipped them off inadvertently when they connected a test machine infected with Flame to the internet. As soon as the machine went online, the malware reached out to one of Flame’s command servers. The attackers must have realized the machine wasn’t on their list of targets, may even have identified it as a Kaspersky machine, and concluded that Flame’s days were numbered. In a panic, they wiped the command servers and sent out a kill module, called Browse32, to infected machines to erase any trace of the malware so victims would never know they had been infected.

  The cleanup campaign was mostly successful, but Browse32 had a fatal flaw: it left behind one telltale file, ~DEB93D.tmp, that gave it away. This was a temporary file that got created whenever Flame performed any of several operations on an infected machine; once the operation was done, Flame was supposed to delete the temp file automatically. For that reason the attackers hadn’t put it on the list of files Browse32 was supposed to delete, since they weren’t expecting it to be on machines. In a twist of fate, however, if the Browse32 kill module arrived on a machine while Flame was still performing one of the operations that created the temp file, the kill module erased Flame before Flame could delete the temporary file. Kaspersky found the orphan temp file abandoned on hundreds of systems that had been infected with Flame. It was this file, in fact, left behind on a machine in Iran, that led the Kaspersky researchers to stumble across Flame in the first place.
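  The sequence of events reduces to a simple ordering bug. A toy reconstruction in Python (the module names are illustrative):

```python
# Files on an infected machine while a Flame operation is in progress.
# The operation would normally delete ~DEB93D.tmp itself when it finished.
disk = {"mssecmgr.ocx", "advnetcfg.ocx", "~DEB93D.tmp"}

# Browse32's kill list omitted the temp file, since Flame was expected
# to have cleaned it up already.
KILL_LIST = {"mssecmgr.ocx", "advnetcfg.ocx"}

def browse32(files: set) -> set:
    # Wipe everything on the kill list, and only that.
    return files - KILL_LIST

disk = browse32(disk)
# Flame itself is now gone, so the step that would have deleted the temp
# file never runs; the orphan survives as forensic evidence.
assert disk == {"~DEB93D.tmp"}
```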

  20 This wasn’t the only mistake they made. They also botched the cleanup operation on the servers they could access. They had created a script called LogWiper.sh to erase activity logs on the servers and prevent anyone from seeing the actions they had taken on the systems. Once the script finished its job, it was supposed to erase itself, like an Ouroboros serpent consuming its own tail. But the attackers bungled the delete command inside the script by referring to the script file by the wrong name: instead of commanding the script to delete LogWiper.sh, they commanded it to delete logging.sh. As a result, the script couldn’t find itself and got left behind on servers for Kaspersky to find. Also left behind were the names or nicknames of the programmers who had written the scripts and developed the encryption algorithms and other infrastructure Flame used. The names appeared in the source code for some of the tools they developed. It was the kind of mistake inexperienced hackers would make, so the researchers were surprised to see it in a nation-state operation. One of them, nicknamed Hikaru, appeared to be the team leader and created much of the server code, including its sophisticated encryption; Raiu referred to him as a master of encryption. Another, named Ryan, had worked on some of the scripts.
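  The self-delete bug amounts to a hard-coded filename that didn’t match the script’s real name. A Python transposition of the mistake (the log paths are illustrative, not taken from the actual servers):

```python
import os

def wipe_logs(paths):
    # Erase activity logs so investigators can't reconstruct the session.
    for path in paths:
        if os.path.exists(path):
            os.remove(path)

wipe_logs(["/var/log/httpd/access_log", "/var/log/secure"])

# The attackers' mistake, transposed: the script tries to delete itself
# under a hard-coded name that doesn't match its actual filename, so the
# cleanup tool itself survives on disk for researchers to find.
try:
    os.remove("logging.sh")  # wrong name: nothing is deleted
except FileNotFoundError:
    pass
# Correct self-deletion would reference the script's real path, e.g.:
# os.remove(os.path.abspath(sys.argv[0]))
```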

  21 The attackers seemed to have managed their project like a tightly run military operation, with multiple teams handling carefully compartmentalized tasks. There was a management team that oversaw the operation and chose the victims; there were coders who created the Flame modules and a command-and-control team who set up and managed the servers, delivered the Flame modules to infected machines, and retrieved stolen data from machines; and finally there was an intelligence team responsible for analyzing the stolen information and submitting requests for more files to be purloined from machines that proved to have valuable data. It was exactly the kind of setup that the Snowden documents suggested the NSA had.

  The team operating the command servers had limited visibility into the overall operation and may not even have known the true nature of the missions their work facilitated. The process for uploading new modules to infected machines was tightly controlled so that neither they nor any outsiders who might gain access to the servers could alter the modules or create new ones to send to infected machines. The command modules, for example, were delivered to the servers prewritten, where they got automatically parsed by the system and placed in a directory for delivery to victims by the server team, who only had to press a button to send them on their way. Data stolen from victims was also encrypted with a sophisticated algorithm and a public key. The private key to decrypt it was nowhere to be found on the server, suggesting that the data was likely passed to a separate team who were the only ones capable of decrypting and examining it.
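  That division of keys is straightforward to illustrate. A minimal sketch using RSA-OAEP from Python’s cryptography package; a real system would likely use a hybrid scheme to wrap a symmetric key, but the property is the same: the command server holds only the public half, so nothing on it can decrypt the data it relays.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key pair generated offline; only the PUBLIC key is ever deployed
# to the command server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The server team can encrypt and relay stolen data...
ciphertext = public_key.encrypt(b"stolen document excerpt", OAEP)

# ...but only the separate team holding the private key can read it.
# With no private key on the server, there is nothing there to seize.
assert private_key.decrypt(ciphertext, OAEP) == b"stolen document excerpt"
```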

  22 The protocols were identified as Old Protocol, Old E Protocol, SignUp Protocol, and Red Protocol.

  23 Two names—Flame and Flamer—appeared in different parts of the code. Kaspersky decided to call the malware Flame, but Symantec opted to call it Flamer in their report about it.

  24 Gauss may at one point have contained the same Windows font exploit that Duqu had used to install itself on machines, though there was no sign of it. If it had been used, the attackers might have removed it after Microsoft patched the vulnerability it exploited in 2011.

  25 The attackers were checking to see whether a very specific program was installed on the machine, a program that was probably unique to the region where the machine was located. The target program was unknown, but the Kaspersky researchers said it began with an odd character, and they believed, therefore, that the program might have had an Arabic or Hebrew name.

  26 The discovery of SPE left two of the four pieces of malware used with the Flame servers undiscovered—SP and IP. Raiu guessed that SP was likely an early version of SPE that was not encrypted.

  27 Gauss’s files were named after elite mathematicians and cryptographers, but SPE adopted a more populist approach, using names such as Fiona, Sonia, Tiffany, Elvis, and Sam.

  28 When malicious files are submitted to VirusTotal, the website will send a copy of the file to any of the antivirus companies whose scanner failed to detect it, though it will sometimes send files that do get detected as well. The VirusTotal record for the submission of this early Stuxnet file shows that the file was submitted at least twice to the site, on November 15 and 24. Both times, only one out of thirty-six virus scanners on the site flagged the file as suspicious, which would have been good news for the attackers. Oddly, information is missing from the submission record that generally appears in the records of other files submitted to the site. The field indicating the total number of times the file was submitted is blank, as is the field indicating the country from which the file was submitted, which might have provided valuable intelligence about the location of the attackers, if they were the ones who submitted the file, or about the first victim, if the file was submitted by someone infected with it. It’s not clear whether that information was intentionally scrubbed from the record. VirusTotal was founded by a team of engineers in Spain, but Google acquired it in September 2012, just a couple of months before Symantec stumbled across this early Stuxnet version. Google did not respond to queries about why the data was missing from the record.

  29 The configuration data in this version of Stuxnet made it even clearer than in later versions that Stuxnet was seeking the precise setup at Natanz. The code indicated it was seeking a facility where the systems it targeted were labeled A21 through A28. Natanz had two cascade halls, Hall A and Hall B, and only Hall A had centrifuges in it when Stuxnet struck. The hall was divided into cascade rooms, or modules, each labeled Unit A21, A22, and so on, up to A28.

  30 Stuxnet 0.5 had an infection kill date of July 4, 2009. Once this date arrived, it would no longer infect new machines, though it would have remained active on machines it had already infected unless it got replaced by another version of Stuxnet. The next version of Stuxnet was released June 22, 2009, just two weeks before Stuxnet 0.5’s kill date.
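  A hypothetical reconstruction of such a kill-date guard is only a few lines of logic: spreading is gated on a hard-coded date, while nothing removes the payload from machines it already inhabits.

```python
from datetime import date

KILL_DATE = date(2009, 7, 4)  # Stuxnet 0.5's infection cutoff

def may_infect_new_machine(today: date = None) -> bool:
    # After the kill date, the code stops spreading, but this check
    # does nothing to deactivate copies already resident on machines.
    today = today or date.today()
    return today < KILL_DATE
```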

  31 Like later versions of Stuxnet, this one had the ability to update itself on infected machines that weren’t connected to the internet. It did this through peer-to-peer communication. All the attackers had to do was deliver an update from one of the command servers to a machine that was connected to the internet, or deliver it via a USB flash drive, and other machines on the local network would receive the update from that machine.
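  In outline, this kind of update mechanism only requires each infected machine to compare version numbers with its peers. A minimal sketch, hypothetical rather than Stuxnet’s actual protocol:

```python
class InfectedNode:
    def __init__(self, version: int, payload: bytes):
        self.version, self.payload = version, payload

    def sync_with(self, peer: "InfectedNode") -> None:
        # Pull the peer's payload if it is newer than our own.
        if peer.version > self.version:
            self.version, self.payload = peer.version, peer.payload

# Three machines on an isolated LAN, all running version 1.
lan = [InfectedNode(1, b"v1") for _ in range(3)]

# One machine receives version 2, via a command server or a USB drive.
lan[0].version, lan[0].payload = 2, b"v2"

# A few gossip rounds later, every machine on the LAN is current.
for _ in range(len(lan)):
    for node in lan:
        for peer in lan:
            node.sync_with(peer)

assert all(node.version == 2 for node in lan)
```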

  32 One other note about this version is that it had a driver file that caused a forced reboot of infected Windows machines twenty days after they were infected. It’s interesting to note that Stuxnet was discovered in 2010 after machines in Iran kept crashing and rebooting. Although the version of Stuxnet found on those machines was not 0.5, it raises the possibility that this version of Stuxnet or its driver might have been lurking on those machines and caused them to reboot repeatedly. Although VirusBlokAda never found Stuxnet 0.5 on the machines, they may simply have missed it.

  33 After discovering Stuxnet 0.5 in their archive, the Symantec researchers did a search for it and found a number of errant and dormant infections in Iran but also in the United States, Europe, and Brazil.

  34 The servers were set up in the United States, Canada, France, and Thailand. The command servers were designed to masquerade as an internet advertising firm called Media Suffix to conceal their true intention if someone were to gain access to them. The domains for the servers—smartclick.org, best-advertising.net, internetadvertising4u.com, and ad-marketing.net—each had the same home page for the fake advertising company, which had a tagline that read “Deliver What the Mind Can Dream.” The home page read: “The internet is widely becoming the hottest advertising and marketing medium in the world. MediaSuffix focuses extremely in the internet segment of advertising. MediaSuffix is ready to show your company how to capitalize on this unbelievable growing market. Don’t be left behind.… We offer clients an unparalleled range of creative answers to the varied needs of our clients.”

  35 Stuxnet 0.5 may have been unleashed earlier than November 2007, but this is the first record of its appearance. According to the compilation date found in the Stuxnet component submitted to VirusTotal, it was compiled in 2001, though Chien and O’Murchu believe the date is inaccurate.

  36 Author interview conducted with Chien, April 2011.

  CHAPTER 16

  OLYMPIC GAMES

  In 2012, Chien may have been contemplating the dark and complicated future Stuxnet wrought, but four years earlier, the architects of the code were contemplating a different dark future if Iran succeeded in building a nuclear bomb.

  In April 2008, President Ahmadinejad took a much-publicized tour of the enrichment facilities at Natanz, to mark the second anniversary of the plant’s operation, and in the process gave arms-control specialists their first meaningful look inside the mysterious plant. Wearing the white lab coat and blue shoe booties of plant technicians, Ahmadinejad was snapped by photographers as he peered at a bank of computer monitors inside a control room, flashed an ironic “peace” sign at the cameras, and led an entourage of stern-looking scientists and bureaucrats down two rows of gleaming, six-foot-tall centrifuges standing erect at attention like military troops in full dress trotted out for inspection.

  The president’s office released nearly fifty images of the tour, thrilling nuclear analysts with their first peek at the advanced IR-2 centrifuges they had heard so much about. “This is intel to die for,” one London analyst wrote of the images.1

  But among the retinue accompanying Ahmadinejad on his visit to Natanz was the Iranian defense minister—an odd addition to the party given Iran’s insistence that its uranium enrichment program was peaceful in nature.

  Iranian technicians had spent all of 2007 installing 3,000 centrifuges in one of the underground halls at Natanz, and during his visit Ahmadinejad announced plans to begin adding 6,000 more, putting Iran in the company of only a handful of nations capable of enriching uranium at an industrial level. It was a sweet triumph over the many obstacles Iran had faced in the past decade—including technical difficulties, procurement hurdles and sanctions, and all of the political machinations and covert sabotage that had been aimed at stopping its program. The success of the enrichment program now seemed assured.

  But Natanz wasn’t out of the woods just yet. Producing enriched uranium at an industrial scale required thousands of centrifuges spinning at supersonic speed for months on end with little or no interruption.2 And while Ahmadinejad was taking his victory lap among the devices, something buried deep within the bits and bytes of the machines that controlled them was preparing to stir up more trouble.

  IT WAS LATE in 2007 when President Bush reportedly requested and received from Congress $400 million to fund a major escalation in covert operations aimed at undermining Iran’s nuclear ambitions. The money was earmarked for intelligence-gathering operations, political operations to destabilize the government and stimulate regime change, and black-ops efforts to sabotage equipment and facilities used in the nuclear program.3 The latter included the experimental efforts to manipulate computer control systems at Natanz.

  Although Bush’s advisers had reportedly proposed the digital sabotage sometime in 2006, preparations for it had begun long before this, possibly even years before, if timestamps in the attack files are to be believed—the malicious code blocks that Stuxnet injected into the 315 and 417 PLCs had timestamps that indicated they had been compiled in 2000 and 2001, and the rogue Step 7 .DLL that Stuxnet used to hijack the legitimate Siemens Step 7 .DLL had a 2003 timestamp.4

 
