Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon
Aside from the complex ways Stuxnet loaded its files and bypassed security software, it used an extensive checklist to ensure all conditions were ideal on a machine before unleashing its payload. It also carefully tracked all of the resources it used on a machine and made sure to free up each as soon as it was no longer needed to reduce the amount of processing power Stuxnet consumed on the machine—if Stuxnet used too much power, it ran the risk of slowing the machine down and being discovered. It also overwrote many of the temporary files it created on a machine once they were no longer needed. All software programs create temporary files, but most don’t bother to delete them, since they’d just be overwritten by the temporary files other applications create. The attackers didn’t want Stuxnet’s files lingering on a system for long, however, because it raised the risk that they’d be seen.
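The overwrite-before-delete trick is simple to picture. Here is a minimal Python sketch of the general technique, zeroing a temporary file before removing it so its contents cannot be recovered from the disk afterward; it illustrates the idea only, not Stuxnet's own routine, which was native Windows code:

```python
import os

def scrub_and_delete(path: str) -> None:
    """Overwrite a temporary file with zeros, then delete it.
    A generic sketch of the technique, not Stuxnet's actual code."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)  # replace the file's contents in place
        f.flush()
        os.fsync(f.fileno())     # force the zeros out to the disk
    os.remove(path)              # remove the directory entry last
```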
But despite all the extra effort the attackers put into their code, there were several parts that seemed oddly underdesigned. O’Murchu wasn’t the only one who thought so. As the Symantec researchers published their findings about Stuxnet over several weeks, members of the security community began grumbling online about the code’s many failings, insisting that its authors weren’t nearly the elite cadre of hackers original reports made them out to be. Their technical prowess was inconsistent, some said, and they made a number of mistakes that allowed investigators to see more easily what they were trying to do.
Stuxnet, for example, would have been much more difficult to decipher had the attackers used better obfuscation to thwart the researchers’ forensic tools—such as more sophisticated encryption techniques that would prevent anyone except the target machines from unlocking the payload or even identifying that Stuxnet was targeting Siemens Step 7 software and PLCs. Stuxnet also used weak encryption and a standard protocol to communicate with its command-and-control servers instead of custom-written ones that would have made it more difficult for researchers to establish their sinkhole and read the malware’s traffic.
Cryptographer Nate Lawson’s comments dripped with disdain when he wrote in a blog post that Stuxnet’s authors “should be embarrassed at their amateur approach to hiding the payload” and their use of outmoded methods that criminal hackers had long since surpassed. “I really hope it wasn’t written by the USA,” he wrote, “because I’d like to think our elite cyberweapon developers at least know what Bulgarian teenagers did back in the early 90s.”6 The mix of state-of-the-art tactics and Hacker 101 techniques made Stuxnet seem like a “Frankenstein patchwork” of well-worn methods, others said, rather than the radical skunkworks project of an elite intelligence agency.7
But O’Murchu had a different take on Stuxnet’s inconsistencies. He believed the attackers deliberately used weak encryption and a standard protocol to communicate with the servers because they wanted the data traveling between infected machines and the servers to resemble normal communication without attracting unusual attention. And since communication with the servers was minimal—the malware transmitted only limited information about each infected machine—the attackers didn’t need more advanced encryption to hide it. As for securing the payload better, there may have been limitations that prevented them from using more sophisticated techniques, such as encrypting it with a key derived from extensive and precise configuration data on the targeted machines so that only those machines could unlock it.8 The targeted machines, for example, may not have had the same exact configuration, making it difficult to use a single payload encryption key, or there may have been concerns that the configuration on the machines could change, rendering such a key useless and preventing the payload from triggering.
Stuxnet’s failings may also have been the consequence of time constraints—perhaps something caused the attackers to launch their code in a rush, resulting in last-minute work that seemed sloppy or amateurish to critics.
But there was another possible explanation for the patchwork of techniques used in the threat—Stuxnet was likely created by different teams of coders with different skills and talents. The malware’s modular nature meant development could have been done by different teams who worked on various parts simultaneously or at different times. O’Murchu estimated it took at least three teams to code all of Stuxnet—an elite, highly skilled tiger team that worked on the payload that targeted the Siemens software and PLCs; a second-tier team responsible for the spreading and installation mechanisms that also unlocked the payload; and a third team, the least skilled of the bunch, that set up the command-and-control servers and handled the encryption and protocol for Stuxnet’s communication. It was possible the division of responsibilities was so well defined and the teams so compartmentalized that they never interacted.
But although each of the teams had varying levels of skill and experience, they were all at least uniform in one thing—none of them had left any clues behind in the code that could be easily used to track them. Or so it seemed.
ATTRIBUTION IS AN enduring problem when it comes to forensic investigations of hack attacks. Computer attacks can be launched from anywhere in the world and routed through multiple hijacked machines or proxy servers to hide evidence of their source. Unless a hacker is sloppy about hiding his tracks, it’s often not possible to unmask the perpetrator through digital evidence alone.
But sometimes malware writers drop little clues in their code, intentional or not, that can tell a story about who they are and where they come from, if not identify them outright. Quirky anomalies or footprints left behind in seemingly unrelated viruses or Trojan horses often help forensic investigators tie families of malware together and even trace them to a common author, the way a serial killer’s modus operandi links him to a string of crimes.
Stuxnet’s code was more sterile than the malware Chien and O’Murchu usually saw. But two things about it did stand out.
Chien was sifting through the notes they had taken on Stuxnet’s initial infection dance one day, when something interesting caught his eye—an infection marker that prevented Stuxnet from installing itself on particular machines. Each time Stuxnet encountered a potential new victim, before it began the process of decrypting and unpacking its files, it checked the Windows registry on the machine for a “magic string” composed of a letter and numbers—0x19790509. If it found the string, Stuxnet withdrew from the machine and wouldn’t infect it.
Chien had seen “inoculation values” like this before. Hackers would place them in the registry key of their own computers so that after unleashing attack code in a test environment or in the wild, it wouldn’t come back to bite them by infecting their own machine or any other computers they wanted to protect. Inoculation values could be anything a hacker chose. Generally, they were just random strings of numbers. But this one appeared to be a date—May 9, 1979—with the year listed first, followed by the month and day, a common Unix programming format for dates. Other number strings that appeared in Stuxnet, and that the researchers knew for certain were dates, were written in the same format.
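The check itself amounts to only a few lines of code. Here is a minimal Python sketch of how an inoculation-value lookup like the one Chien found might work; the registry key and value names below are hypothetical stand-ins rather than Stuxnet's actual location, and the real malware performed the equivalent check through the native Windows API:

```python
import winreg  # Windows-only standard-library module

# Hypothetical key and value names, for illustration only.
INOCULATION_KEY = r"SOFTWARE\Example\Markers"
INOCULATION_VALUE = "Trace"
MAGIC = 0x19790509  # the "do not infect" marker described above

def machine_is_inoculated() -> bool:
    """Return True if the registry carries the magic value,
    telling the malware to withdraw without installing."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, INOCULATION_KEY) as key:
            value, _ = winreg.QueryValueEx(key, INOCULATION_VALUE)
            return value == MAGIC
    except OSError:
        return False  # key or value absent: the machine is fair game
```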
Chien did a quick Google search for the day in question and was only half surprised when one of the results revealed a connection between Israel and Iran. The 1979 date was the day a prominent Iranian Jewish businessman named Habib Elghanian was executed by firing squad in Tehran shortly after the new government had seized power following the Islamic Revolution. Elghanian was a wealthy philanthropist and respected leader of the Iranian Jewish community until he was accused of spying for Israel and killed. His death marked a turning point in relations between the Jewish community and the Iranian state. For nearly forty years, while Mohammad Reza Shah Pahlavi had been in power, Iranian Jews had enjoyed a fairly amicable relationship with their Muslim neighbors, as did the Islamic nation with the state of Israel. But Elghanian’s execution, just three months after the revolution ousted the shah, was a “Kristallnacht” moment for many Persian Jews, making it clear that life under the new regime would be very different. The event sparked a mass exodus of Jews out of Iran and into Israel and helped fuel hostility between the two nations that persists today.
Was the May date in Stuxnet a “Remember the Alamo” message to Iran from Israel—something like the missives US soldiers sometimes scribbled onto bombs dropped on enemy territory? Or was it an effort by non-Israeli actors to implicate the Jewish state in the attack in order to throw investigators off their trail? Or was it simply a case of Chien having an active imagination and seeing symbols where none existed? All Chien could do was guess.
But then the Symantec team found another tidbit that also had a possible link to Israel, though it required more acrobatic leaps to make the connection. This one involved the words “myrtus” and “guava” that appeared in a file path the attackers left behind in one of the driver files. File paths show the folder and subfolders where a file or document is stored on a computer. The file path for a document called “my résumé” stored in a computer’s Documents folder on the C: drive would look like this—c:\documents\myresume.doc. Sometimes when programmers run source code through a compiler—a tool that translates human-readable programming language into machine-readable binary code—the file path indicating where the programmer had stored the code on his computer gets placed in the compiled binary file. Most malware writers configure their compilers to eliminate the file path, but Stuxnet’s attackers didn’t do this, whether by accident or by design. The path showed up as b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb in the driver file, indicating that the driver was part of a project the programmer had called “guava,” which was stored on his computer in a directory named “myrtus.” Myrtus is the genus of a family of plants that includes several species of guava. Was the programmer a botany nut, Chien wondered? Or did it mean something else?
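Finding such leftover paths is routine forensic work. A Python sketch along the following lines will surface embedded .pdb paths in a binary by scanning for printable strings; real tools parse the PE file's debug directory instead, but a raw scan of the bytes is often enough:

```python
import re
import sys

# Compiled Windows binaries often embed the .pdb debug path as a
# plain ASCII string, so a raw scan of the bytes can surface it.
PDB_PATTERN = re.compile(rb"[ -~]{4,}\.pdb", re.IGNORECASE)

def find_pdb_paths(filename: str) -> list[str]:
    with open(filename, "rb") as f:
        data = f.read()
    return [m.group().decode("ascii", "replace")
            for m in PDB_PATTERN.finditer(data)]

if __name__ == "__main__":
    for path in find_pdb_paths(sys.argv[1]):
        print(path)  # e.g. b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb
```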
Chien searched further for information about myrtus and found a tangential connection to another prominent event in Jewish history, when Queen Esther helped save the Jews of ancient Persia from massacre in the fourth century BCE. According to the story, Esther was a Jewish woman who was married to the Persian king Ahasuerus, though the king did not know she was Jewish. When she learned of a plot being hatched by the king’s prime minister, Haman, to kill all the Jews in the Persian Empire with the king’s approval, she went to the king and exposed her identity, begging the king to save her and her people. The king then had Haman executed instead and allowed the Jews in his empire to battle all the enemies that Haman had amassed for their slaughter, resulting in a victory for the Jews and 75,000 of their enemies dead. The Purim holiday, celebrated annually by Jewish communities around the world, commemorates this deliverance of Persian Jews from certain death.
On its face, the story appeared to have no relevance to Stuxnet at all. Except that Chien found a possible connection in Esther’s Hebrew name. Before changing her name and becoming the queen of Persia, Esther had been known by the name Hadassah. Hadassah in Hebrew means myrtle, or myrtus.
The parallels between ancient and modern Persia were not hard to draw, in light of current events. In 2005, news reports claimed that Iranian president Mahmoud Ahmadinejad had called for Israel to be wiped off the face of the map. Though subsequent reports determined that his words had been mistranslated, it was no secret that Ahmadinejad wished the modern Jewish state to disappear, just as Haman had wanted his Jewish contemporaries to disappear centuries before.9 And on February 13, 2010, around the same time that Stuxnet’s creators were preparing a new version of their attack to launch against machines in Iran, Rav Ovadia Yosef, an influential former chief rabbi of Israel and a political powerhouse, drew a direct line between ancient Persia and modern Iran in a sermon he gave before Purim. Ahmadinejad, he said, was the “Haman of our generation.”
“Today we have a new Haman in Persia, who is threatening us with his nuclear weapons,” Yosef said. But like Haman and his henchmen before, he said, Ahmadinejad and his supporters would find their bows destroyed and their swords turned against them to “strike their own hearts.”10
None of this, however, was evidence that the “myrtus” in Stuxnet’s driver was a reference to the Book of Esther. Especially since, as some later suggested, myrtus read another way could just as easily be interpreted as “my RTUs”—or “my remote terminal units.” RTUs, like PLCs, are industrial control components used to operate and monitor equipment and processes. Given that Stuxnet was targeting Siemens PLCs, it seemed just as possible that this was its real meaning.11 But who could say for sure?
The Symantec researchers were careful not to draw any conclusions from the data. Instead, in a blog post written by Chien and a colleague, they said simply, “Let the speculation begin.”12
* * *
1 Despite the fact that Conficker spread so rapidly and so successfully, it never really did anything to most of the machines it infected, leaving an enduring mystery about the motives for creating and unleashing it. Some thought the attackers were trying to create a giant botnet of infected machines to distribute spam or conduct denial-of-service (DoS) attacks against websites—a later variant of Conficker was used to scare some users into downloading a rogue antivirus program. Others feared it might install a “logic bomb” on infected systems that would cause data to self-destruct at a future date. But when none of these scenarios materialized, some thought Conficker might have been unleashed as a test to see how governments and the security industry would respond. The attack code morphed over time and used sophisticated methods to remain several steps ahead of researchers to prevent them from stamping out the worm altogether, leading some to believe the attackers were testing defenses. After Stuxnet was discovered, John Bumgarner, chief technology officer for the U.S. Cyber Consequences Unit, a consulting firm with primarily government clients, claimed Conficker and Stuxnet were created by the same attackers, and that Conficker was used as a “smokescreen” and a “door kicker” to get Stuxnet onto machines in Iran. As proof, he cited the timing of the two attacks and the fact that Stuxnet used one of the same vulnerabilities Conficker had used to spread. But Symantec and other researchers who examined Stuxnet and Conficker say they found nothing to support Bumgarner’s claim. What’s more, the first version of Conficker avoided infecting any machines in Ukraine, suggesting this may have been its country of origin.
2 Melissa wasn’t the first prolific attack, however. That honor is reserved for the Morris worm, a self-propagating program created by a twenty-three-year-old computer science graduate student named Robert Morris Jr., who was the son of an NSA computer security specialist. Although many of Stuxnet’s methods were entirely modern and unique, it owes its roots to the Morris worm and shares some characteristics with it. Morris unleashed his worm in 1988 on the ARPAnet, a communications network built by the Defense Department’s Advanced Research Projects Agency in the late 1960s, which was the precursor to the internet. Like Stuxnet, the worm did a number of things to hide itself, such as placing its files in memory and deleting parts of itself once they were no longer needed to reduce its footprint on a machine. But also like Stuxnet, the Morris worm had a few flaws that caused it to spread uncontrollably to some 6,000 machines, roughly a tenth of the computers then connected to the network, and be discovered. Whenever the worm encountered a machine that was already infected, it was supposed to halt the infection and move on. But because Morris was concerned that administrators would kill his worm by programming machines to tell it they were infected when they weren’t, he had the worm infect every seventh machine it encountered anyway. He forgot to take into account the interconnectedness of the ARPAnet, however, and the worm made repeated rounds to the same machines, reinfecting some of them hundreds of times until they collapsed under the weight of multiple versions of the worm running on them at once. Machines at the University of Pennsylvania, for example, were attacked 210 times in twelve hours. Shutting down or rebooting a computer killed the worm, but only temporarily. As long as a machine was connected to the network, it got reinfected by other machines.
3 Self-replicating worms—Conficker and Stuxnet being the exceptions—are far rarer than they once were, having largely given way to phishing attacks, where malware is delivered via e-mail attachments or through links to malicious websites embedded in e-mail.
4 Once virus wranglers extract the keys and match them to the algorithms, they also write a decryptor program so they can quickly decrypt other blocks of code that use the same algorithm. For example, when they receive new versions of Stuxnet or even other pieces of malware that might be written by the same authors and use the same algorithms, they don’t have to repeat this tedious process of debugging all of the code to find the keys; they can simply run their decryptor on it.
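To make the note concrete: assuming, purely for illustration, that the recovered scheme were a repeating-XOR cipher, a reusable decryptor in Python might look like the sketch below. The key shown is hypothetical; the point is only that once algorithm and key are known, new blocks decrypt instantly.

```python
def make_decryptor(key: bytes):
    """Return a function that undoes a repeating-XOR cipher with `key`.
    An illustrative stand-in for whatever algorithm/key pair the
    analysts actually recover while debugging a sample."""
    def decrypt(blob: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
    return decrypt

# Once the key is recovered, the same decryptor runs over encrypted
# blocks from any sample that reuses the scheme.
decrypt = make_decryptor(bytes.fromhex("deadbeef"))  # hypothetical key
plaintext = decrypt(b"\xbe\xef\xde\xad")  # stands in for a block lifted from a binary
```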
5 In some versions of Stuxnet the attackers had increased the time period to ninety days.
6 Nate Lawson, “Stuxnet Is Embarrassing, Not Amazing,” January 17, 2011, available at rdist.root.org/2011/01/17/stuxnet-is-embarrassing-not-amazing/#comment-6451.
7 James P. Farwell and Rafal Rohozinski, “Stuxnet and the Future of Cyber War,” Survival 53, no. 1 (2011): 25.
8 One method for doing this, as Nate Lawson points out in his blog post, is to take detailed configuration data on the targeted machine and use it to derive a cryptographic hash for a key that unlocks the payload. The key is useless unless the malware encounters a machine with the exact configuration, or unless someone is able to brute-force the key by cycling through possible combinations of configuration data until the correct one is found. But the latter can be thwarted by deriving the hash from a selection of configuration data extensive enough to make brute-forcing infeasible. Stuxnet did a low-rent version of the technique Lawson describes. It used basic configuration data about the hardware it was seeking to trigger a key to unlock its payload, but the key itself wasn’t derived from the configuration data and was independent of it. So once the researchers located the key, they could simply unlock the payload with it, without needing to know the actual configuration. Researchers at Kaspersky Lab did, however, later encounter a piece of malware that used the more sophisticated technique to lock its payload. That payload has never been deciphered as a result. See this page.
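A minimal Python sketch of the stronger technique Lawson describes, assuming the configuration can be serialized into a canonical byte string; the field names below are invented for illustration:

```python
import hashlib

def derive_payload_key(config_items: list[str]) -> bytes:
    """Derive a payload-decryption key by hashing the target's
    configuration data. Only a machine whose configuration hashes
    to the same digest can reproduce the key; everyone else is left
    brute-forcing combinations of configuration values."""
    canonical = "\n".join(sorted(config_items)).encode("utf-8")
    return hashlib.sha256(canonical).digest()  # 32 bytes, e.g. an AES-256 key

# The attacker derives the key from the known target configuration and
# encrypts the payload with it; the key itself never ships with the
# malware. On each candidate victim the derivation is repeated, and
# decryption simply fails wherever the configuration doesn't match.
key = derive_payload_key(["plc_model=6ES7-315-2", "modules=33"])  # hypothetical fields
```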