Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  Raiu had joined the company in 2000 at the age of twenty-three, when it had just a few dozen employees. He was hired to work on its Prague project, the name the company gave the next-generation antivirus engine it was building.

  Growing up in Communist Romania, Raiu was passionate not about computers but about chemistry. He was fascinated by the combustible reaction of certain chemicals when mixed and by the fundamental knowledge that chemistry imparted about the nature and structure of the world. But when one of his experiments nearly blew up his parents’ apartment, they bought him a locally made PC clone to steer him toward less lethal pursuits. It wasn’t long before he’d taught himself programming and, while still a teenager, designed an antivirus engine from scratch called RAV.

  His work on RAV began when his high school network got infested with a virus the school’s antivirus scanner didn’t detect. Raiu spent a night writing signatures and crafting a detection tool for it. Over time, he added more code and features, and eventually began distributing it for free under the name MSCAN. When word of his creation got out, a Romanian entrepreneur hired him to work for his company, GeCAD Software, which began marketing his program under the name RAV, for Romanian Anti-Virus. It quickly became the company’s top-selling product, reliably beating out competitors in test after test, which drew the attention of Microsoft. In 2003, the software giant acquired RAV from GeCAD, but by then Raiu had already jumped ship to work for Kaspersky.14

  Kaspersky Lab was relatively unknown at the time in the United States, where Symantec and McAfee dominated the antivirus market. As a Russian firm, Kaspersky faced a battle of mistrust in the West—particularly since founder Eugene Kaspersky had been schooled in a KGB-backed institute and had served in Russia’s military intelligence. But the company slowly made a name for itself in eastern Europe and elsewhere, particularly in the Middle East, where the United States and US firms faced a similar battle of mistrust.

  Raiu began with Kaspersky as a programmer, but in 2004 when the company launched a research team to investigate and reverse-engineer malware, Raiu joined the group. In 2010, he became its director, overseeing research teams on several continents. Now, with the discovery of Duqu, several of these teams went into action.

  The technical work was led by Gostev, a whippet-thin analyst with short, light brown hair and a slight stoop that was suggestive of all the hours he spent bent in concentration over a computer. As he and his colleagues picked through the code, they were struck by a number of things.

  One particularly interesting part was the component the attackers used to download additional payload modules to a victim’s machine to siphon data. Unlike every other Duqu and Stuxnet module, this one was written not in C or C++ but in a language Gostev and Raiu had never seen before. They tried for weeks to identify it and even consulted experts on programming languages, but still couldn’t figure it out. So they put out a call for help on their blog and were finally able to conclude, piecing bits of clues together, that the attackers had employed a rarely used custom dialect of C, along with special extensions to contort the code and make it small and portable.15 It was a programming style common to commercial software programs produced a decade ago, but not to modern-day programs, and certainly not to malware. It was clear these weren’t hot-shot coders using the latest techniques, but old-school programmers who were cautious and conservative. Sometimes C++ could produce compiled code that was unpredictable and executed in unintentional ways. So the attackers had chosen C instead, Raiu surmised, to give them the greatest control over their malicious code, then modified it during compilation to make it more compact and easy to deliver to victims.16

  Their constraints on Duqu extended to its spreading mechanisms. In this regard Duqu was as tightly controlled as Stuxnet had been uncontrolled. The attack didn’t appear to have any zero-day exploits to help it spread, and it also couldn’t spread autonomously as Stuxnet did. Instead, once on a machine, it would infect other machines only if the attackers manually sent instructions from their command server to do so.17 Duqu was also much stealthier in communicating with its command servers than Stuxnet had been.18 The communication was encrypted with a strong encryption algorithm known as AES, to prevent anyone from reading it, and was also tucked inside a .JPEG image file to help conceal it. And unlike Stuxnet, which struck more than 100,000 machines, researchers would eventually uncover only about three dozen Duqu infections.19
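The concealment trick described above can be illustrated with a short sketch. This is not Duqu’s actual code or protocol, only a dependency-free Python illustration of the general idea: encrypt the stolen data, then append it after a JPEG’s end-of-image marker (FF D9), where image viewers stop reading. The real malware used AES; a single-byte XOR stands in for it here purely to keep the sketch self-contained, and all names are hypothetical.

```python
# Illustrative sketch only: smuggling "encrypted" data inside a JPEG
# by appending it after the end-of-image (EOI) marker, which image
# viewers ignore. A single-byte XOR stands in for the AES encryption
# the real malware used; all names here are hypothetical.

EOI = b"\xff\xd9"  # JPEG end-of-image marker

def xor(data: bytes, key: int) -> bytes:
    # Trivial stand-in for AES encryption/decryption.
    return bytes(b ^ key for b in data)

def embed(jpeg: bytes, secret: bytes, key: int) -> bytes:
    # Append the encrypted payload after the carrier's EOI marker.
    assert jpeg.endswith(EOI), "not a complete JPEG stream"
    return jpeg + xor(secret, key)

def extract(blob: bytes, key: int) -> bytes:
    # Recover everything after the first EOI marker and decrypt it.
    # (In a well-formed JPEG, byte-stuffing keeps FF D9 out of the
    # image data, so the first occurrence is the real EOI.)
    pos = blob.find(EOI)
    return xor(blob[pos + len(EOI):], key)

# A minimal stand-in for a real JPEG: SOI header, dummy body, EOI footer.
fake_jpeg = b"\xff\xd8" + b"\x00" * 16 + EOI
stego = embed(fake_jpeg, b"stolen data", key=0x5A)
assert stego.startswith(fake_jpeg)          # still a valid-looking image
assert extract(stego, key=0x5A) == b"stolen data"
```

To an observer, the resulting file still opens as an ordinary image; only someone who knows to look past the EOI marker, and holds the key, can recover the payload.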

  The victims were scattered among various countries and ranged from military targets to manufacturers of industrial equipment, such as pipes and valves. All of them appeared to have been carefully targeted for their “strategic assets”—products they produced or services they rendered.20 Not surprisingly, many of the victims Kaspersky uncovered had a connection to Iran; either they had an office in the Islamic Republic or they had some kind of trade relationship with Iran. The only victim so far that didn’t appear to have a connection to Iran was the company in Hungary that discovered the attack.

  Based on information gleaned from log files provided by some of the victims, the attackers appeared to be particularly interested in swiping AutoCAD files—especially ones related to industrial control systems used in various industries in Iran. AutoCAD, a computer-aided design program, is used for drafting 2D and 3D architectural blueprints and designing computer boards and consumer products; but it’s also used for mapping the layout of computer networks and the machinery on plant floors. The latter would come in handy for someone planning to bomb a factory or launch a digital attack like Stuxnet.

  The attackers were systematic in how they approached their victims, compiling new attack files for each target and setting up separate command servers throughout Europe and Asia so that only two or three infected machines reported to a single server. This segmentation no doubt helped them track different operations and sets of victims, but it also ensured that if any outsider got access to one of the servers, their view of the operation would be very limited. The servers, in fact, turned out to be proxy machines—way stations for the attackers to redirect stolen data to other machines—to further prevent anyone from seeing the entire operation or tracking stolen data back to the attackers. Data from the victim in Hungary, for example, was first sent to a server in India before being redirected to one in the Philippines, where it was sent somewhere else. Data from victims in Iran went to a server in Vietnam before going to Germany and somewhere beyond. The researchers tried to follow the trail, but after hitting three different proxies in a row each time, they figured they’d never reach the end of the trail, and gave up.

  With help from some of the companies that hosted the servers, however, Kaspersky obtained mirror images of five of the machines, including one in Vietnam that controlled infections in Iran. They discovered that on October 20, two days after Symantec had gone public with news of Duqu, the attackers had conducted a massive cleanup operation in a panicked attempt to scrub data from the servers. Why it took them two days to respond to the news was unclear.21 But in their haste to eliminate evidence, they left behind traces of logs that provided Kaspersky with clues about their activity.22 The logs showed, for example, that the attackers had signed into one of the command servers in Germany in November 2009, two years before Duqu was discovered. This suggested that Duqu was likely in the wild for at least that long. Perhaps, the Kaspersky researchers posited, Duqu was really a precursor to Stuxnet, not a successor to it, as Symantec assumed. It wouldn’t be long before they found the evidence to support this.

  IT WAS INITIALLY unclear to anyone how Duqu infected machines. Stuxnet had used the .LNK exploit embedded on USB flash drives to drop its malicious cargo. But the CrySyS Lab had found no dropper on machines at Bartos’s company and no zero-day exploits, either. After Symantec published its paper about Duqu, however, Chien asked Bencsáth to have the Hungarian victim search their systems again for anything suspicious that occurred around August 11, the date the infection occurred. That’s when they found an e-mail that had come in then with a Word document attached to it. The attachment was 700k in size—much larger than any documents the company usually received—which drew their attention. Sure enough, when the CrySyS team opened the e-mail on a test system in their lab, Duqu’s malicious files dropped onto it.23

  Given that the attack code had gone undetected until now, the CrySyS guys suspected a zero-day exploit was at play. Bencsáth sent the dropper to the Symantec team, who determined that it was indeed exploiting a zero-day buffer-overflow vulnerability in the TrueType font-parsing engine for Windows. The font-parsing engine was responsible for rendering fonts on-screen. When font code for a character appeared in a Word document, the engine consulted the proper font file to determine how the character should look. But in this case when the engine tried to read the font code, a vulnerability in the parsing engine triggered the exploit instead.

  The exploit was quite “badass,” in the words of one researcher, because a normal exploit attacking a buffer-overflow vulnerability generally got hackers only user-level access to a machine, which meant they needed a second vulnerability and exploit to get them administrative-level privileges to install their malicious code undeterred.24 But this exploit cut through layers of protection to let them install and execute malicious code at the kernel level of the machine without interference. Buffer-overflow vulnerabilities that could be exploited at the kernel level are rare and difficult to exploit without causing the machine to crash, but the Duqu exploit worked flawlessly. It was several orders of magnitude more sophisticated than the .LNK exploit Stuxnet had used. The .LNK exploit had been copied by cybercriminals in no time after Stuxnet was exposed in July 2010, but this one would take months before anyone would successfully replicate it.25

  The exploit was notable in itself, but the attackers had also embedded a couple of Easter eggs in their code—perhaps to taunt victims. The name they gave the fake font that executed their attack was Dexter Regular, and in the copyright notice for the fake font they wrote—“Copyright © 2003 Showtime Inc. All rights reserved. Dexter Regular.”26

  It was clear they were referencing the popular TV show Dexter, which was then airing on the Showtime network. But the show didn’t begin airing until October 1, 2006, which made the 2003 copyright date seem odd. It was unclear if the Easter egg had any meaning or if it was just a joke. But the reference did appear to have one parallel to Stuxnet. The Showtime series focused on Dexter Morgan, a forensic scientist and vigilante killer who only murdered criminals, making him a killer with a moral code—a murderer who killed for the sake of the greater societal good. At least that’s how Dexter saw it. Arguably, it was also the way the United States and Israel might have viewed the cyberattack on Iran, or the attacks on Iran’s nuclear scientists—as a means to a greater good.27

  The font name and copyright date offered a bit of distraction for the researchers, but the more notable part of the dropper was its compilation date—February 21, 2008—providing a clue about how long Duqu might have been around. Not long after this dropper was found, Kaspersky got its hands on a second one, found on a machine in Sudan, that had been compiled even earlier.28

  Sudan had close military ties to Iran—it received $12 million worth of arms from Iran between 2004 and 2006—and was a vocal supporter of Iran’s nuclear program. In 2006, Iran had publicly vowed to share its nuclear expertise with Sudan. Sudan was also a target of UN sanctions. Duqu’s victim in Sudan was a trade services firm that had been infected in April 2011, four months before the infection in Hungary. The malicious code arrived via a phishing attack using the same Dexter zero-day exploit that was used in Hungary. The malicious e-mail, purporting to come from a marketing manager named B. Jason, came from a computer in South Korea, though the machine had likely been hacked to send the missive.29 “Dear Sir,” the e-mail read, “I found the details of your company on your website, and would like to establish business cooperation with your company. In the attached file, please see a list of requests.” The attached document contained a handful of survey questions as well as an image of a green Earth with plants sprouting from its top. When the victim opened the attachment, the Dexter exploit sprang into action and deposited its illicit cargo onto the victim’s machine.

  The dropper that installed Duqu in this case had been compiled in August 2007, which further confirmed that Duqu had been around for years before its discovery in Hungary. This wasn’t the only evidence supporting that early timeline, however. The researchers also found evidence that Duqu’s infostealer file had existed years earlier as well. They only stumbled upon this clue because of a mistake the attackers had made.

  When Duqu’s self-destruct mechanism kicked in after thirty-six days, it was supposed to erase all traces of itself from infected machines so a victim would never know he had been hit. But the Kaspersky team discovered that when Duqu removed itself, it forgot to delete some of the temporary files it created on machines to store the data it stole. One of these files, left behind on a machine in Iran, had been created on the machine on November 28, 2008.

  Kaspersky and Symantec had always suspected that prior to Stuxnet’s assault on the centrifuges in Iran, the attackers had used an espionage tool to collect intelligence about the configuration of the Siemens PLCs. The information could have come from a mole, but now it seemed more likely that a digital spy like Duqu had been used.

  It seemed plausible that the Stuxnet attackers might also have used Duqu to steal the digital signing keys and certificates from RealTek and JMicron, since this was the tool they had used against the certificate authority in Hungary.

  If Duqu had indeed been in the wild infecting systems undetected since 2007, or longer, its sudden discovery in Hungary in 2011 seemed strange. Why now? Raiu wondered. He concluded that it must have been a case of hubris and a bad choice of target. After remaining stealthy for so long, the attackers grew confident that they’d never get caught. They likely considered Stuxnet’s discovery the previous year an anomaly that occurred only because the digital weapon had spread too far. But Duqu was carefully controlled and its targets handpicked, which made its discovery less likely. Except, in Hungary, the attackers finally picked the wrong target. The Hungarian certificate authority was much more security conscious than the trading companies and manufacturers Duqu had previously hit. And this was Team Duqu’s failing.30

  Though Stuxnet and Duqu shared some of the same code and techniques, Raiu and his team ultimately concluded that they had been built by separate teams from the same base platform, a platform they dubbed “Tilde-d”—because both Stuxnet and Duqu used files with names that began with ~D.31

  In fact, Kaspersky discovered evidence that an arsenal of tools might have been built from the same platform, not just Stuxnet and Duqu. They found at least six drivers that shared characteristics and appeared to have been built on the Tilde-d platform. Two of them had been used in the known Stuxnet attacks, and a third one was the driver that had been used with Duqu.32 But they also found three “phantom drivers” that were discovered by themselves, without any Stuxnet or Duqu files with them, making it difficult to determine if they had been used with either of these attacks or with different attacks altogether. All three of the drivers used algorithms and keys that were the same as or similar to those that the Stuxnet and Duqu drivers used, making it clear they were connected to the Tilde-d team.

  The first of these was the driver that had been found in July 2010 by the Slovakian antivirus firm ESET and was signed with the JMicron certificate.33 Because the driver was found days after the news of Stuxnet broke, everyone assumed it was related to Stuxnet, though it was not found on any system infected with Stuxnet. The driver was a hybrid of the Stuxnet and Duqu drivers, using code that was nearly identical to the Stuxnet driver and some of the same functions and techniques that the Duqu driver used. But it also used a seven-round cipher for its encryption routine instead of the four-round cipher that Stuxnet’s driver used, making it more complex. This made Raiu and Gostev suspect it was designed for a different variant of Stuxnet or different malware altogether.

  The second phantom driver was discovered when someone submitted it to VirusTotal.34 It was compiled on January 20, 2008. It also had a seven-round cipher, suggesting that it and the JMicron driver might have been created for use with the same attack—perhaps with a different version of Stuxnet or something else altogether.

  The third mystery driver was also submitted to VirusTotal, from an IP address in China on May 17, 2011, months before Duqu infected the Hungarian machines in August.35 This driver used a four-round cipher like the Stuxnet drivers and an identical encryption key; it was also compiled the same day the Stuxnet drivers were compiled and was signed with the RealTek certificate that had been used to sign Stuxnet’s drivers, though it was signed March 18, 2010, instead of January 25, 2010, the date the Stuxnet drivers were signed. March 18 was just weeks before the attackers unleashed their April 2010 variant of Stuxnet, but for some reason they didn’t use this driver with that assault. Instead, they reused the driver from the June 2009 attack. This suggested that the third phantom driver might have been prepared for a different attack.

  The burning questions for Gostev and Raiu, of course, were what attacks were the phantom drivers created for and who were their victims? Were they evidence that other undetected Stuxnet attacks had occurred prior to June 2009 or after April 2010?

  It seemed the story of Stuxnet was still incomplete.

 
