by Kim Zetter
He first honed his skills as a teenager in France breaking “crackme” files—code games that programmers wrote for one another to test their reverse-engineering skills. Coders would write small programs coated in an encrypted shell, and reverse-engineers had to crack the shell open and bypass other protections to unearth the secret message hidden inside, then send the message back to the author to prove they had solved it. Viruses and worms were, in one sense, just another type of crackme file, though some were more sophisticated than others. The only difference now was that Falliere got paid to crack them.
Falliere was born and raised near Toulouse in southern France, home of the Airbus aerospace corporation and a center for satellite technology. In a region dominated by engineers, aeronautical and otherwise, it seemed natural that Falliere would be drawn to technology. But his early influences actually veered toward the mechanical. His father was an automobile mechanic who owned and operated his own garage. Falliere’s introduction to computers in high school, however, led him in a different direction—to study computer science at the National Institute for Applied Sciences in France. The spread of the prolific Code Red worm in 2001, which struck more than 700,000 machines, got him interested in computer security. While still in college, he wrote several security articles for a small French technical magazine, as well as a paper for SecurityFocus, a security website that Symantec owned.1 In late 2005, while finishing his master’s program in computer science, he was told he needed a six-month internship to complete the degree. So he reached out to his contacts at SecurityFocus, who referred him to Chien. The timing couldn’t have been more fortunate. Symantec was still in the midst of its Dublin hiring spree, and Chien was desperate to find experienced reverse-engineers. He told Falliere that rather than a six-month internship at Symantec he could offer him a full-time job instead. “How much do you want to make?” he asked Falliere.
“I don’t need any money,” Falliere told him. “Just an internship.”
“Are you crazy?” Chien replied. “I’ll send you an offer in an e-mail. Just accept it.”
A few weeks later, Falliere was settled in Dublin. He adjusted to his new life fairly quickly, but after two years of constant plane rides back to France to see his girlfriend, he asked for a transfer to Paris, where Symantec had a sales and marketing office. He turned out to be the only technical person in the office, which left him feeling isolated at times, but also helped focus him on his work.
His desk, in an office shared with two colleagues, was an orchestrated mess of technical papers and books scattered around a test machine he used to run malware and a laptop containing the debugger software he used to analyze code. The only personal item on the desk was a cylinder-shaped Rubik’s puzzle, which he fingered like worry beads whenever he butted up against an unwieldy patch of code that resisted cracking.
Though Falliere was a whiz at reverse-engineering, he was actually doing very little of it when Stuxnet came along. Over time, he’d become Symantec’s de facto tool guy, whipping together programs to make deciphering malware more efficient for other analysts. The job had snuck up on him. He began by tweaking forensic tools for himself that he found clunky and inefficient, then began doing it for colleagues as well, even creating new tools after they began submitting requests. Eventually, he was spending more time working on tools than deciphering code. He jumped on the occasional malware threat only if Chien made a special request, which he did in the case of Stuxnet.
FALLIERE BEGAN HIS analysis of the payload by studying the Siemens Step 7 software. Step 7, the software Stuxnet attacked, was Siemens’s proprietary application for programming its S7 line of PLCs. It ran on top of the Windows operating system and allowed programmers to write and compile commands, or blocks of code, for the company’s PLCs. The system wasn’t complete without the Simatic WinCC program, a visualization tool used for monitoring the PLCs and the processes they controlled. PLCs, connected to monitoring stations via a facility’s production network, were in a constant state of chatter with those machines, sending frequent status reports and updates to give operators a real-time view of whatever equipment and operations the PLC controlled. The Siemens .DLL was central to both the Step 7 and WinCC programs, serving as the middleman that passed commands to the PLCs and received status reports from them. That’s where Stuxnet’s rogue .DLL came in. It did everything the real .DLL was designed to do, and more.
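The arrangement is easiest to picture as a single chokepoint that every command and status report passes through. The following is a toy sketch in Python; the class and method names are invented for illustration and are not Siemens’s actual interface, just the shape of it:

```python
# Toy model of the middleman role described above -- invented names,
# not the real Siemens API.

class PLC:
    """Stand-in for an S7 controller's block storage."""
    def __init__(self):
        self.blocks = {}

class CommLibrary:
    """Plays the role of the Siemens .DLL: the single middleman
    through which all Step 7 and WinCC traffic to the PLC passes."""
    def __init__(self, plc: PLC):
        self.plc = plc

    def write_block(self, name, code):
        # Step 7 compiles blocks of commands and uploads them through here.
        self.plc.blocks[name] = code

    def read_block(self, name):
        # WinCC's monitoring views (and troubleshooting engineers)
        # fetch blocks and status back through the same chokepoint.
        return self.plc.blocks[name]
```

Anything that controls this chokepoint controls everything the operators send to, or hear back from, the PLC, which is exactly what the rogue .DLL exploited.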
To understand how the doppelgänger .DLL worked, Falliere had to first understand how the Step 7 system and the legitimate .DLL worked. He searched online for experts to consult, and even thought about reaching out to Siemens for help, but he didn’t know whom to call there. The Step 7 .DLL was just one in a galaxy of .DLLs the Siemens software used, and locating the two or three programmers who knew the code well enough to help would take as long as figuring it out on his own. And in the end, there was a certain amount of pride to be had in cracking it himself.
To reverse the .DLL files—the original and the doppelgänger—Falliere opened them in a disassembler, a tool designed to translate binary code into assembly language, one step removed from the raw machine code. The disassembler allowed him to add notations and comments to the code or rearrange sections to make it easier to read. He worked on small bits of code at a time, labeling each with a description of the function it performed as he went along.
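What a disassembler does can be shown in miniature. The snippet below uses the open-source Capstone engine to turn a few raw x86 bytes into readable assembly; it illustrates the technique, and is not a claim about the particular tool Falliere used:

```python
# Minimal illustration of disassembly using the open-source Capstone
# engine (pip install capstone).
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

# A few raw x86 bytes: push ebp; mov ebp, esp; xor eax, eax; ret
machine_code = b"\x55\x89\xe5\x31\xc0\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(machine_code, 0x1000):
    # The analyst reads and annotates lines like these instead of raw bytes.
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```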
As researchers typically did when examining complex malware like this, Falliere combined static analysis (viewing the code on-screen in a disassembler/debugger) with dynamic analysis (observing it in action on a test system, using the debugger to stop and start the action so he could match specific parts of the code with the effect it was having on the test machine). The process could be excruciatingly slow under the best of circumstances, since it required jumping back and forth between the two machines, but it was all the more difficult with Stuxnet due to its size and complexity.
It took two weeks of documenting every action the .DLL took before Falliere finally confirmed what he’d suspected all along: Stuxnet was kidnapping the Siemens .DLL and putting the doppelgänger in its place to hijack the system. It did this by changing the name of the Siemens .DLL from s7otbxdx.DLL to s7otbxsx.DLL and installing the rogue .DLL with the original’s name in its place, essentially stealing its identity. Then when the system called up the Siemens .DLL to perform any action, the malicious .DLL answered instead.
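Mechanically, the identity theft amounts to two file operations performed in order. A hedged sketch of that logic follows; the filenames are the real ones, but the installation path is an assumption, and Stuxnet itself did the equivalent through native Windows code rather than Python:

```python
# Illustrative sketch of the swap described above -- not Stuxnet's
# actual code. The directory path is assumed for illustration.
import os
import shutil

SIEMENS_DIR = r"C:\Program Files\Siemens\Step7\S7BIN"  # assumed path
REAL_DLL = os.path.join(SIEMENS_DIR, "s7otbxdx.dll")
RENAMED = os.path.join(SIEMENS_DIR, "s7otbxsx.dll")

def hijack(rogue_dll_path):
    # Step 1: shove the legitimate library aside under a new name,
    # so the imposter can still call into it for legitimate work.
    os.rename(REAL_DLL, RENAMED)
    # Step 2: install the rogue library under the original name.
    # From now on, every call meant for the real DLL lands here instead.
    shutil.copy(rogue_dll_path, REAL_DLL)
```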
Once the rogue .DLL was in place, what it did was quite remarkable.
Whenever an engineer tried to send commands to a PLC, Stuxnet made sure its own malicious command code got sent and executed instead. But it didn’t just overwrite the original commands in a simple swap. Stuxnet increased the size of the code block and slipped its malicious code in at the front end. Then to make sure its malicious commands got activated instead of the legitimate ones, Stuxnet also hooked a core block of code on the PLC that was responsible for reading and executing commands. A lot of knowledge and skill were required to inject the code seamlessly in this way without “bricking” the PLCs (that is, causing them to seize up or become nonfunctional), but the attackers pulled it off beautifully.
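In outline, the injection works like this. The sketch below uses a simplified stand-in for the real S7 block format; the choice of OB1 as the hooked block reflects how real S7 controllers execute their main program cycle, an assumption of this sketch rather than a detail stated in the text:

```python
# Conceptual sketch of the injection described above. Blocks are
# modeled as simple lists of commands, not the real S7 format.

def infect_block(original, payload):
    # The block grows: malicious commands are prepended so they run
    # first, and the legitimate logic is kept intact behind them --
    # which is what keeps the PLC functional instead of bricked.
    return payload + original

def hook_executor(blocks, payload):
    # The controller's main cyclic block (OB1 on real S7 hardware)
    # reads and executes the program on every scan; infecting it
    # guarantees the payload gets control.
    blocks["OB1"] = infect_block(blocks["OB1"], payload)
```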
The second part of the attack was even more ingenious. Before Stuxnet’s malicious commands went into action, the malware sat patiently on the PLC for about two weeks, sometimes longer, recording legitimate operations as the controller sent status reports back to monitoring stations. Then when Stuxnet’s malicious commands leapt into action, the malware replayed the recorded data back to operators to blind them to anything amiss on the machines—like a Hollywood heist film where the thieves insert a looped video clip into surveillance camera feeds. While Stuxnet sabotaged the PLC, it also disabled automated digital alarms to prevent safety systems from kicking in and halting whatever process the PLC was controlling if it sensed the equipment was entering a danger zone. Stuxnet did this by altering a block of code known as OB35 that was part of the PLC’s safety system. The block was used to monitor critical operations, such as the speed of a turbine the PLC was controlling. It was executed by the PLC every 100 milliseconds so that safety systems could kick in quickly if a turbine began spinning out of control or something else went wrong, allowing the system or an operator to set off a kill switch and initiate a shutdown. But with Stuxnet modifying the data the safety system relied on, the system was blind to dangerous conditions and never had a chance to act.2
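The record-and-replay trick reduces to a simple loop. The sketch below is a toy model with invented method names on a hypothetical `plc` object, not Stuxnet’s implementation:

```python
# Toy sketch of the record-and-replay trick -- every name here is
# invented; the real implementation lived in native PLC and Windows code.
import itertools

def record_normal_operation(status_reports, samples):
    # Phase 1: lie low for roughly two weeks, logging what "normal"
    # looks like as the controller reports back to monitoring stations.
    return [next(status_reports) for _ in range(samples)]

def run_attack(plc, recorded):
    tape = itertools.cycle(recorded)  # loop the tape, heist-movie style
    while plc.sabotage_active:
        plc.execute_malicious_commands()     # the equipment actually misbehaves
        plc.report_to_operators(next(tape))  # operators see stale, normal data
        plc.feed_safety_system(next(tape))   # the safety logic's inputs look
                                             # normal too, so no alarm fires
```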
The attack didn’t stop there, however. If programmers noticed something amiss with a turbine or other equipment controlled by the PLC and tried to view the command blocks on the PLC to see if it had been misprogrammed, Stuxnet intervened and prevented them from seeing the rogue code. It did this by intercepting any requests to read the code blocks on the PLC and serving up sanitized versions of them instead, minus the malicious commands. If a troubleshooting engineer tried to reprogram the device by overwriting old blocks of code on the PLC with new ones, Stuxnet intervened and infected the new code with its malicious commands too. A programmer could reprogram the PLC a hundred times, and Stuxnet would swap out the clean code for its modified commands every time.
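Both behaviors, sanitized reads and re-infected writes, can be captured in a few lines. Again a toy model, with invented names:

```python
# Toy model of the rootkit behavior described above -- sanitized reads,
# re-infected writes. All names are invented for illustration.

PAYLOAD = ["<malicious command>"]  # stand-in for the injected code

class RogueCommLibrary:
    def __init__(self):
        self.blocks = {}  # what is actually stored on the PLC

    def write_block(self, name, code):
        # Every write is re-infected: reprogram the PLC a hundred
        # times and the payload rides back in a hundred times.
        self.blocks[name] = PAYLOAD + list(code)

    def read_block(self, name):
        # Every read is sanitized: an engineer inspecting the block
        # sees only the clean code, minus the malicious prefix.
        stored = self.blocks[name]
        if stored[: len(PAYLOAD)] == PAYLOAD:
            return stored[len(PAYLOAD):]
        return stored
```

In this toy model, an engineer who writes clean code and immediately reads it back sees exactly what he wrote, while the copy actually stored on the PLC stays infected, which is precisely the hall of mirrors the text describes.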
Falliere was stunned by the attack’s complexity—and by what it implied. It was suddenly clear that Stuxnet wasn’t trying to siphon data out of the PLC to spy on its operations, as everyone had originally believed. The fact that it was injecting commands into the PLC and trying to hide that it was doing so while at the same time disabling alarms was evidence that it was designed not for espionage but for sabotage.
But this wasn’t a simple denial-of-service attack either. The attackers weren’t trying to sabotage the PLC by shutting it down—the PLC remained fully functional throughout the attack—they were trying to physically destroy whatever device or process sat on the other end of it. It was the first time Falliere had seen digital code used not to alter or steal data but to physically damage or destroy something in the real world.
It was a plot straight out of a Hollywood blockbuster. A Bruce Willis blockbuster, to be exact. Three years earlier, Live Free or Die Hard had imagined just such a destructive scenario, albeit with the typical Hollywood flair for bluster and creative license. In the film, a group of cyberterrorists, led by a disgruntled former government worker, launch coordinated cyberattacks to cripple the stock market, transportation networks, and power grids, all to distract authorities from their real aim—siphoning millions of dollars from government coffers. Chaos ensues, along with the requisite Die Hard explosions.
But Hollywood scenarios like this had long been dismissed by computer security pros as pure fantasy. A hacker might shut down a critical system or two, but blow something up? It seemed improbable. Even most of the explosions in Die Hard owed more to physical attacks than to cyber ones. Yet here was evidence in Stuxnet that such a scenario might be possible. It was leaps and bounds beyond anything Falliere had seen before or had expected to find in this code.
For all of its size and success, Symantec was in the end just a nerdy company, in the business of protecting customers. For fifteen years the adversaries they had battled were joy-riding hackers and cybercriminals or, more recently, nation-state spies hunting corporate and government secrets. All of them were formidable opponents to varying degrees, but none were bent on causing physical destruction. Over the years, malware had gone through a gradual evolution. In the early days, the motivations of malware writers were pretty much uniform: though some programs were more disruptive than others, the primary goal of virus writers in the 1990s was glory and fame, and a typical virus payload included shout-outs to the hacker’s slacker friends. Things changed as e-commerce took hold and hacking grew into a criminal enterprise. The goal was no longer to gain attention but to remain stealthy in a system for as long as possible to steal credit card numbers and bank account credentials. More recently, hacking had evolved into a high-stakes espionage game in which nation-state spies drilled deep into networks and remained there for months or years, silently siphoning national secrets and other sensitive data.
But Stuxnet went far beyond any of these. It wasn’t an evolution in malware but a revolution. Everything Falliere and his colleagues had examined before, even the biggest threats that targeted credit card processors and Defense Department secrets, seemed minor in comparison. Stuxnet thrust them into an entirely new battlefield where the stakes were much higher than anything they had dealt with before.
There had long been a story floating around suggesting that something like this had happened before, though the tale has never been substantiated. According to the story, in 1982 the CIA hatched a plot to install a logic bomb in software controlling a Soviet gas pipeline in Siberia in order to sabotage it. When the code kicked in, it caused the valves on the pipeline to malfunction. The result was an explosive fireball so fierce and large that it was caught by orbiting satellites.3
Back in Culver City, Chien wondered if there had been unexplained explosions in Iran that could be attributed to Stuxnet. When he searched the news reports, he was startled to find a number of them that had occurred in recent weeks.4 Toward the end of July, a pipeline carrying natural gas from Iran to Turkey had exploded outside the Turkish town of Dogubayazit, several miles from the Iranian border. The blast, which shattered windows of nearby buildings, left a raging blaze that took hours to extinguish.5
Another explosion occurred outside the Iranian city of Tabriz, where a 1,600-mile-long pipeline delivered gas from Iran to Ankara. A third ripped through a state-run petrochemical plant on Kharg Island in the Persian Gulf, killing four people.6 Weeks later, a fourth gas explosion struck the Pardis petrochemical plant in Asalouyeh, killing five people and injuring three.7 It came just a week after Iranian president Mahmoud Ahmadinejad had visited the plant.
The explosions didn’t all go unexplained. Kurdish rebels claimed responsibility for the ones at Dogubayazit and Tabriz, and the Iranian news agency, IRNA, attributed the Kharg Island fire to high-pressure buildup in a central boiler.8 The explosion at Pardis was blamed on a leak of ethane that ignited after workers began welding a pipeline. But what if one or more of the explosions had actually been caused by Stuxnet? Chien wondered.
This was much more than anyone on the team had bargained for when they first began deconstructing Stuxnet weeks earlier. If Stuxnet was doing what Chien and his colleagues thought it was doing, then this was the first documented case of cyberwarfare.
Chien, O’Murchu, and Falliere convened on the phone to discuss their options. They still didn’t know what exactly Stuxnet was doing to the PLC or even the identity of its target, but they knew they had to reveal what they’d learned about its payload so far. So on August 17, 2010, they went public with the news that Stuxnet wasn’t an espionage tool as everyone had believed but a digital weapon designed for sabotage. “Previously, we reported that Stuxnet can steal code … and also hide itself using a classic Windows rootkit,” Falliere wrote in his typical understated tone, “but unfortunately it can also do much more.”9
To illustrate Stuxnet’s destructive capability, they referenced the 1982 attack on the Siberian pipeline. Their words had been carefully vetted by the company’s PR team, but there was no denying the shocking nature of what they implied. As soon as the post went public, they waited on edge for the community’s response. But instead of the dramatic reaction they expected, all they got in return was, in Chien’s words, “silence like crickets.”
Chien was confused by the lack of response. After all, they were talking about digital code that was capable of blowing things up. They had assumed, at the very least, that once they published their findings, other researchers would publish their own research on Stuxnet. That was the way malware research worked—whenever new attack code was uncovered, teams of competing researchers at different firms worked to decipher the code simultaneously, each one racing to be the first to publish their results. As soon as one team published, the others quickly weighed in to deliver their own findings. If multiple groups arrived at the same results, the duplicate work served as an informal peer-review process to validate all of their findings. The silence that greeted their post about Stuxnet, then, was unusual and disconcerting—Chien began to wonder if they were the only team examining the payload or if anyone else even cared about it.
For a brief moment, he questioned their decision to devote so much time to the code. Had everyone else seen something that made them dismiss it as insignificant, something that Chien and his team had completely missed? But then he reviewed everything they had discovered in the past few weeks. There was no possible way they could have been wrong about the code, he concluded—either about Stuxnet’s importance or its aggressive intentions.
As for continuing their research, there was no question anymore that they had to press on. If anything, their work on the code seemed more urgent than before. They had just announced to the world that Stuxnet was a digital weapon designed for physical destruction. But they still hadn’t identified the malware’s target. Having made a public declaration about the code’s destructive aim, they worried that the attackers might suddenly feel pressure to accelerate the mission and destroy their target. That is, if they hadn’t already done so.
And apparently, they weren’t the only ones concerned about the possibility of things blowing up. Five days after they published their announcement, the steady stream of traffic still coming into their sinkhole from Stuxnet-infected machines in Iran suddenly went dark. Someone in the Islamic Republic had evidently taken note of the news and, to prevent the attackers or anyone else from remotely accessing the infected machines and doing damage, had given the order to sever all outbound connections from machines in the country to Stuxnet’s two command-and-control domains.