As Anonymous began to share the media spotlight surrounding Cablegate, Aaron Barr became increasingly preoccupied with the group. It represented a tempting case study for the kind of analysis he hoped to validate: Although Anons fiercely guarded their true names, they openly congregated and planned their actions in online chat rooms and crowd-sourced documents using pseudonyms. Despite all its proxies and masks, perhaps the entire social graph of Anonymous could be infiltrated and charted.
Barr had been planning to give a talk at the BSides security conference in San Francisco in February, in which he’d use clues drawn from a web of online relationships to reveal human flaws in the security of a nuclear facility in Pennsylvania and the army intelligence group INSCOM. He had titled the talk “Who Needs the NSA When We Have Social Media?”
But by January, Barr was determined to make a bigger splash than the usual slide show of security vulnerabilities could generate. He needed one that would get HBGary Federal into the headlines and flush out business leads. So he added a third target.
“I am going to focus on outing the major players of the Anonymous group, I think,” he wrote to two other staffers at HBGary Federal in January 2011.
“After all—no secrets, right? :)”
In early 2011, I contacted each of the four dozen companies that had publicly signed up to submit proposals to DARPA’s open casting call for antileak technologies. And I soon got a taste of the immensely tedious task that Peiter “Mudge” Zatko faces.
About two-thirds of the companies, a crowd of generic-sounding contractors with names like Securonix, IntelliGenesis, Trustifier, IT Solutions Partners, and Applied Visions, didn’t respond or declined to talk. Of those that did give me a description of their ideas, deciphering anything unique about their approaches involved wading into bland white-paper-speak filled with phrases like a “solution [that] automatically adjusts weights assigned to user and/or peer group behavior models based on the life cycle of the user” and “a risk-based analysis [of] an abstraction matrix along with a decision model that conforms to a government-, department-, or office-information policy.”
Perhaps the most candid response came from a firm called Teledyne that had already bowed out of the program. Mark Anderson, a director of information sciences at the company, complained that without access to past investigations, companies like his had little hope of guessing at a workable way of combing through data for culprits. And he pointed out that the request for proposals seemed to focus on online activities, but ignored the Luddite end of the leaking spectrum. “How would one detect a bad guy exfiltrating data on a memory device using a low tech technique (like swallowing it)?” Anderson wrote to me. “Don’t get me wrong, it’s a really important problem. I am just skeptical that we can employ cyber techniques instead of old-school human detective techniques.”
When I called Alan Paller, the avuncular research director of the cybersecurity education organization the SANS Institute, he began our conversation with a gloomy line. “I prefer to focus on the problems that have solutions.”
When I pressed him, Paller admitted that there is indeed a solution to the problem of leaks. “Lock it all up, and don’t let anyone read anything. But that flies in the face of everything we know about organizational effectiveness.”
In fact, the cybersecurity industry has tried a more practical version of Paller’s fix before. Beginning in 2007, practically every major software vendor from McAfee to Symantec to Trend Micro spent hundreds of millions of dollars acquiring companies in the so-called Data-Leak Prevention (DLP) industry—software designed to locate and tag sensitive information on a firm’s servers, and then guard against its departure at the edges of the network. “Data-centric security” briefly became a buzz phrase full of promise as companies realized that antivirus and firewalls weren’t enough to cure their information ailments. Even in 2010, a study by Forrester Research showed that about a quarter of firms in the United States, UK, Canada, France, and Germany were implementing leak-focused software, and another third were considering the option.
Unfortunately, Data-Leak Prevention never quite worked. In modern companies and agencies, where the will to let employees “connect the dots” between data points has defeated any impulse to wall off various parts of the network, information is simply created too quickly and moved around too often for a mere filter to catch it. Even after the DLP acquisition craze, insider data theft kept flowing: A study in 2009 by the privacy-focused Ponemon Institute found that about 60 percent of employees admit to taking sensitive data before leaving a job.
One reason the bleeding hasn’t stopped, particularly in the public sector: Since the 9/11 Commission determined that a lack of data sharing between intelligence agencies blinded the government’s counterterrorism efforts, Uncle Sam has kept his focus on stopping the next terror attack rather than preventing the next leak.
In the wake of Cablegate, the White House would issue an order to more closely restrict and monitor who had access to classified materials, and the armed forces would establish new rules about how physical media like CDs could be used on SIPRNet machines. But the Senate’s first official reaction to the scandal (after Senator Joseph Lieberman’s suggestion of a new law that would make revealing an intelligence source a federal crime) was to hold a hearing not on how to better restrict information, but on how to make sure it was not restricted. Lieberman’s introduction to the hearing began with references to the World Trade Center attacks and the improvements in intelligence in the nearly ten years since.
“Now I fear the WikiLeaks case has become a rallying cry for an overreaction, for those who would take us back to the days before 9/11 when information was considered the property of the agency that developed it and was not to be shared,” Lieberman said. “The bulk of the information illegally taken and given to WikiLeaks would not have been available had that information not been on a shared system, some argue. But to me this is putting an ax to a problem that requires a scalpel.”
In other words, some in Washington refuse to fall into Assange’s trap: The WikiLeaks founder predicted that leaks would halt communications within conspiratorial institutions and paralyze their ability to conspire. But the government, perhaps wisely, would rather let the data leak than stop its flow.
All that information sharing makes Data-Leak Prevention tough to put into practice. How to seal an agency’s edges when they’re meant to be porous? So the cybersecurity industry has evolved instead to embrace another tactic: network forensics, the process of constantly collecting every fingerprint on a company’s servers to trace an intruder or leaker after the fact—capturing not simply the moment of the leak, but the entire story of the leaker’s behavior in the days or even months before and after.
NetWitness, one prominent start-up in that budding field, saw its revenue grow seventy-eight-fold from 2007 to 2010, for instance, before it was acquired by the software giant EMC. But even NetWitness’s software generally gathers information about network activities and makes it easily available for querying—it doesn’t explain it. “There’s nothing in current technology that can do this in an automated fashion,” says Shawn Carpenter, principal forensic analyst at the company. “You need a Columbo.”
Since DARPA tapped Mudge in early 2010, he has aimed to build that leak-sniffing robo-Columbo. Though his role is mainly to function as referee in a global contest of ideas, he’s also been reviving the methods he developed years earlier as the chief scientist at an insider-threat-focused security start-up called Intrusic. As he described it to me in our interview, Mudge’s project seeks to identify what he calls “malicious missions”: That means any insider activity aimed at stealing data from inside a company’s firewall, whether it’s carried out by a Dell PC remotely hijacked by a Chinese cyberspy or by a Bradley Manning. His system would monitor networks in real time for just the sort of data-stealing behavior he would have performed himself in his years playing digital offense.
Mudge is intensely aware of the potential for false positives: Given that the CINDER system would have to function on networks used by hundreds of thousands of employees, even a 1 percent error rate could lead to mistaken accusations against thousands of users on a regular basis. “It’s as if we’re trying to come up with a medical test for some kind of super-AIDS,” Zatko says with a cheerful inattention to political correctness. “If you incorrectly report that ten thousand people have super-AIDS, they’re going to have a very bad day at the office.”
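The arithmetic behind that worry is worth spelling out. Here is a back-of-the-envelope sketch—every number in it is invented for illustration, not drawn from DARPA’s program:

```python
# Back-of-the-envelope base-rate arithmetic. All numbers are invented
# for illustration; none come from DARPA or Zatko.

users = 300_000             # hypothetical size of a monitored network
real_leakers = 5            # hypothetical number of true insider threats
false_positive_rate = 0.01  # the "1 percent error rate" scenario
true_positive_rate = 0.90   # assume the detector catches most real leakers

false_alarms = (users - real_leakers) * false_positive_rate
true_hits = real_leakers * true_positive_rate

print(f"false alarms per sweep: {false_alarms:,.0f}")  # ~3,000 innocent users
print(f"real leakers flagged:   {true_hits:.1f}")
# Odds that any one flagged user is actually guilty:
print(f"precision: {true_hits / (true_hits + false_alarms):.3%}")  # ~0.150%
```

In a population that large, in other words, even a sharp-eyed detector would bury its handful of real culprits under thousands of innocent names.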
To cut down those false alarms, no single act would signal a leak; instead, Zatko says his detection system would link acts in a probabilistic chain that would trigger an alert only if it could put together an entire string of events that pointed to purposeful data theft. “You put all these things together into the different components of the mission,” says Mudge. “I’m looking for these new rhythms, new tells, new interrelations and requirements.”
The public request for proposals that Mudge released at CINDER’s launch lists a series of possible actions, the component steps of a malicious mission. First comes reconnaissance, exploring file directories or scanning networks to map their architecture. Then comes analysis of files, searching their contents or reading their metadata, the hidden information that describes the files for the operating system and other applications. Then the leaker would need to gather the files together and prepare them for exfiltration, burning them to a CD, printing them, or encrypting them for transmission. And finally comes the leak itself, the moment when the insider walks out of the building with the physical material in hand, pushes it out by e-mail, or spills it onto the Web.
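In code, that chained, all-or-nothing logic might be caricatured like this—the stage names follow the sequence above, but the event format, confidence scores, and alert threshold are assumptions of mine, not anything in CINDER’s actual design:

```python
# Toy sketch of chaining observations into a "mission" before alerting.
# No single event fires an alarm; the score is a product, so a missing
# stage zeroes out the whole chain. Event format, confidences, and the
# threshold are invented assumptions, not CINDER's design.

MISSION_STAGES = ["reconnaissance", "analysis", "preparation", "exfiltration"]

def mission_score(events):
    """events: (timestamp, stage, confidence) tuples for a single user."""
    best = {stage: 0.0 for stage in MISSION_STAGES}
    for _timestamp, stage, confidence in sorted(events):
        idx = MISSION_STAGES.index(stage)
        # Credit a stage only if the previous stage has already been seen.
        if idx == 0 or best[MISSION_STAGES[idx - 1]] > 0:
            best[stage] = max(best[stage], confidence)
    score = 1.0
    for stage in MISSION_STAGES:
        score *= best[stage]
    return score

events = [
    (1, "reconnaissance", 0.6),  # sweeping file directories
    (2, "analysis", 0.5),        # searching contents and metadata
    (5, "preparation", 0.7),     # burning files to a CD
    (9, "exfiltration", 0.8),    # pushing data out by e-mail
]
if mission_score(events) > 0.1:  # arbitrary alert threshold
    print("alert: event chain consistent with a malicious mission")
```

Strip out any one stage and the product collapses to zero—exactly the property that keeps a lone directory scan from triggering an accusation.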
Even after the initial leak, Mudge argues, the “tells” might continue. He points to the case of Robert Hanssen, a former FBI agent currently serving a life sentence in a Colorado supermax prison for giving intelligence information to the Soviets and then the Russians over two decades. In exchange for more than $1.4 million, Hanssen sold secrets ranging from signals intelligence methods to the fact that the FBI had dug an eavesdropping tunnel under the Soviet embassy in Washington, D.C.
Every few days, Hanssen would stop his normal activities and make a single query to a server across the network, a pattern he repeated for almost ten years. That server, Mudge says, held the counterintelligence database. Hanssen was searching for himself, a routine check to see if he’d finally been found out.
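A pattern that regular is, at least in principle, easy to hunt for. A minimal sketch of such a rhythm detector—the log format and the jitter threshold are my assumptions, not anything Mudge described:

```python
from statistics import mean, stdev

# Toy detector for a Hanssen-style "tell": a user querying one sensitive
# server on a slow, steady rhythm for years. The log format and the
# jitter threshold are assumptions invented for this sketch.

def is_metronomic(query_days, max_jitter=0.25):
    """query_days: sorted days-since-start of each query by one user."""
    if len(query_days) < 10:
        return False  # too few events to call anything a rhythm
    gaps = [b - a for a, b in zip(query_days, query_days[1:])]
    # A low coefficient of variation means suspiciously regular spacing.
    return stdev(gaps) / mean(gaps) < max_jitter

# One query to the same server every four to five days, for years:
queries = [i * 4.5 + (i % 2) * 0.5 for i in range(200)]
print(is_metronomic(queries))  # True
```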
In mid-January 2011, HBGary Federal’s Aaron Barr set about trying to deanonymize Anonymous.
With Mark Trynor, a software developer at the security firm, Barr had built a tool designed to scrape users’ social networking pages and aggregate the data for analysis. Facebook enforces a “real name policy”: no pseudonyms allowed. That meant that if Barr could map Anons’ Facebook identities to the ones they used in the group’s IRC instant messaging forums, he could pin down their real-world identities. “One of the goals is to tie as many of the IRC nicks to FB profiles as possible,” he wrote in a report on his research.
He and Trynor had first used the software to analyze the social media profiles of members of the Colombian insurgent group FARC. But now Barr wanted to apply the same data collection and analysis to the world’s most vindictive group of hacktivists.
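The kind of matching Barr had in mind—correlating when a pseudonym is active with when a real-named account posts—can be caricatured in a few lines of code. Everything below is invented for illustration; it is a guess at the flavor of such a tool, not Barr’s actual software:

```python
# Toy sketch of frequency-based correlation between pseudonymous IRC
# activity and real-named social media posts. All nicks, names, and
# hours are invented; this shows the flavor of such matching, no more.

irc_hours = {            # hours of day when each nick was seen in IRC
    "AnonNick": [22, 23, 23, 0, 1, 22],
}
facebook_hours = {       # hours of day when each profile posted
    "Jane Example": [22, 23, 0, 1],
    "John Example": [8, 9, 12, 13],
}

def overlap_score(nick_hours, profile_hours):
    """Fraction of the nick's active hours that the profile shares."""
    profile_set = set(profile_hours)
    shared = sum(1 for hour in nick_hours if hour in profile_set)
    return shared / len(nick_hours)

for nick, hours in irc_hours.items():
    best = max(facebook_hours,
               key=lambda name: overlap_score(hours, facebook_hours[name]))
    print(nick, "-> best candidate:", best)  # AnonNick -> Jane Example
```

With samples this thin, of course, the “best candidate” is close to meaningless—a statistical weakness Trynor would soon seize on.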
This time Barr’s coder balked. And the two entered into a back-and-forth debate that dragged on for much of that chilly January afternoon.
“Every time you use this it just erodes the American sense of personal liberty for what you believe is security,” Trynor wrote to Barr in an e-mail.
“We don’t have liberty or security . . . so what is the point,” Barr responded.
“Jefferson would be proud of you,” the developer shot back, citing Barr’s favorite president.
“Jefferson was an idealist that lived in a very different time.”
“What’s wrong with striving to reach idealist principles?”
“Nothing is wrong with it. But doing it recklessly is as bad as those wanting to squash ideals. The unions started with a good idea and then got corrupted because power does that to everyone,” Barr wrote, referencing HBGary Federal’s earlier work for the Chamber of Commerce. “With WikiLeaks and Anonymous they corrupted faster. I believed in what WikiLeaks did when they released the helicopter video. I now believe they are a menace.”
Trynor doggedly refused to cede the argument to the CEO. “What does it take for evil men to rise to power again?” he asked, paraphrasing the eighteenth-century Irish-born statesman Edmund Burke in defense of the hacktivists.
“But dude, who’s evil?” wrote Barr. “US Gov? WikiLeaks? Anonymous?
“Its all about power. The WikiLeaks and Anonymous guys think they are doing the people justice by, without much investigation or education, exposing information or targeting organizations? BS. It’s about trying to take power from others and give it to themselves.
“I follow one law. Mine.”
In fact, Barr’s research was testing the boundaries of morality beyond merely harvesting data scraps from Facebook pages. He had also created a false persona that he used in chat rooms and social networks to infiltrate Anonymous’ ranks and gain the hackers’ trust: He called himself Julian Goodspeak on Facebook and CogAnon when participating in Anonymous’ IRC conversations.
Within days he had identified what he believed were the three “leaders” of Anonymous, who went by the pseudonyms CommanderX, Q, and Owen. (In fact, that trio influenced only a small fraction of Anonymous’ activities—the largely anarchic movement had countless subgroups.) In total, Barr prepared a list of a hundred names of Anonymous participants around the world. He believed that CommanderX, for instance, was a Californian named Benjamin Spock de Vries.
But as Barr dug deeper, his coding assistant began to raise questions about more than the morality of their work. He also started to question Barr’s judgment and to cast doubt on the results of the social media research.
“You keep assuming you’re right,” warned Trynor. “And basing that assumption off of guilt by association.”
“Noooo,” wrote Barr. “It’s about probability based on frequency. . . . C’mon you’re way smarter at math than me.”
“Right, which is why I know your numbers are too small to draw [this] conclusion, but you don’t want to accept it,” Trynor repeated. “Your probability based on frequency right now is a gut feeling. Gut feelings are usually wrong.”
“Dude, I don’t just go by gut feeling. . . . I spend hours doing analysis and come to conclusions that I know can be automated . . . so put the taco down and get to work!”
“I’m not doubting that you’re doing analysis. I’m doubting that statistically that analysis has any mathematical weight to back it. I put it at less than .1% chance that it’s right . . . mmmm . . . taco!”
Barr pushed ahead. He was confident enough in his findings that he began to tout them to John Woods, his contact at Hunton & Williams, in the hope of pushing the law firm to move ahead with the two projects it was dangling in front of HBGary Federal’s nose. Woods referred Barr to Booz Allen, writing that the defense contractor would likely be interested. Emboldened, Barr contacted the Financial Times and a slew of government agencies, including the FBI and the Office of the Director of National Intelligence.
But internally, HBGary Federal’s staff was doubting the wisdom of Barr’s brazenness. “He’s on a bad path,” wrote Trynor to HBGary Federal president Ted Vera. “He’s talking about his analytics and that he can prove things statistically, but he hasn’t proven anything mathematically, nor has he had any of his data vetted for accuracy, yet he keeps briefing people and giving interviews. . . . I feel his arrogance is catching up to him again and that has never ended well . . . for any of us.”
“Yeah, my spider senses are tingling too,” wrote Vera.
In an e-mail chain among HBGary and HBGary Federal execs on the eve of Barr’s meeting with the FBI, they debated the wisdom of releasing Barr’s full data set to the public.
“Danger, danger Will Robinson,” wrote Vera. “You could end up accusing a wrong person. Or you could further enrage the group.”
HBGary’s founder, the well-known security researcher Greg Hoglund, on the other hand, believed Barr should go ahead full-bore with his outing of Anonymous. “Jesus man, these people are not your friends, they are three steps away from being terrorists,” he wrote. “Just blow the balls off of it!”
The Financial Times published a story about Barr’s research with the headline “Net Closing Around Cyber Activists.” Barr sent a link to the story to his Booz Allen contact and to Hunton & Williams’s Woods, who wished him luck with the meetings with federal agencies.
HBGary’s own Greg Hoglund e-mailed Barr a congratulatory note. Its subject line read, “You are the dark star,” apparently a reference to the Death Star from the Star Wars films. And he quoted the evil emperor from those movies: “Oh, I’m afraid the deflector shield will be quite operational when your friends arrive. . . .”
If one arbitrary fact of reality separates the fates of the two hackers Peiter Zatko and Julian Assange, it’s that Zatko, unlike his Australian counterpart, was lucky enough to have violated the Internet’s commandments largely before they were written.