by Shane Harris
This wasn’t an abstract process. It was physical. This was how computers retrieved information from databases. Information was stored on a physical medium—a disk. That disk was no different from the bookshelves. Each piece of information resided in a specific location on the disk. And when the computer wanted to find it, the disk had to spin to just the right point, and another mechanism had to retrieve it so the user could see it. The process was essentially no different for an NSA analyst than for an ordinary home computer user calling up a document from his word processor. That document was stored on a hard drive, which was just another disk. Behind his simple point and click, a set of electrical and mechanical steps played out. People saw something similar when they played a jukebox. They fed the machine a dime and selected a song, then watched the gears spin as a metal arm dropped down, pulled the right record out of a rotating stack, grabbed onto it, and then set the record on the turntable.
The computers at the NSA—the giant library, the big jukebox—spent a lot of time and energy fetching and reshelving information. It could take several minutes, if not hours, for analysts to call up all the data they needed. This was where the real-time aspect of terrorist hunting broke down. This was the bottleneck.
But there was another way to store and retrieve information, and it was also well known to anyone who’d ever used a computer. A word processor, an e-mail in-box, a Web browser, any application used to perform work, was made possible by something called random access memory. RAM was a storage system too, but it was nothing like a library. It didn’t rely on the physical sequence of moving parts to deal with data.
RAM, despite what its name might suggest, wasn't organized at random. “Random access” meant that any piece of information could be reached directly, in the same instant, no matter where it sat. Once something was put into RAM, it was simply there, available for the taking. It was as if all the books in the library had been put in one big room instead of being dispersed to far-flung stacks. And unlike the disk-based librarians, with their inviolate sequences, the librarians in RAM were all on speed, able to grab any book in an instant. If disk world was like a jukebox, RAM was an iPod.
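The gap between the two librarians can be felt in a few lines of code. This is only a toy illustration, with made-up record names: one lookup walks the "shelves" in order, the way a disk arm traverses a platter, while the other uses an index that points straight at the record, the way RAM answers any address at once.

```python
import timeit

# A hypothetical 'library': 100,000 records, each an (id, title) pair.
records = [(i, f"book-{i}") for i in range(100_000)]

# Disk-style retrieval: walk the shelves in order until the record turns up.
def fetch_sequential(target):
    for rid, title in records:
        if rid == target:
            return title

# RAM-style retrieval: an index points straight at the record.
index = dict(records)
def fetch_random(target):
    return index[target]

# Ask each librarian for the last book on the shelf, 100 times.
seq = timeit.timeit(lambda: fetch_sequential(99_999), number=100)
ram = timeit.timeit(lambda: fetch_random(99_999), number=100)
print(f"sequential: {seq:.4f}s  direct: {ram:.4f}s")
```

Both calls return the same title; only the route differs. On any ordinary machine the direct lookup finishes orders of magnitude sooner, though the exact figures vary with hardware.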
But RAM had its problems. For starters, it was highly “unstable,” from an engineering perspective. RAM could not save information without a constant source of electricity. Turn off the computer, or lose power, and anything stored in RAM was lost. This was why home computers and databases alike used disk-based storage; it retained information regardless of whether the machine was on or not. RAM was also more expensive than disk memory. Its market price could fluctuate wildly, depending on global demand. That was one reason why computers mostly used RAM to run applications—word processors, Web browsers, or the tools that intelligence analysts used to work on stored information. Its direct-access structure gave these memory-intensive programs the running room they needed to operate smoothly, without having to rely on those pesky, slow librarians. RAM was not a permanent storage mechanism.
But the NSA thought it could be. From the agency’s perspective, RAM’s instability and cost were surmountable obstacles. Money was no barrier for an agency with a multibillion-dollar annual budget. And as for the power supply, the 350-acre Fort Meade campus was the single largest customer of Baltimore Gas & Electric, consuming the same amount of electricity as the city of Annapolis. Back in the late 1990s officials had started to worry about whether the power would run out. But in the heat of the global terrorist hunt, the issue slipped down the priority list. As far as the agency was concerned, electricity constraints were not going to stop technological progress.
And so, beginning in 2004, the NSA began a shift toward “in-memory” databases that were built entirely with RAM. The agency would place oceans of telecom data into these new systems and hope at last to have their real-time terrorist tracker. It was an unprecedented move for such a large organization. It was extravagantly expensive. And the agency protected its new approach like a national secret.
The NSA had found its breakthrough.
Though they were expensive and unstable, in-memory databases had one undeniable advantage over their disk-based cousins—speed. And that was just what the NSA wanted.
In-memory databases were experimental, contemplated mostly within the cloisters of computer geekdom. But early on, engineers could see their promise. In 2001 a group of database builders in Washington State decided to test the speed of a disk-based database and one built entirely in memory. Each system was told to retrieve and store thirty thousand individual records, a straightforward and simple computing task. It was a mere sliver of the NSA’s workload, but the test yielded staggering results.
While the traditional machine took sixteen seconds to retrieve, or “read,” the records, the in-memory version did it in one second. But more stunning, it took the traditional computer almost one hour to store, or “write,” the records onto its disks. The in-memory machine stored the information in 2.5 seconds.
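The flavor of that 2001 test can be reproduced today with an ordinary database engine. The sketch below is an assumption-laden stand-in, not the original experiment: it uses SQLite, which can run either against a file on disk or entirely in memory, and stores the same thirty thousand records each way. Absolute times on modern SSDs will be far closer than the historical figures, but the in-memory copy still wins.

```python
import os
import sqlite3
import tempfile
import time

def bench(conn, n=30_000):
    """Write n records into a fresh table, then read them all back."""
    conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO records VALUES (?, ?)",
                     ((i, f"record-{i}") for i in range(n)))
    conn.commit()
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    rows = conn.execute("SELECT payload FROM records").fetchall()
    read_s = time.perf_counter() - t0
    assert len(rows) == n
    return write_s, read_s

# In-memory database: everything lives in RAM, gone when the process exits.
mem = sqlite3.connect(":memory:")
mem_write, mem_read = bench(mem)

# Disk-backed database: commits must survive a power cut.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
disk = sqlite3.connect(path)
disk_write, disk_read = bench(disk)

print(f"in-memory: write {mem_write:.3f}s  read {mem_read:.3f}s")
print(f"on-disk:   write {disk_write:.3f}s  read {disk_read:.3f}s")
```

The asymmetry the 2001 testers saw — writes suffering far more than reads — follows from the mechanics: a read can be served from whatever the disk happens to have cached, but a durable write has to reach the physical medium.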
In-memory databases were the NSA’s best shot at real-time analysis. So how to build the system? Simple enough. Just construct a computer with lots and lots and lots of RAM. Or harness together many computers with the same attributes.
Simple in theory. Even the most opaque computer engineers didn’t mince words about what this amalgamation of hardware would look like. Huge. A supercomputer in Maryland, which comprised more than a thousand linked machines and was used by the National Weather Service to predict hurricane paths, took up seven thousand square feet of floor space. And that machine wasn’t working in memory. Who could say how much real estate this NSA supereye would need?
Few agencies had the tens or hundreds of millions of dollars required to install giant computer farms in their basement, much less pay the power bill for cooling them. (From its previous experiences with supercomputers and large databases, the NSA engineers knew that a horde of machines in one room generated extraordinary amounts of heat. The agency had to design specially cooled rooms to keep the machines from melting down.)
But the in-memory system had another flaw. One that the BAG and all other terrorist-hunting devices shared.
It lacked what data engineers called a logic layer, a kind of vocabulary that told a computer what the cacophony of phone records and e-mails, words and numbers running through its brain actually meant, and more important, what they meant in relation to one another.
In the human world objects had names, and names had meaning. There was something called a plate. It sat on a table, and a person ate food off it. One could teach a computer to recognize “plate.” It was flat, often white, usually round. Its edges were slightly curved. But how did a computer know that “plate” had a relationship with something called “silverware” that was actually a set of dissimilar-looking objects that for some reason seemed to pop up next to plate all the time, and always in a group? And what was this thing that looked like “plate” but was called “platter”?
Humans understood plates and silverware perfectly well, what they were used for and how they worked together. They knew why a knife wasn’t a spoon, and when a plate was actually a platter. And they understood why those distinctions mattered. But a computer had to be taught all of this. It could not learn on its own. A machine had no experience, no residual memories. So the NSA would have to create them.
Computers needed this human logic layer. Without it the NSA could never achieve the kinds of early-warning insights Hayden had dreamed of, or Poindexter for that matter. The switch to in-memory computing was a legitimate breakthrough. But on its own it could not produce better analysis. The NSA might be able to swallow the ocean. But what good was that if it could never digest it?
Consumer marketers had been grappling with this problem for a generation. In their trade there was an old story, probably apocryphal, that seemed to illustrate the holy grail of insight that the NSA was reaching for. Clerks in a convenience store, the story went, noticed that men often bought beer at the same time they bought diapers. And the clerks noticed that they usually came in at night, alone. They started to wonder whether the men had been sent out by their wives for an emergency diaper run and decided they might as well pick up a six-pack for their trouble.
The store manager went through his receipts and confirmed the clerks’ observations: sales of beer and diapers did indeed rise together after dinner and later in the evening, when men did the shopping.
The store manager had just discovered a logic layer, the connection between two distinct objects. He started stocking diapers next to the beer. Sales of both skyrocketed.
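What the manager did with his receipts is the seed of what marketers now call market-basket analysis: count how often two items land in the same basket. A minimal sketch, using invented receipts, shows the mechanics:

```python
from collections import Counter
from itertools import combinations

# Hypothetical receipts; each set is the items on one sale.
receipts = [
    {"beer", "diapers"}, {"beer", "diapers", "chips"},
    {"milk", "bread"}, {"beer", "diapers"},
    {"diapers", "wipes"}, {"beer", "chips"},
]

# Count every pair of items that appears together on a receipt.
pair_counts = Counter()
for basket in receipts:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# "Support": the share of all receipts containing both items in a pair.
support = {p: c / len(receipts) for p, c in pair_counts.items()}
top = max(support, key=support.get)
print(top, support[top])  # ('beer', 'diapers') 0.5
```

With these made-up numbers, beer and diapers co-occur on half of all receipts, far more than any other pair. The NSA’s ambition was the same calculation at a vastly larger scale, over calls and e-mails instead of six-packs.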
At the most basic level, this was the NSA’s quest. This was the end state of total information awareness. A set of rules, a pattern, that defined human behavior.
The richer the logic layer, the more patterns it could detect. And the more patterns, the more relationships. Did a man arrested for cocaine possession in Los Angeles have any connection to a suspected terrorist recently stopped at the Mexico border crossing? Did a man buying five hundred pounds of fertilizer need it for his gardening business or to build a bomb?
As the NSA forged ahead with in-memory databases its counterterrorism experts, as well as those in other agencies across the government, set out to try to answer these questions. They chased the same elusive dream as Poindexter’s red team.
They had a slim chance of success. How could one account for the variances of human behavior? People were logical creatures—most of the time. But they often behaved illogically, and in ways that confounded explanation. Was there really a model for terrorism like there was for a hurricane, or a cold front, or the sales of beer and diapers? Detecting terrorism wasn’t purely science. It was also an art.
Poindexter knew that. So did his critics. They vilified him for asking whether such a system could work. But at the NSA, Hayden and others were listening. And they quietly followed suit. They picked up where Poindexter had left off.
The result was chilling. Even without a logic layer, the NSA’s technological breakthrough meant the agency could see an entire network, and everything moving on it, in real time. They were one step closer to total information awareness.
CHAPTER 24
EXPOSED
In May 2004, Fran Townsend celebrated her first anniversary at the White House. She’d been the president’s point person on terrorism, but always as a deputy to Condoleezza Rice, whom the president cherished not only as his national security adviser but as a personal friend. That immovable layer had separated Townsend from the commander in chief. But she was about to move up. That spring Bush tapped her as his assistant for counterterrorism and homeland security. She reported directly to him now.
Not long after Townsend moved into her new West Wing office, an NSA employee came to see her, someone she knew from her days working surveillance warrants at Justice. But this wasn’t a social call. It was time to clear Townsend into the program.
Up to now she had only a notion that the NSA was working outside its customary boundaries. The first clue came from her old friend Jim Comey, the deputy attorney general. He approached Townsend at the White House during the crisis over the spying program’s legality. Before a meeting in the Oval Office, which was also attended by Mike Hayden, Comey asked Townsend, who was still a deputy, if she had ever heard of the code name Stellar Wind.
Townsend said no, she hadn’t, which Comey found deeply unsettling. The White House had, indeed, kept the circle tight, so tight that the president’s terrorism adviser sat outside of it. Townsend had never seen her friend so ashen, so worried. She knew that Comey met with the president, but she didn’t know what they discussed.
It became much clearer after the NSA employee brought Townsend into that tiny circle. After she’d been read in, Townsend had only one question about the program: “Has the Department of Justice said it’s legal?”
Yes, the NSA employee replied.
That was good enough for her.
One of Townsend’s new duties was playing intelligence traffic cop. She had to make sure that the information the NSA collected from the surveillance program made its way to the FBI. If the program was legal, then Townsend was less concerned about the details of what the NSA gathered, or how, than about what the agency did with that information. Were the leads getting passed on to the appropriate domestic law enforcement agency for follow-up?
Townsend also understood how important the program was to the intelligence community. That fact would be driven home repeatedly over the course of the next year, and particularly when the new head of the NSA, Lieutenant General Keith Alexander, started showing up at the White House for daily briefings on what the agency was learning about terror networks.
Alexander took over at the agency in August 2005. His experience overseeing the Information Dominance Center and the intelligence command at Fort Belvoir made him a natural choice for the signals job. Alexander had deep technical expertise and long-standing contacts within the community. (He’d also attended those Genoa demos put on by Poindexter years earlier.)
But Alexander also had a contentious relationship with Mike Hayden, the man he was replacing. The two had sparred over how much access Alexander’s analysts, particularly those working in the IDC, should have to raw signals data. Alexander wanted Hayden to bend the pipes toward his people, but Hayden had resisted. Now Alexander was the head plumber.
Hayden had left Fort Meade earlier in the year, moving steadily and considerably up the bureaucratic ladder, as was his way. In April 2005, he became the principal deputy director of national intelligence, the number two man in the community. He was the first person ever to hold the post, which was part of the new Office of the Director of National Intelligence, created in response to the 9/11 Commission, which had recommended a new layer of upper management to corral the restive agencies and force them to cooperate.
Depending on how one chose to view the assignment, Hayden had either been handed the keys to the kingdom or been made to walk the plank. The deputy slot was a political post and required the Senate’s approval. That would mean public hearings in which the dirty laundry of the war on terror might have an airing. Still, the fact that Bush had tapped Hayden, his leader in the digital global terrorist hunt, was a clear signal of the president’s confidence and his approval.
Even if Hayden made it through confirmation unscathed, he’d be taking on a thankless job. As the deputy, he was guaranteed to spend most of his tenure warring over budgets as the new office asserted its dominance. Congress might have slapped the word “director” onto Hayden’s forthcoming title, but that didn’t make it so. The defense secretary still controlled the vast majority of all intelligence dollars, and the new DNI’s office didn’t have the authority to overrule him. Indeed, the new law didn’t give the DNI the necessary authorities to enforce the policies he might choose to implement.
Still, Hayden was getting a promotion and, with it, a huge boost to his military clout: Bush put him up for his fourth star, the highest rank that the service could bestow. That made Hayden not only a full general but the highest-ranking military intelligence officer in all the armed forces. And Hayden could claim one more bragging right: He was the first career Air Force intelligence officer ever to earn a fourth star. Hayden didn’t resign from the Air Force, and that troubled some lawmakers. The new national intelligence office was supposed to be independent. In Washington that usually meant run by civilians. As if to ease concerns about militarization, the Bush administration picked a counterweight of sorts for the top slot.
John Negroponte, a career diplomat whose only experience with intelligence was reading it, seemed an unlikely pick. Negroponte had been the U.S. ambassador to Iraq and had previously represented the country at the United Nations and in Honduras. The Central American post, which he held in the first term of the Reagan administration, had endeared him to John Poindexter, who thought that Negroponte was one of the few people in the State Department who supported the president’s goals in Nicaragua. Negroponte carried his own stains from the Iran-Contra affair, but they were insufficient to derail his nomination to the top intelligence job.
Though a capable diplomat, Negroponte was out of his depth in his new role. He quickly developed a habit of leaving the office early. It was customary for a deputy to take over day-to-day management of any large organization. But Hayden was no ordinary backup man. He brought an incomparable résumé and deep institutional knowledge. Negroponte might have been at the top on paper, but there was no mistaking who was really in charge.
Hayden had a new job. Townsend a promotion. Poindexter had begun his third act. The spring of 2004 kicked off a game of musical chairs that lasted well into the next year. This was a predictable and time-honored Washington custom, particularly heading into a second presidential term. But 2005 would be a year of surprises. The first one came in June, when news broke of an intelligence program that few had ever heard of and that those in the know presumed was dead and buried.
Able Danger had surfaced.
Congressman Curt Weldon had been hearing things. There was a lot more going on at the Information Dominance Center than even he, its biggest supporter in Congress, had known.
The first hints came in May. Weldon had a meeting with an Army reserve officer working at the Defense Intelligence Agency named Tony Shaffer. He’d been asked to meet with the congressman to help drum up funds for a new data analysis program the Navy wanted to launch. But once Weldon had Shaffer in his office on the Hill, the conversation turned to Able Danger.
Weldon had heard about the program a few days earlier, from Scott Philpot, the Navy officer who had first brought Erik Kleinsmith into the secret operation and tapped his team of analysts to run intelligence on Al Qaeda. Weldon wanted to know what happened to all the work the team had done. He asked Shaffer to fill in some details, and Shaffer obliged, giving the congressman the briefing he’d presented two years earlier to staff members of the 9/11 Commission.