Deep State


by Marc Ambinder


  It turns out that the NSA has some pretty nifty tools to use in terms of protecting cyberspace. In theory, it could probe devices at critical Internet hubs and inspect the patterns of data packets coming into the United States for signs of coordinated attacks. It took the government a very long time to declassify another important cyber document: the Comprehensive National Cybersecurity Initiative (CNCI), which is a road map for policy. It describes in general terms how the government plans to spend $40 billion to secure the Internet.2 The main protection policy, informally known as Einstein 3, addresses the threats to government data that run through private computer networks. In declassifying the CNCI, the government admitted that the NSA would perform deep packet inspection on private networks.∗ Basically, the NSA provides the Department of Homeland Security (DHS) with the equipment and personnel to do the packet inspection; the DHS (using NSA personnel) analyzes the patterns, sanitizes the data, and sends the information back to Fort Meade, where the NSA can figure out how to respond to the threats discovered.3 This cyber shield is not (and, by law, cannot be) applied to regular Internet traffic.
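At its core, the "deep packet inspection" described above means scanning packet payloads against a distributed set of threat signatures. A minimal sketch of that idea in Python follows; the signatures and packets are invented for illustration, and a real system like Einstein 3 would use classified signature sets running at line speed on dedicated hardware:

```python
# Illustrative sketch of signature-based packet inspection.
# The signatures and packets below are hypothetical examples,
# not drawn from any real threat feed.

THREAT_SIGNATURES = {
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
    b"cmd.exe /c": "remote command execution attempt",
}

def inspect_packet(payload: bytes) -> list[str]:
    """Return the names of any threat signatures found in the payload."""
    return [name for sig, name in THREAT_SIGNATURES.items() if sig in payload]

# Screen a stream of (hypothetical) packets and flag matches.
packets = [
    b"GET /index.html HTTP/1.1",
    b"AAAA\x90\x90\x90\x90\x31\xc0BBBB",
]
for p in packets:
    hits = inspect_packet(p)
    if hits:
        print("ALERT:", hits)
```

In this division of labor, an agency like the NSA would distribute and update the signature set, while the operator at the network hub runs the matching and passes back sanitized results.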

  The NSA has gathered a significant amount of intelligence on the ways sophisticated cyber actors—usually nation-states and, more often than not, China—have written their code. Sometimes the NSA is able, through its SIGINT collection, to get advance notice of a major attack on a major company. It has very recently begun sharing this information with the FBI, which in turn shares it (or a sanitized form of it) with the companies that might be affected. But it is NSA policy to keep its information private. They’re an intelligence agency. They gather information in secret and use it to outfox the enemy. If the NSA were to share with the public what it knows about China’s cyber capabilities, for example, then China would know what the NSA knows and would adjust its tactics accordingly, thus potentially rendering the Defense Department’s Internet space more vulnerable. That’s the argument, anyway.

  The logical flaw is immediately apparent: the NSA apparently assumes that China won’t notice that its cyber attacks are ineffective. Either the NSA has creatively spoofed China (by “allowing” it into a system and feeding it false data), or China will simply assume that the United States has randomly varied its defenses. The NSA, in other words, assumes a static enemy. It also completely ignores the real problems—the vulnerability of critical infrastructure in private hands; the vulnerabilities of banks; the holes in major companies—each susceptible to government-sanctioned (or government-sponsored) cyber intrusions.

  It’s undeniable that Congress and the public probably wouldn’t be comfortable knowing that the NSA has its hardware at the gateways to the Internet. And yet there may be no other workable way to detect and defeat major attacks. Thanks to powerful technology lobbies, Congress is debating a bill that would give the private sector the tools to defend itself, and it has been slowly peeling back the degree of necessary government intervention. As it stands, the DHS lacks the resources to secure the dot-com top-level domain even if it wanted to. It competes for engineering minds with the NSA and with private industry; the former has more cachet and the latter has better pay.

  Some private-sector companies are good corporate citizens and spend money and time to secure their networks. But many don’t. It’s costly, both in terms of buying the protection systems necessary to make sure critical systems don’t fail and also in terms of the interaction between the average employee and the software. Security and efficiency diverge, at least in the short run.

  If the NSA were simply to share with the private sector en masse the signatures its intelligence collection obtains about potential cyber attacks, cyber security could measurably improve in the near term. But outside the space of companies who regularly do business with the intelligence community and the military, few companies have people with the clearances required by the NSA to distribute threat information. Also, because the NSA’s reputation has been tarnished by its participation in warrantless surveillance, and because telecoms are wary of cooperating with the NSA beyond the scope of the law, companies are afraid to even admit that they’ve asked the agency for technical advice. As a senior executive at Google admitted to us, “People don’t really trust the NSA, and it will raise suspicions that we’re letting them look at their search data, and other things. It’s not in our interest.” And though Google’s cooperation with the NSA is well known in national security circles, “Our average customer does not know it and there is no reason for us to disclose how we secure our assets.”

  In 2011, the government disclosed that it had extended, on a “voluntary” basis, cyber intrusion protections to the Defense Industrial Base (DIB)—the collective name for those companies that regularly do business with the Department of Defense. Reasoning that it would be much easier to monitor threats from the enterprise level, the program would set up equipment at Internet service provider (ISP) hubs run by Verizon and other telecoms; packets coming into any of fifteen DIB companies would be screened by data sets distributed and updated by the NSA. The NSA itself would not perform the screening, although it is possible that NSA employees might dip into the private sector for short periods of time to help. It was an auspicious decision: the reaction from the privacy community was rather muted and even complimentary. If the NSA was going to partner with industry to protect cyber infrastructure, disclosure was a good first step.4 “Because of its important partnership with industry, and given that defense contractors have already been targeted for cyber intrusion on their unclassified systems, DOD is concerned about the security of DIB networks,” said Lieutenant Colonel René White, a Pentagon spokesperson. “Therefore, DOD has asked NSA to evaluate under what conditions it might be possible for the government to work with the DIB to better protect national security information and interests in the DIB systems.”

  White stressed that the cooperation was “purely voluntary.” That’s true—but the Defense Department is also writing new contracting rules that would require companies with sensitive contracts to secure their Internet space using pretty much the same technology that the DIB pilot uses. One reason the government is so sensitive about the DIB pilot is that there is a sensitive program attached to it. One way to prevent attacks is through a concept known within the government as “active defense.” The NSA could use its platforms at the ISPs to prod and poke and ping places on the Internet where intelligence points to the threat of an original cyber attack. Such poking might lead those bad actors to respond in a way that reveals a pattern, allowing the United States to figure out the precise origin of the attack (called “attribution”) or even to design creative ways to let the “attack” happen while not doing any damage. The NSA would scrutinize the attack in real time to learn how it works. There are legal limits to what the NSA can do, and within the telecom companies themselves there are diverging opinions about how much cooperation is acceptable. The legal teams are extremely wary of potential liability, but the government affairs teams, noting that the government has deemed the ISPs to be passive providers of a service, tend to encourage more direct cooperation. Where the balance is drawn depends on the companies involved.

  As of this writing, there is still no single protocol or common procedure for letting companies, big or small, know about potential cyber threats. In 2010, the NASDAQ market was attacked, and it took the government several months to provide financial companies with prophylactic information about the penetration. There is no standard way for an employee at a financial, electrical, telecom, or cyber firm to obtain a security clearance. The government and industry are aware of this virtual air gap in security, and they’ve drawn circles around the problem for years without coming to a solution.5

  Credit where credit is due: several officials in the Bush and Obama administrations have pushed for more transparency about cyber policy issues, and, in fits and starts, Obama’s national security team has managed some accomplishments in this area, all in the way of providing the public with a better grounding in what the actual threat is. In the summer of 2011, Howard Schmidt’s office at the National Security Council released a long outline of cyber policy legislation that would be acceptable to the White House—something that had never been done before. William Lynn, the former deputy secretary of defense, became the ad hoc advocate for a shared sensibility inside Washington, even writing in the city’s house journal of international relations, Foreign Affairs, about the Pentagon’s vulnerability. The DHS began inviting journalists to its formal cyber-security response exercises.

  These are encouraging signs, but the government needs to do more. In any event, the cyber-industrial complex is happy to talk about the issue. They want the business, after all. Shortly after he left government to join Booz Allen Hamilton, McConnell was on 60 Minutes, telling Steve Kroft, “Can you imagine what your life would be like without electrical power?”6 In February 2010, when CNN broadcast a cyber war game exercise sponsored by the Bipartisan Policy Center (and featuring several former senior government officials who worked for private companies with lucrative cyber contracts), the White House was not terribly thrilled with the hyperbolic and theatrical treatment that the “formers” (as folks who leave government are known) gave the scenario, which involved a mass attack against cell phones.

  This is not a debate the government would be wise to cede to industry. But unfortunately, the government hasn’t gotten its act together. Even basic questions, like who is responsible for attacks against the United States, are unresolved. In theory, U.S. Cyber Command (stood up in 2009 after the DOD fell victim to a series of system-wide cyber penetrations by China in 2007) provides the resources, consolidating the various offensive cyber capabilities of the Air Force, the Army, and the Navy. In practice, aside from weekly phone calls, the services still pretty much do their own thing. Cyber Command is developing a doctrine and policy, and practices attacking things quite often, but whenever anything needs to be done, the NSA, whose director is also the commanding general of Cyber Command, does the dirty work. Under the new system, it asks Cyber Command to write a “check” to authorize either cyber exploitation or an offensive cyber attack. Lest you think the NSA is regularly bombarding China with cyber penetrations, it’s not. Most U.S.-generated cyber attacks are aimed at very specific targets within recognized battlefields, like Iraq and Afghanistan, and occasionally in countries where the CIA is conducting covert operations. (For example, the electricity was turned off in Abbottabad on the night of the raid that killed Osama bin Laden; either the CIA figured out how to temporarily cut the power from the ground or the NSA had long ago penetrated Pakistan’s electrical grid.)

  James Lewis, a longtime government consultant on cyber issues, is not especially given to hyperbole. He is an academic, not a consultant. But he is worried. “We’re politically inept. It’s like the Churchill quote: America always does the right thing after it’s exhausted all other options. That’s where we are,” he says. “There is strong resistance from the business community for better cyber security. Some of that I don’t understand. Some of it is pretty clear. They don’t want additional costs, they don’t want additional regulations. I understand that. National security is not something you can hand to the market or private sector and expect to have it work. But that’s what we’ve been trying now for about fifteen years. So we’ve had ideological and political constraints that are slowly beginning to shift the equation in ways that favor our opponents.”

  What he means is that the Russians and the Chinese aren’t going to do something crazy. First, they make (and save, through data theft) so much money off cyber espionage and cyber crime that they don’t want to kill the golden goose. China, in particular, needs the U.S. economy to function so it can prosper and get its debts paid back. And second, they know that if they cross the line, Americans—well, we are a little bit crazy and may shoot a missile at them. Right now, our political system is willing to tolerate a significant amount of cyber espionage and the loss of billions of dollars per year. “It’s like the mob in New Jersey,” says Lewis of cyber invaders. “They’re not going to close a business down; they’re going to be parasites and suck money out of them.”

  A miscalculation could be costly, but the rules are unclear and secret. The possibilities for mistakes due to confused lines of authority are nontrivial. The U.S. electrical grid is uniquely vulnerable to cyber attack: its control systems are plugged in to the Internet, and the United States has successfully managed to shut down supposedly highly protected, air-gapped electrical control systems in tests at the Idaho National Laboratory.∗ As former DNI McConnell has admitted, the grid is probed regularly by the Chinese government, which maps its vulnerabilities.

  Suppose that during the course of one of these probes China trips over a cord somewhere and unplugs something. Boom: the United States is attacked; China has disabled part of the electrical grid. Technically, yes—but also not really. They were trying to spy. In the very unlikely event that the United States were to go to war with China, we would want to disable their electrical system and no doubt have used other intelligence means to figure out how to do so. What China is doing is not easily distinguishable from what a human source in Beijing might be doing for the United States.

  What is needed first is a common vocabulary for cyber security, along with an accurate sense of where the threat comes from and where it does not. We might want to start by reserving “attack” for really serious cases where critical infrastructure is endangered by a deliberate action. “Hack” can serve for the rest of what we read about. There are major hacks and there are nuisance hacks; most hacks are nuisance hacks. Because there is no requirement to report being hacked (aside from state data breach laws), hacks encompass everything from malicious infiltrations of British banks that siphon away tiny fractions of pence to the political chicanery of Anonymous and LulzSec. It would be reasonable to require MasterCard to disclose when a hack compromises the way it exchanges data with other companies; it would not be reasonable to require it to disclose a denial-of-service attack against its public website. Congress, however, doesn’t want to do any of this, because it would violate a sacred rule of tech legislation: a law should never betray a bias for or against a particular type of technology, and should always be as open-ended as possible so as not to prevent the development of better technology to address whatever the law is intended to regulate. This sounds sensible. But twinned with the lack of required disclosure, it provides an incentive for technology that is cheap rather than technology that is effective. Congress won’t tell power companies how to protect their grids and doesn’t require them to disclose when they’ve been attacked. It might want to do one or the other, or both. Tech neutrality turns into tech indifference, which makes everyone more vulnerable.7

  Incidentally, the government could simply decide to report to the public when a company that handles a lot of data or protects something critical falls victim to a major attack. This wouldn’t require any change to the law—only a change in attitude. In theory, companies could try to hide breaches from their regulators, but in practice it would be very difficult to do. The easiest short-term solution—one that might create incentives for industry to spend more money to protect the stuff we care about—would be to speak more openly. The problem also arises in thinking about the future architecture of the Internet.

  On the other hand, it is very hard for the intelligence community to intercept mobile communications over packet-switched networks. (Reportedly, the NSA cannot penetrate VoIP [voice-over Internet protocol] encryption, although a senior intelligence official says that it can, with great effort, though it usually does not.) This type of communication is very secure. As Susan Landau, a former engineer for Sun Microsystems, has written, “The ability of the government to wiretap under legal authorization is an important tool for national security, but the ability of the government to wiretap under legal authorization is quite different than the government requiring that the network be architected to accommodate legally authorized wiretaps.”8

  Tech neutrality has another good argument going for it: by the time the government catches up with a technology that the law proscribes, the technology has moved someplace else. Indeed, the government’s Einstein 2 solution imposed on the dot-gov domain by the DHS is about five years out of date, according to officials there. The speed with which the country’s enemies adapt to technology is remarkable. Where it took al-Qaeda ten years after its founding to launch its first attack, it took al-Qaeda’s loosely linked affiliate in the Arabian Peninsula less than a year from its founding to the near assassination of the internal security minister of Saudi Arabia with a highly sophisticated rectum bomb, and shortly thereafter, the near destruction of an airplane over Detroit on Christmas Day.9 The threats of tomorrow are being engineered in academic laboratories today. If the technology has the potential to be transformational and disruptive, does the government have the right to keep it secret?

  Quantum computing is a variable in the cyber-security equation. According to Tony Tether, former director of the Defense Advanced Research Projects Agency (DARPA), quantum computing in the wrong hands poses a threat comparable to advanced biological weapons.

  The physics of quantum computing is quite elegant, which is why scientists are aware of its potential, but also terribly complicated, which is why no one has figured out how to build a workable machine. A quantum computer takes advantage of the weirdness of the quantum world, notably a kind of massive parallelism: a single quantum bit of information, or qubit (which might be encoded in a photon), can hold two pieces of data at once. Quantum particles can be, like Schrödinger’s cat, in two states simultaneously. The more qubits a quantum computer has, the more operations it can perform at once.
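The superposition described above can be written down compactly. As a sketch in standard quantum-computing notation (not drawn from the book): a qubit’s state is a weighted combination of two basis states, and an n-qubit register needs 2^n amplitudes to describe, which is why its capacity grows exponentially with each added qubit.

```latex
% A single qubit is a superposition of the basis states |0> and |1>:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% An n-qubit register is described by 2^n complex amplitudes c_x,
% one per classical bit string x -- the source of the exponential capacity:
\[
  \lvert \Psi \rangle = \sum_{x=0}^{2^{n}-1} c_x \lvert x \rangle,
  \qquad \sum_{x} \lvert c_x \rvert^{2} = 1
\]
```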

 
