Habeas Data


by Cyrus Farivar


  While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

  Seventy-two hours later, the government made good on its promise to force Apple into court, and filed its motion to compel. Among other rebuttals, the government hit back against one of Apple’s clearest arguments: that what the government was asking was unprecedented.

  “The use of the All Writs Act to facilitate a warrant is therefore not unprecedented; Apple itself has recognized it for years,” prosecutors wrote. This, however, was a rather narrow reading of Apple’s claim. Apple was not arguing that use of the All Writs Act itself was new, but that invoking the law as authority to force the company to write entirely new software, software it had never written before, was without precedent.

  The judge allowed Apple an extension to file its reply on February 26. The government was then given a deadline of March 10 to reply, with oral arguments scheduled before Judge Pym in Riverside on March 22. The pace was extraordinarily fast.

  * * *

  Antecedents of the All Writs Act date back to English common law, and before that, to Roman law. In essence, the concept is to empower judges to order that something be done, even if the legislative body (here, Congress) hasn’t officially said that it should be so.

  The entire text of the law, in its current incarnation, is rather short: “(a) The Supreme Court and all courts established by Act of Congress may issue all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law. (b) An alternative writ or rule nisi may be issued by a justice or judge of a court which has jurisdiction.”

  The two-sentence law seems reasonable enough on its face. “AWA [All Writs Act] injunctions are rarely issued and are subject to judicial discretion,” wrote Dimitri Portnoi in 2007, back when he was a law student; he is now a Los Angeles lawyer. In 1995, the act was used to compel a handwriting sample. In 2005, it was invoked as authority to halt frivolous litigation. In 2012, it was used by a federal judge in Colorado to order a woman accused of bank fraud to decrypt her laptop.

  But, as lawyers note, the All Writs Act does not bestow upon the government power it would not otherwise have. Absent a law telling the government otherwise, investigators have pushed the limits and will continue to test how far they can go.

  As Jonathan Mayer, then a legal fellow at Stanford Law School (he now works for Senator Kamala Harris [D-California]), who holds both a doctorate in computer science and a law degree, said in an online video in November 2014: “With a warrant and writ, could the government require…pushing a backdoored software update to enable government access to a device? Or could the government require disclosing a vendor’s private key, so the government can push its own update? What about decrypting data held on a smartphone? The answer is maybe.”

  The most prominent, and most recent, Supreme Court precedent on the All Writs Act is New York Telephone, which was argued on October 3, 1977. The case involved the use of “pen registers,” mechanical monitoring devices, installed at the telephone company’s facilities, that record the numbers dialed in outgoing calls. Decades ago, companies routinely used them to monitor call quality and maintain proper billing records. Although the 1970s-era telecommunications system bears little resemblance to our early twenty-first-century all-digital system, the case law has remained valid.

  In March 1976, US District Judge Charles Tenney of the Southern District of New York ordered that the local telephone company (New York Telephone) help the FBI in an investigation of a local gambling ring.

  New York Telephone was required to provide “technical assistance necessary” in the form of a pen register. Specifically, the government needed to know what two numbers were called from 220 East 14th Street in Manhattan. The phone company challenged, lost, and then appealed to the 2nd US Circuit Court of Appeals, which ruled in its favor. The government appealed further, to the Supreme Court. Supporting the government in a 6–3 decision, the justices found that Title III—the provision of the landmark 1968 Safe Streets Act that codified some of the privacy gains of the Katz decision discussed in the last chapter—does not govern the use of pen registers; what the law bans is narrowly defined “interception,” and pen registers do not “ ‘intercept’ because they do not acquire the ‘contents’ of communications.” This distinction sounds confusing, but translates neatly—if regrettably—into our own time.

  In effect, the Supreme Court affirmed one standard for “content” and another for “non-content.” Content, such as what a wiretap captures, requires a warrant, if not more. Non-content, such as dialing information—today, we call this metadata—doesn’t require a showing of probable cause. In other words, pen registers are allowed with fewer restrictions than a warrant would require.

  “These devices do not hear sound,” Justice Byron White wrote in the majority opinion. “They disclose only the telephone numbers that have been dialed—a means of establishing communication. Neither the purport of any communication between the caller and the recipient of the call, their identities, nor whether the call was even completed is disclosed by pen registers.”

  However, he noted: “We agree that the power of federal courts to impose duties upon third parties is not without limits; unreasonable burdens may not be imposed. We conclude, however, that the order issued here against respondent was clearly authorized by the All Writs Act and was consistent with the intent of Congress.” (The Pen Register Act of 1986, however, eliminated the government’s need to use the All Writs Act to deploy the mechanical device.)

  The liberal wing of the court, led by Justice John Paul Stevens, disagreed. In his dissent, Stevens noted that Congress had granted no such power regarding pen registers.

  “The Court’s decision may be motivated by a belief that Congress would, if the question were presented to it, authorize both the pen register order and the order directed to the Telephone Company,” he wrote. “But the history and consistent interpretation of the federal court’s power to issue search warrants conclusively show that, in these areas, the Court’s rush to achieve a logical result must await congressional deliberation. From the beginning of our Nation’s history, we have sought to prevent the accretion of arbitrary police powers in the federal courts; that accretion is no less dangerous and unprecedented because the first step appears to be only minimally intrusive.”

  The majority outlined a three-part test, which weighed the company’s “remove” (i.e., its distance) from the case, whether the government’s request imposed an “undue burden,” and whether the assistance was, indeed, “necessary.”

  In the 2016 San Bernardino case, Los Angeles federal prosecutors, led by Decker, argued that Apple was not removed, as it had sold the iPhone and written the software contained on it. Building an entirely new firmware was not overly burdensome, as creating software was something Apple already did as part of its normal business operations. And finally, Decker underscored, Apple’s assistance was entirely necessary.

  In this case, the ability to perform the search ordered by the warrant on the SUBJECT DEVICE is of particular importance. The user of the phone, Farook, is believed to have caused the mass murder of a large number of his coworkers and the shooting of many others, and to have built bombs and hoarded weapons for this purpose.

  But this case was far different from New York Telephone, as Apple had already sold the phone and could not control how it was used. And while it was true that Apple employed legions of software developers, creating new software for the purpose of defeating the iPhone’s security was not normally part of their jobs. Finally, was it really Apple’s place to be conscripted to work on the government’s behalf, even if the government was going to compensate the company at market rates?

  * * *

  This entire question of modern cryptography—and who has (or should have) access to such encryption—turns out to be decades old.

  In the 1970s and 1980s, military and academic researchers in the United Kingdom and the United States worked on how to make digital encryption easier to use. The crucial breakthrough was the Diffie-Hellman (D-H) key exchange (1976), which introduced the entire notion of public key cryptography: the idea that two parties could establish a shared secret, and thus keep data private, even while communicating entirely in public.

  Prior to this innovation, two parties could only secure messages by exchanging a key in advance. In other words, before you pass encrypted notes to a classmate, both people have to already know what the code is. (Or, to put it into high-stakes terms, the Nazi government distributed secret printed key lists that specified the daily settings for its Enigma encrypting machines, whose traffic for years confounded the Allies. This is known as the trusted courier model, where you entrust someone with delivering a physical key.) With public key cryptography, however, one element of the key pair (the public, as opposed to the private, key) could by definition be made public, so there was no need to trust a courier at all. With this new type of key exchange, the floodgates were open for a new form of digital security.
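
  To see the mechanics in miniature, consider the following sketch of the exchange in Python; this is an illustration, not historical code, and the numbers are deliberately tiny toys (real deployments use primes thousands of bits long):

  # A toy Diffie-Hellman exchange; illustrative names and toy parameters.
  import secrets

  p, g = 23, 5  # public parameters: a small prime and a generator

  # Each side picks a private exponent that never leaves its machine.
  alice_secret = secrets.randbelow(p - 2) + 1
  bob_secret = secrets.randbelow(p - 2) + 1

  # Each side publishes g^secret mod p; these values can travel in the open.
  alice_public = pow(g, alice_secret, p)
  bob_public = pow(g, bob_secret, p)

  # Combining one's own secret with the other's public value yields the same
  # number on both sides: a shared secret established entirely in public.
  assert pow(bob_public, alice_secret, p) == pow(alice_public, bob_secret, p)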

  Others built upon this idea, most notably with the RSA algorithm, which put the public key concept into practice and was commercialized in the late 1980s into a product called MailSafe. This application, which quickly drew competitors, made it possible to send secure messages quickly and easily. But with personal computers typically costing several thousand dollars, these tools were really only available to a relatively small portion of the population.

  Around that same time, Phil Zimmermann, a programmer and antinuclear activist in his early thirties based in Boulder, Colorado, was inspired to create his own public key encryption program. The release of his program, dubbed Pretty Good Privacy (PGP), was accelerated in January 1991, when an anti-terrorism bill authored by Senator Joe Biden (D-Delaware) included language requiring that the government be allowed “to obtain the plaintext contents of voice, data, and other communications when appropriately authorized by law.”

  When Zimmermann and other privacy activists first heard about the legislation, in April 1991, they kicked into high gear. Rather than wait for his work to be made illegal, Zimmermann took his just-finished PGP and gave a copy to Kelly Goen, a fellow crypto-enthusiast in the San Francisco Bay Area. Goen drove around the region with his own laptop, an acoustic coupler (a device that turns older telephones, including pay phones, into modems), and countless quarters, and managed to upload PGP onto various Usenet groups. In short, it was made freely available to anyone who wanted it.

  Meanwhile, starting in 1989, and unbeknownst to Zimmermann, the NSA was busy working on its own way to counter the slow rise of ever-easier, ever-cheaper encryption. The idea was something known as key escrow, which ran counter to the core promise of public key cryptography: the government would allow encrypted communications, but it would also hold a copy of the key. The concept was that the government would promise to use that key only when it had a judge’s permission.
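
  The escrow mechanism itself can be sketched in a few lines. The following illustration assumes the third-party Python cryptography package, and it stands in for the idea only, not for Clipper’s actual classified design; all names here are hypothetical:

  # A toy sketch of key escrow; hypothetical names, not Clipper's design.
  from cryptography.fernet import Fernet

  device_key = Fernet.generate_key()        # the key protecting the traffic
  escrow_agent_key = Fernet.generate_key()  # held by the government's agent

  # A copy of the device key is wrapped under the escrow agent's key and
  # deposited in escrow before any communication takes place.
  escrowed_copy = Fernet(escrow_agent_key).encrypt(device_key)

  ciphertext = Fernet(device_key).encrypt(b"an ordinary private message")

  # With a judge's permission, the agent unwraps the key and reads the traffic.
  recovered_key = Fernet(escrow_agent_key).decrypt(escrowed_copy)
  assert Fernet(recovered_key).decrypt(ciphertext) == b"an ordinary private message"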

  This notion ended up becoming what the NSA and the White House dubbed, on April 16, 1993, the Clipper chip.

  As the White House wrote in a press release:

  The chip is an important step in addressing the problem of encryption’s dual-edge sword: encryption helps to protect the privacy of individuals and industry, but it also can shield criminals and terrorists. We need the “Clipper Chip” and other approaches that can both provide law-abiding citizens with access to the encryption they need and prevent criminals from using it to hide their illegal activities.

  Use of the Clipper chip would not be required of device makers, but rather would be voluntary—AT&T, for example, agreed to ship devices with the chip already built in. If the government bought thousands of such devices, the thinking went, demand might push other companies to make competing devices with a standard built-in Clipper chip. Members of the Clinton White House regularly received briefings from people at the NSA, FBI, CIA, and DOJ, all weighing in on the question of digital privacy.

  Many of them—notably FBI Assistant Director James Kallstrom—made largely the same point: unless the government has access, crimes will go unsolved, children will be kidnapped, awful dangers will continue, and people will die. (In 1995, Kallstrom, an “electronic eavesdropping expert,” was named the head of the FBI’s New York office, its single largest field office. In 2016, after having left the Bureau, he became a very vocal supporter of presidential candidate Donald Trump.)

  Over 1993 and 1994, privacy and legal activists began to mount a campaign in what was eventually dubbed the “Crypto Wars.” On May 3, 1994, Whit Diffie—one of the two inventors of public key cryptography—testified before a Senate subcommittee against the Clipper chip system.

  “From the viewpoint of a user, any key escrow system diminishes security,” he said. “It puts potential for access to the user’s communications in the hands of an escrow agent [whose] intentions, policies, security capabilities, and future cannot be entirely known.”

  Not a month later, a young AT&T researcher named Matthew Blaze discovered one of the Clipper chip’s critical flaws, which would enable someone to circumvent the surveillance aspect—essentially defeating the chip’s entire purpose. By the summer, the White House had realized that it had lost the battle.

  But at the same time the Clipper chip was being pushed, the FBI began floating a new bill that, if enacted, would extend the 1968 wiretap law to newer digital phone lines. And in the end, the FBI’s efforts were more successful. In 1994, President Bill Clinton signed the Communications Assistance for Law Enforcement Act (CALEA), which mandated that phone companies build into their increasingly digital (rather than analog) telephone networks a method for police to conduct a wiretap. The law primarily targeted phone companies, not Internet providers (although that changed in 2003). Crucially, the law does not require that a telecommunications carrier decrypt a transmission unless the carrier itself provided the encryption and possesses the key needed to decrypt it.

  In late June 1996, when the Clipper chip was in its death throes, Zimmermann—the inventor and activist behind PGP encryption—testified before another Senate subcommittee.

  Advances in technology will not permit the maintenance of the status quo, as far as privacy is concerned. The status quo is unstable. If we do nothing, new technologies will give the government new automatic surveillance capabilities that Stalin could never have dreamed of. The only way to hold the line on privacy in the information age is strong cryptography. Cryptography strong enough to keep out major governments.

  The government has a track record that does not inspire confidence that they will never abuse our civil liberties. The FBI’s COINTELPRO program targeted groups that opposed government policies. They spied on the anti-war movement and the civil rights movement. They wiretapped Martin Luther King’s phone. Nixon had his enemies list. And then there was the Watergate mess.

  And now Congress and the Clinton administration seem intent on passing laws curtailing our civil liberties on the Internet. At no time in the past century has public distrust of the government been so broadly distributed across the political spectrum, as it is today.

  In the wake of New York Telephone and the 1990s-era expansion of government surveillance under CALEA (and the failed Clipper chip), there were inklings that the government wanted to use ever-expanding digital technologies to its own advantage.

  In 2000, the public caught wind of an FBI capability known as Carnivore—a custom-built packet sniffer designed to go after the “meat” of a surveillance target. The application allowed federal agents to essentially sit on the wire between a suspect and his or her Internet Service Provider, snapping up nearly all of the suspect’s unencrypted traffic, which could include everything from e-mails to chats to online shopping.
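
  Carnivore’s source code was never released, but the underlying technique—capturing raw frames off a network segment and inspecting them—can be sketched briefly. The following Python illustration (Linux-only, and requiring root privileges) is only a rough stand-in for what such a tool sees before any court-ordered filtering is applied:

  # A minimal raw-socket capture sketch; illustrative, not Carnivore's code.
  import socket
  import struct

  ETH_P_ALL = 0x0003  # ask the kernel for every protocol on the wire

  sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
  for _ in range(10):  # grab a handful of Ethernet frames
      frame, _ = sniffer.recvfrom(65535)
      _dst, _src, ethertype = struct.unpack("!6s6sH", frame[:14])
      if ethertype == 0x0800:  # IPv4
          src_ip = socket.inet_ntoa(frame[26:30])  # source address
          dst_ip = socket.inet_ntoa(frame[30:34])  # destination address
          print(f"{src_ip} -> {dst_ip}, {len(frame)} bytes")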

  The FBI official overseeing this effort was Marcus Thomas, the chief of the Cyber Technology Section.

  “This is an effort on the FBI’s part to keep pace with changes in technology—to maintain our ability,” he told the Washington Post in July 2000. “It’s not an increase in our authority; it doesn’t present a change of volume in what we do.”

  Years later, Thomas, who now works for a company called Subsentio, said that during his time at the Bureau, agents typically had neither the time nor the background legal knowledge to evaluate the precise legality of particular tools that they were given.

  “Everything is digging very deeply into crimes, identifying conspiracies, understanding criminal organizations,” he told me.

  They’re not really as concerned about how they go about it. They just don’t sit around and think about how to take a new technique and how to apply it. We’re really, really focused on that. When they need it, they call upon it and they don’t think about how much time was spent. They’re not paid to decide whether it works or not or whether it’s legal or not. Then the lawyers step in. They decide how do we justify how we use it.

  One notable feat of technical and legal creativity came in November 2001, when federal authorities in Nevada obtained a court order against ATX, a company that provides an in-car communication system in certain Mercedes-Benz models. (In 2017, a Forbes reporter dubbed this “cartapping.”) The order required ATX to remotely activate its Tele Aid system (similar to OnStar) in a customer’s car for a period of seven days. It complied. The government then went back to the judge and asked for a similar order (not a warrant), but this time for a period of 30 days. The company went to court to get the order quashed, but the judge denied it. Then, on January 10, 2002, ATX was served with a third order, again for 30 days, which it challenged in federal court.

 
