Copyright © 2013 by Ronald J. Deibert
Signal is an imprint of McClelland & Stewart,
a division of Random House of Canada Limited.
All rights reserved. The use of any part of this publication reproduced, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, or stored in a retrieval system, without the prior written consent of the publisher – or, in case of photocopying or other reprographic copying, a licence from the Canadian Copyright Licensing Agency – is an infringement of the copyright law.
Deibert, Ronald J., 1964–
Black code : inside the battle for cyberspace / Ron Deibert.
eISBN: 978-0-7710-2534-1
1. Internet—Political aspects. 2. Cyberspace.
3. Internet—Social aspects. 4. State, The. I. Title.
HM851.D44 2013 303.48′33 C2012-904051-7
McClelland & Stewart,
a division of Random House of Canada Limited
One Toronto Street
Toronto, Ontario
M5C 2V6
www.mcclelland.com
v3.1
For Joan
CONTENTS
Cover
Title Page
Copyright
Dedication
Preface
Introduction
Cyberspace: Free, Restricted, Unavoidable
1.
Chasing Shadows
2.
Filters and Chokepoints
3.
Big Data: They Reap What We Sow
4.
The China Syndrome
5.
The Next Billion Digital Natives
6.
We the People of … Facebook
7.
Policing Cyberspace: Is There an “Other Request” on the Line?
8.
Meet Koobface: A Cyber Crime Snapshot
9.
Digitally Armed and Dangerous
10.
Fanning the Flames of Cyber Warfare
11.
Stuxnet and the Argument for Clean War
12.
The Internet Is Officially Dead
13.
A Zero Day No More
14.
Anonymous: Expect Us
15.
Towards Distributed Security and Stewardship in Cyberspace
Not an Epilogue
Notes
Acknowledgements
PREFACE
It always takes long to come to what you have to say, you have to sweep this stretch of land up around your feet and point to the signs, pleat whole histories with pins in your mouth and guess at the fall of words.
—Dionne Brand, “Land to Light On”
May 24, 2012. Calgary, Alberta. I am at a cyber security conference with the disarming title “Nobody Knows Anything.” In attendance are academics, private sector representatives, and senior government officials. Surely these people know something, I think to myself. Perhaps not. All Canadians have heard of the Royal Canadian Mounted Police (RCMP), and most of the Canadian Security Intelligence Service (CSIS), but stop a random sample on, say, Yonge Street in Toronto, and ask if they’ve ever heard of the Communications Security Establishment Canada (CSEC) and most will shrug. This is because CSEC, Canada’s version of the U.S. National Security Agency (NSA), is the most secretive intelligence agency in the country. Nobody Knows Anything, I think. How convenient.
I am on a panel with John Adams, the recently retired chief of CSEC and once Canada’s top spy, and Harvey Rishikof, an American lawyer, and now a professor at the National Defense University in Washington, D.C. Rishikof has had a distinguished career in national security, and at various times was the senior policy advisor and legal counsel to the FBI and the Office of the National Counterintelligence Executive (NCIX) at the Directorate of National Intelligence (DNI). I feel lost.
When my turn to speak comes around I joke about the title of the event. I explain that I was a little confused by it at first, but upon reflection and after looking at the roster of spooks, ex-spooks, and wannabe-spooks in attendance, it suddenly all made sense. Nobody Knows Anything. “Of course,” I say, “this is all about plausible deniability!” I had forgotten the first rule of public speaking: Know your audience and tell them what they want to hear. This was not going to go well, I thought to myself – and it didn’t.
When his turn comes, Rishikof brings up the Citizen Lab’s Tracking GhostNet report in positive terms, but then demurs. “We [U.S. national intelligence agencies] would not have been able to do what Deibert and his group did with the GhostNet investigation,” he says. “Trespassing and violating computers in foreign jurisdictions …”
Trespassing? Violating computers in foreign jurisdictions!
Here we go again, I say to myself, and in rebuttal, attempt to dispel misconceptions. I insist that the Citizen Lab did not trespass or violate anything, and certainly not “computers in foreign jurisdictions.” We simply browsed computers already connected to the public Internet, and did not force our way into them. Rather, the computers were configured (by their owners) in such a way that their contents were openly displayed to us (and to anyone else who made the effort). Sure, the attackers may have erred by serving up content that they didn’t want others to see, but the bottom line was that they offered up information to anyone who connected to those computers. We just knew where to look. If this is trespassing then so is just about everything that happens online.
As the panel ends, we pack up our material and exchange pleasantries. Adams walks over to me and says in a grave tone: “You know, Ron, there were some people in government who argued that you should be arrested.” Grinning broadly, he laughs. “And I agreed with them!”
Over the last decade, there have been many times like this when I have wondered, as the Talking Heads put it, “How did I get here?”
• • •
They were heady days. It was spring 2001, and I had just received authorization to set up the Citizen Lab at the University of Toronto. The initial funding came from the Ford Foundation, and the idea was simple: to study and explore cyberspace (though few called it that back then) in the context of international security. The dot-com era was in full swing, the Internet and “information superhighway” spreading like a brushfire, timeworn political divisions – the Cold War, South African apartheid, and so on – relegated to history books, and generally people were in a good mood, a very good mood. At the dawn of the twenty-first century it was hard not to be an optimist.
9/11 ripped into all of that and left us all reeling; for the next year or so most of us wondered what kind of world we now lived in. In January 2003, I published an article in Millennium, a journal published by the London School of Economics, arguing that this singular event had reshuffled the deck around issues relating to cyberspace, and that trouble was brewing. Rightly or wrongly, those planes smashing into New York’s World Trade Center, the Pentagon, and a field in Pennsylvania were viewed as a failure of cyber intelligence, of authorities not monitoring Internet communications and activities closely enough. At the same time, the prevailing view for most of those connected was that the Internet could not be controlled by governments: “The Net interprets censorship as damage and routes around it,” as John Gilmore, founder of the Electronic Frontier Foundation, once famously quipped. I was not so sanguine. That article has been haunting me for years; it only touched the surface, and has struck me ever since as unfinished business. It was called “Black Code.”
National security apparatuses have deeply entrenched, subterranean roots whose spread is difficult to curtail, let alone reverse. When there is human agency involved – while the Internet often seems to be operating in an ethereal realm, it has proven itself human, perhaps all-too-human – those responsible for security rarely agree that something is outside their control. Instead, they ramp up. Some governments in the 1990s were already erecting borders in cyberspace, long before 9/11 shifted the terrain around state surveillance and gave it added impetus. Anti-terrorism laws unthinkable on September 10, 2001, were proclaimed with little public debate across the industrialized world, and the United States in particular (but certainly not alone) began quietly building offensive cyber attack capabilities. The enemy was terrorism, an abstract noun, but al-Qaeda was a real and immediate foe. I wrote in a Globe and Mail op-ed on January 1, 2003: “Government armed forces from around the world have devoted increasing time, money, and energy to develop offensive cyber-warfare capabilities, including the capacity to engage in state-sponsored denial-of-service attacks, and the use of Trojan horses, viruses and worms.” I wish I had been more vociferous.
The anti-terrorism laws proclaimed after 9/11 were, in the main, defensive in nature and many had sunset clauses attached because they were considered extraordinary, extrajudicial measures in a time of existential crisis. However, especially vis-à-vis cyberspace, this defensive posture quickly morphed into developing offensive capabilities. With rare exception, those laws are still with us and have been enhanced. In this domain the only thing the sun appears ready to set on is the right to communicate and share information privately.
• • •
Founded in the spring of 2001, just prior to the incendiary events later that year, the Citizen Lab’s mission was to combine technical interrogation, field research, and social science to lift the lid on the Internet. It remains so to this day. We aim to document and expose the exercise of power hidden from the average Internet user, and we do so basically by using the same practices as state intelligence agencies – by combining technical intelligence and field investigations with open-source information gathering. Our intent is to “watch the watchers” and to deliver our findings to the public, to constantly probe the degree to which cyberspace remains an open and secure commons for all. Situated at the University of Toronto, the Citizen Lab has the protection, resources, and credibility it needs to do what it must do, and from this base we have built international partnerships with researchers and universities around the world. We have eyes in many places and have become a digital early warning system, peering into the depths of cyberspace and scanning the horizon. What we have seen and continue to see is disturbing.
• • •
Another word, a few words actually, about the title.
In 1999, Stanford University’s Lawrence Lessig published a book called Code and Other Laws of Cyberspace. Its central thesis is that the instructions encoded in the software that effectively run the Internet shape and constrain what is communicated just as laws and regulations do. Although Lessig did not emphasize it, that thesis is part of a larger tradition of theorizing about communications technology associated most prominently with Canadian academics Marshall McLuhan (“the medium is the message”) and Harold Innis (“the bias of communications”). According to this tradition, communications technologies are rarely neutral and their material properties – the wires, cables, machines themselves, and so forth – have direct societal impacts. Think about this for a moment. To what degree does the machine in front of you, that you log onto and operate daily, now determine your behaviour, what you do and don’t do? In many ways changes in modes of communication are like changes in ecological systems, with ideas, social forces, and institutions analogous to species. And when the ecology of communications changes, some species flourish and thrive, others wither and die.
Although Lessig uses the term “code” in a literal sense to refer to actual software, in this book I use it more metaphorically, to refer to the infrastructure of cyberspace, from the invisible spectrum of electromagnetic waves to the vast amounts of plastic, metal, and copper that now surround us, to the trillions of lines of spaghetti-like instructions – the actual codes – that keep it all functioning. Like Lessig, I believe that cyberspace is not an empty vessel or neutral channel. How it is structured matters for identity, human rights, security, and governance … and we need to tend to it to preserve it as a secure and open commons.
The word “black” conjures up that which is hidden, obscured from the view of the average Internet user. Never before have we been surrounded by so much technology upon which we depend, and never before have we also known so little about how that technology actually works. I am not talking about programming a VCR, or lifting the hood of your car in the faint hope that you can fix the engine, or trying to brew a cup of coffee from a digitally operated espresso machine. I am talking about an intimate and ongoing understanding of what’s going on beneath the surface of the systems upon which we have become so reliant in order to communicate and remain informed.
The science fiction writer Arthur C. Clarke argued that “any sufficiently advanced technology is indistinguishable from magic,” and as cyberspace grows more and more complex it becomes, for most people, a mysterious unknown that just “works,” something we simply take for granted. It is not only that we know less and less about the technical systems upon which we depend; the problem is deeper than that. We are actively discouraged, by law and the companies involved, from developing a curiosity about and knowledge of the inner workings of cyberspace. The extraordinary applications that we now use to communicate may feel like tools of liberation, but the devil is in the details, in the lengthy licence agreements that restrict how they can be used. Exploring that technology is strictly policed, and sometimes carries with it warranty violations, fines, even incarceration. The spread of black-code-by-design is a recipe for the abuse of power and authority, and thus protecting rights and freedoms in cyberspace requires a reversal of that taboo, a spotlighting of that which is hidden beneath the surface.
“Black” also refers to the criminal forces that are increasingly insinuating themselves into cyberspace, gradually subverting it from the inside out. The Internet’s original designers built a system of interconnection based on trust, and as beautiful as that original conception was, how it might be abused was never predicted, could not be predicted. One of the first Internet applications, email, was almost instantly hijacked by the persistent nuisance of spam. Each subsequent application has followed suit, and with the almost wholesale penetration of the Internet into homes, offices, governments, hospitals, and energy systems, the stakes are much higher, the consequences of those malignant forces much more serious. Those who take advantage of the Internet’s vulnerabilities today are not just juvenile pranksters or frat house brats; they are organized criminal groups, armed militants, and nation states. Add to this mix the demographic shift that is occurring and the picture gets more frightening. Most of the world’s future Internet population will live in fragile, and in many cases corrupt, states.
And then there are the secretive worlds of national defence and intelligence agencies, as in “black ops,” “black budgets,” going “deep black” – worlds that have now become major players in cyberspace security and governance. The collection of three-letter agencies born alongside World War II (CIA, FBI, NSA, KGB, etc.) that became global behemoths during the Cold War may have seemed to be on the edge of extinction in the 1990s, but the combination of “big data” (the massive explosion of digital information in all of its forms), security threats, and the spectre of terrorism has created a power vortex into which these agencies, with their unique information-gathering capabilities, have stepped.
At the very moment when we are surrounded with so much access to information and apparent transparency, we are delegating responsibility for the security and governance of cyberspace to some of the world’s most secretive agencies. And just as we are entrusting so much information to third parties, we are also relaxing legal protections that restrict security agencies from accessing our private data, from investigating us. The title Black Code refers to the growing influence of national security agencies, and the expanding network of contractors and companies with whom they work.
• • •
The Internet began with the spirit of “hope springs eternal.” Today, sadly, we live in a time of cyber phobia. Cyber espionage and warfare, the growing menaces of cyber crime and data breaches, and the rise of new social movements like WikiLeaks and Anonymous have vaulted cyber security to the top of the international political agenda, at untold cost. Almost every day a new headline screams about a serious problem in cyberspace that demands immediate attention. There is a palpable urgency to act – to do something, anything.
As ominous as the dark side of cyberspace may be, our collective reaction may become the darkest driving force of all. Fear is becoming the dominant factor behind a movement to shape, control, and possibly subvert cyberspace, and “What begins in fear usually ends in folly,” as English poet Samuel Taylor Coleridge put it.
We stand at a precipice where the great leap in human communication and ingenuity that gave us global cyberspace could continue to bind us together or deteriorate into something malign. Only by fully uncovering the battle for the future of cyberspace can we understand what’s at stake, and take steps to ensure that this degradation of one of humanity’s greatest innovations does not happen.
(An interesting sidebar to this discussion … “Mainstream media” are often criticized for only following horse races – elections, scandals, and so on – and for giving scant treatment to deep, difficult issues. Regarding cyberspace governance and security, I have actually found that mainstream outlets like the New York Times, Bloomberg News, Wall Street Journal, and others, have done, all things being equal, solid reporting and have been receptive to Citizen Lab investigations and reports. Even though the conceit in much of cyberspace is that media “organs of the establishment” are beholden to special interests and their advertisers, I have not found this to necessarily be the case. The more important matter is that if these issues are out there, reported on in the mainstream press, why are so few people paying attention?)