Black Code: Inside the Battle for Cyberspace

by Ronald J. Deibert


  One alternative approach towards security that meshes with the core values and decentralized architecture of an open and secure cyberspace, and that has a long pedigree in political philosophy, is the “distributed” approach. It has roots in liberal political orders reaching back to ancient Greece and the Roman republic, and the late-medieval, early-Renaissance trade-based systems exemplified by the Venetians, Dutch, and English. But the fullest expression of distributed security is to be found in the early United States of America and the writings of the political philosophers who inspired the nation’s founders, Montesquieu, Publius, and others. Although multi-faceted and complex, distributed security starts with building structures that rein in and tie down political power, both domestically and internationally, as a way to secure rights and freedoms. It puts forward what Johns Hopkins University Professor Daniel Deudney, author of Bounding Power, calls “negarchy” as a structural alternative to the twin evils of hierarchy and anarchy. In short, distributed security is the negation of unchecked and concentrated power, and, on the other side, recklessness and chaos.

  At the core of this model are three key principles: mixture, division, and restraint. Mixture refers to the intentional combination of multiple actors with governance roles and responsibilities in a shared space; division to a design principle wherein no single actor is able to control the space in question without the co-operation and consent of others. As an approach to global cyberspace security and governance, each of these can provide a more robust foundation than the empty euphemism of “multi-stakeholderism,” and principles upon which to counter growing calls for a single global governing body for cyberspace. Citizens, the private sector, and governments all have important roles to play in securing and governing cyberspace, but none to the exclusion or pre-eminence of the others.

  Civic networks need to be players in the governance forums where cyberspace rules of the road are implemented. This is not an easy task. There is no single forum of cyberspace governance; instead, governance is diffuse and distributed across multiple forums, meetings, and standard-setting bodies at local, national, regional, and global levels. The acceptance of civil society participation in these rule-making forums varies widely, and the very idea is alien to some. Governments and the private sector have more resources at their disposal than citizens to attend these meetings and influence their outcomes. Civic networks will need to collaborate to monitor all of these centres of governance, open the doors to participation in those venues that are now closed shops, and make sure that “multi-stakeholder participation” is not just something paid lip service to by politicians, but something meaningfully exercised as part of a deliberate architecture.

  The principle of restraint, however, is perhaps the most important and arguably the most threatened by overreaction. Securing cyberspace requires a reinforcement, rather than a relaxation, of restraint on power, including checks and balances on governments, law enforcement, intelligence agencies, and on the private sector. In an environment of big data, in which so much personal information is entrusted to third parties, oversight mechanisms on government agencies and involved corporations are essential.

  Principles of restraint – sometimes called “mutual restraint” – can also help inform international cyberspace governance discussions concerning confidence- and security-building measures among states. Danger in cyberspace is real, but to avoid overreaction, transparent checks and balances are required. Here, the link in the distributed security model between domestic and international processes is exceptionally clear. The more transparent the checks placed on concentrated power at the domestic level, the more adversaries abroad will have confidence in each other’s intentions.

  Distributed security also describes the most efficient and widely respected approach to security in computer science and engineering circles. It is important to remind ourselves that in spite of the threats, cyberspace runs well and largely without persistent disruption. On a technical level, this efficiency is founded on open and distributed networks of local engineers who share information as peers in a community of practice rooted in the university system (itself, a product of the liberal philosophy upon which distributed security rests). These folks need to be central during discussions about cyberspace governance.

  The Internet functions precisely because of the absence of centralized control, because of thousands of loosely coordinated monitoring mechanisms. While these decentralized mechanisms are not perfect and can occasionally fail, they form the basis of a coherent distributed security strategy. Bottom-up, “grassroots” solutions to the Internet’s security problems are consistent with principles of openness, avoid heavy-handedness, and provide checks and balances against the concentration of power. Part of a distributed security strategy should facilitate cooperation among largely scattered security networks, while making their actions more transparent and accountable. Rather than abolish this system for a more top-down approach, we should find ways to buttress and amplify it. Loosely structured but deeply entrenched networks of engineers, working on the basis of credible knowledge and reputation, whose mission and raison d’être is to focus on cyberspace and its secure functioning to the exclusion of all else, are essential to its longevity and security. We need to build out and provide space for those networks to thrive internationally rather than co-opt their talents for national security projects that create divisions and rivalry.

  Part of a distributed security strategy must also include a serious engagement with law enforcement. These agencies are often stigmatized as the Orwellian bogeymen of Internet freedom (and in places like Belarus, Uzbekistan, and Burma, they are), but the reality in the liberal democratic world is more complex. Many law enforcement agencies are overwhelmed with cyber crime, are understaffed and lack proper equipment and training, and have no incentives or structures to co-operate across borders. Instead of dealing with these shortcomings head on, politicians are opting for new “Patriot Act” powers that dilute civil liberties, place burdens on the private sector, and conjure up fears of a surveillance society. What law enforcement needs is not new powers, but new resources, capabilities, proper training, and equipment. Alongside those new resources, of course, the highest possible standards of judicial oversight and public accountability must be enforced.

  The same basic premise of oversight and accountability must extend also to the private sector. Civic networks like those that helped spawn the Arab Spring are inherently transnational and have a vital role to play in monitoring the globe-spanning corporations that own and operate cyberspace. Persistent public pressure, backed by credible evidence-based research and campaigns – like the Electronic Frontier Foundation’s privacy scorecard – is the best means to ensure the private sector complies with protection of privacy laws and human rights standards worldwide. Civic networks must also make the case that government pressures to police the Internet impose costly burdens on businesses and should be legislated only with the greatest reservations and proper oversight. The securitization of cyberspace may be inevitable, but what forms it takes is not.

  • • •

  If we are to continue to benefit from the common pooled resources that make cyberspace what it is – a planetary ecosystem in which no one central agency is in control – then all members of that ecosystem need to approach its maintenance in a deliberate and principled fashion. Here is where another tried and true approach might have broad utility for cyberspace: stewardship.

  Cyberspace is less a pure public commons and more a mixed-pooled resource, with constantly emergent shared properties that benefit all who contribute to it. Does stewardship – generally defined as an ethic of responsible behaviour in regard to shared resources – have any relevance to such a domain? The first custodians of the Internet believed that it did. Even if they did not use the language of stewardship, the engineers and scientists who built and designed the Internet saw their roles very much as custodians of some larger public good.

  In discussing the stewardship of cyberspace, one must remember that it is an entirely artificial environment; that is, without humans, cyberspace would not exist. This places us all in the position of joint custodianship: we can either degrade, even destroy cyberspace, or preserve and extend it. The responsibility is intergenerational, extending to those digital natives yet to assume positions of responsibility, but also linked to those who first imagined the possibilities for what something like cyberspace could represent. Imagine if H.G. Wells were here today to see how close we are to accomplishing his vision of a world encyclopaedia, only to see it carved up by censorship, surveillance, and militarization.

  Governments, NGOs, armed forces, law enforcement and intelligence agencies, private sector companies, programmers, technologists, and average users must all play vital and interdependent roles as stewards of cyberspace. Concentrating governance of cyberspace in a single global body, whether at the UN or elsewhere, makes no sense. The only type of security that functions in an open, decentralized network is distributed security.

  Stewardship happens constantly in cyberspace, even if not described as such. When Twitter unveiled a new national tweet removal policy, it felt obligated to justify its actions in terms of larger consequences, and the larger Internet community judged it accordingly. When companies like Google post transparency reports, listing government requests on user data or notices to remove content from cyberspace, these are acts of stewardship. As people entrust more and more data to third parties, how that information is handled, with whom it is shared, and what is communicated about how that data is treated, must be based on more than corporate self-interest and market considerations. Likewise, profiting from products and services that violate human rights, or that exacerbate malicious acts in cyberspace, is unjustifiable in a context of shared information and communication resources, regardless of how profitable such products and services might be. Justifying these sales on their being in compliance with local laws, as some companies have done, is a hollow and self-serving rationalization that fails the stewardship test of maintaining a global resource.

  Generalized across the world, stewardship would moderate the dangerously escalating exercise of state power in cyberspace by defining limits and setting thresholds of accountability and mutual restraint. The alarming trend of even liberal democratic governments engaging in mass surveillance without judicial oversight contradicts the very essence of cyberspace as an open global commons. Governments have an obligation to establish the playing field, ensure that malicious acts are not tolerated within their jurisdictions, and set the highest possible standards of self-restraint vis-à-vis censorship and surveillance. Privacy commissioners and other regulatory and competition oversight bodies are critical to stewardship in cyberspace, as more and more information and responsibilities are delegated to the private sector. In an era when “national security” is so often used to justify extraordinary intrusions on individual privacy, checks and balances are essential.

  Universities have a special role to play as stewards of an open and secure cyberspace as it was from “the University” that the Internet was born, and from which its guiding principles of peer review and transparency were founded. Protected by academic freedom, equipped with advanced research resources that span the social and natural sciences, and distributed across the planet, university-based research networks could be the ultimate custodians of cyberspace.

  Finally, stewardship in this realm requires an attitudinal shift among users as to how they approach cyberspace. For most of us, it is William Gibson’s “consensual hallucination” – always on, always working, 24/7, like running water. This attitude shift will not be easy. There are considerable disincentives for average people to “lift the lid” on the technology. While we are given extraordinary powers of creativity with cyberspace, walled gardens restrict what we can actually do with it. Busting down these walls has to be at the heart of every citizen’s approach to cyberspace. We don’t all need to learn computer code, but we do need to move beyond sending emails or tweets out into the ether without understanding with whom, beyond the immediate recipient, they are shared and under what circumstances.

  We are at a crossroads. Mounting cyber threats and an escalating arms race are compelling politicians to take urgent action. In the face of these concerns, those who care about liberal democracy on a global scale must begin to articulate a compelling counter-narrative to reflexive state and corporate control over cyberspace. To be sure, distributed security and stewardship are not panaceas. They will not cease the exercise of power and competitive advantage in cyberspace. They will not bring malicious networks to their knees, or prevent cutthroat entrepreneurs from exploiting the domain. But, as a vision of ethical behaviour in cyberspace, they will raise the bar, set standards, and challenge the players to justify their acts in more than self-interested terms. Above all, they will focus collective attention on how best to sustain a common communications environment on a planetary scale in an increasingly compressed political space.

  Decisions made today could take us down a path where cyberspace continues to evolve into a global commons that empowers individuals through access to information and freedom of speech and association, or they could take us in the opposite direction. Developing models of cyber security that deal with the dark side, while preserving our highest aspirations as citizens, is our most urgent imperative.

  NOT AN EPILOGUE

  People often ask me what the inspiration was for the Citizen Lab. Admittedly, doing what we do – a kind of X-Files meets academia – is highly unusual. But it has been no accident.

  Although there have been many formative experiences along the way, one of the most important was an opportunity I had as a graduate student in the 1990s, when I was seconded to the Canadian Ministry of Foreign Affairs as a consultant for an obscure agency called the Verification Research Unit (VRU) headed by a retired Canadian Air Force colonel, Ron Cleminson. Run like a private fiefdom by the iconoclastic veteran, the VRU engaged in groundbreaking studies on arms control, particularly the often troubling question of how to verify whether parties to an arms control agreement were playing by the rules or cheating. Interested in technology and international security as a graduate student, I was contracted by Cleminson to explore how the then emerging commercial market for satellite reconnaissance technology could assist in the verification of arms control agreements.

  My VRU experience suggested the potential of revolutionary changes in information and communications technologies to have a major impact on international security. New satellites were being launched by the governments of France, Canada, and other states that only a few years prior would have been the most guarded secrets of the intelligence community, but now imagery from them was being shown to the general public and offered for sale.

  The implications of all of this hit me shortly after the Gulf War in the early 1990s. Taken aside by a member of the VRU to a locked, windowless room, I was shown highly sophisticated spy-satellite imagery of a couple of scared Iraqis frantically burying drums in the desert. Laid on the desk before me were high-resolution images taken from a KH-11 U.S. spy satellite, orbiting the earth in synchronicity with the path of the sun so that the surface illumination was nearly the same in every picture. Familiar today to viewers of movies like the Bourne series, the imagery was astoundingly sharp – a ground resolution of six centimetres – so sharp that I could clearly make out the expressions on the Iraqis’ faces. At the time, these images were highly classified, and I did not have clearance to see them.

  Looking today at my iPhone’s RunKeeper app, which tracks my jogging route down to the level of metres in real time, that moment in the VRU office seems so quaint. How soon, I wondered, given current technological trajectories, would KH-11 imagery be available to the entire world? How long could it remain in the shadows?

  While at the VRU I attended meetings, workshops, and conferences that involved fascinating applied policy work, much of it highly interdisciplinary. Nuclear, chemical, and biological engineers worked alongside policy analysts and lawyers; government officials, private sector representatives, and people from academia, all with vast but very different experiences, collaborated on international security projects. In the mid-1990s – the World Wide Web barely off the ground and cyber security on pretty much no one’s mind – I attended a conference organized by Cleminson with the prescient title “Space and Cyberspace: Prospects for Arms Control.” In attendance, an extraordinary cast: an analyst who had handed John F. Kennedy the overhead imagery from the Cuban missile crisis in 1962; a scientist working at Sandia National Labs tracking down the Aum Shinrikyo cult, the Japanese terrorist group that had dumped Sarin nerve gas in the Tokyo subway and who some suspected had purchased property in Australia to test a primitive nuclear device; a technician working on Canada’s RADARSAT satellite, whose synthetic aperture radar imaging could peer through clouds and darkness from space to resolve objects on the surface of the earth.

  A major inspiration that would later inform the Citizen Lab’s “mixed methods” approach came via my experiences researching the technical work around the Comprehensive Test Ban Treaty (CTBT) negotiations, which at that time were occurring through the venue of the United Nations Conference on Disarmament. The process involved nuclear, radiological, chemical, seismic, and imagery specialists from about a dozen countries whose mission was to provide a blueprint for a planet-wide surveillance network to verify compliance with a possible CTBT, then under negotiation. The process was highly politicized – with the United States and its allies continuously trying to stall negotiations, in my view – and by the time I dropped into the process, the scientists had been meeting for years and knew each other as close friends. Their plans for total Earth surveillance were so airtight that, as one participant joked, “if an ant farted anywhere on earth, we’d know about it.” The architecture for the CTBT verification system included a worldwide network of seismic sensors; radionuclide sniffing stations that would suck up the air and detect the slightest wisp of anything nuclear; space-based radar, optical, and infrared satellites; and even underwater hydro-acoustic sensors, to capture nuclear tests that might be conducted in the ocean’s depths. Though the CTBT has never received enough state ratifications to enter into force, the image of a worldwide network of sensors combining various technological platforms, from undersea to outer space, all meant to check and constrain cheating around nuclear testing and build confidence and security for the planet, stuck with me deeply and still influences how I think global cyber security should be implemented.
