To my ongoing frustration, the term hacker has been corrupted and redefined, in part because of the actions of some hackers themselves. Irreverence towards authority has always been an element of the hacker spirit and ethic, and those who defined themselves as hackers would regularly find ways to step over acceptable limits, mostly for humorous ends. MIT Museum Hack archivist Brian Leibowitz notes that in the 1960s students on campus began to use the word as a noun to describe a great prank, and by the late 1960s the meaning included activities that “tested limits of skill, imagination, and wits.” By the mid-1980s, the term was primarily being used at MIT to describe “pranks” and “unapproved exploring” of parts of the Institute or inaccessible places on campus.
Over time hacking came to connote a wide range of often extreme methods and ends. Steven Levy, author of Hackers, points out that “the word now has two branches, one used among computer programmers and the one used in the media.” But few self-identified hackers remain faithful to the original spirit and ethic that first attracted people like Oxblood Ruffin and me; and, worse, today “hacker” and “hacking” are almost entirely synonymous with criminal acts, one or the other word invariably emblazoned in headlines each time Anonymous strikes or a data breach occurs. That is, “computer hacking” is used unquestioningly to describe anyone who breaks the law or causes a ruckus in cyberspace.
The association of the term with criminality is not just a semantic issue; it represents a much larger delegitimization of the underlying philosophy of experimentation at the heart of the hacker ethic. And herein lies an enormously important paradox, one that sits at the heart of our technologically saturated world: we have created a communications environment that is utterly dependent on existing (and emerging) technologies, and yet, at the same time, we are actively discouraging experimentation with, and an understanding of, these technologies. Never before in human history have we been so constantly plugged in and utterly connected. We are immersed in cyberspace, surrounded by technical systems embedded in just about everything we do, and to an ever-increasing extent they govern what we do, or, more accurately, what we can and cannot do. In this context, the numerous and increasingly severe restrictions on what we are allowed to do with and within cyberspace are alarming.
• • •
The appearance of free, value-neutral, wonderful experimentation persists: plug in and play, copy and paste, upload and post. We have iPhones that record high-definition video (and software that allows us to edit it into slick movies), and online services like YouTube that allow us to show the world the fruits of our imagination. But the experimentation that is encouraged actually operates on these shallow planes. On deeper, more fundamental levels, it is strictly controlled.
Those controls have their roots in multiple, reinforcing causes. The growing popularity of what Jonathan Zittrain, in his book The Future of the Internet, called “tethered” devices is one of them: tethered, that is, because the devices remain connected to their manufacturers long after they leave the showroom or store, and because they are built in such a way that no one but the manufacturers can change their internal workings. Because the impetus behind tethered devices comes from users seeking protection from security threats, manufacturers seeking greater control over markets, and regulators looking to secure cyberspace as a whole, the momentum behind this direction is powerful and mutually reinforcing. Together, these forces risk gradually strangling the original conditions that nurtured innovation and the ethic of experimentation that gave rise to the Internet in the first place.
One of the more perverse examples of this dynamic is the way attempts to control cyberspace threaten security research, contributing to greater insecurities through the chilling effects associated with stringent copyright protections, such as those around the Digital Millennium Copyright Act (DMCA) and its equivalents. The Electronic Frontier Foundation has documented numerous examples of security research being stifled, and of researchers veering away from contentious areas of investigation for fear of being held liable for breaching computer crime or copyright laws. In 2002, Secure Network Operations (SNOsoft) released a paper demonstrating security vulnerabilities in Hewlett-Packard’s Tru64 Unix operating system. The company threatened SNOsoft with DMCA litigation. “After widespread press attention, HP ultimately withdrew the DMCA threat,” noted the EFF. “Security researchers got the message, however – publish vulnerability research at your own risk.” In 2003, publisher Wiley & Sons commissioned security researcher Andrew Huang to write a book about security flaws in Microsoft’s Xbox that he had discovered as part of his Ph.D. research, but then dropped it out of concern that the book could be treated as a “circumvention device,” and thus in violation of the DMCA. The EFF has also written a critique of a draft Directive on Attacks Against Information Systems, a computer crime law currently being debated by the European Parliament, which the EFF says “threatens to create legal woes for researchers who expose security flaws.” The EFF points to Article 3 of the draft directive, which makes it illegal to access information systems without authorization. In these and other cases like them, the legal risks around possible violations of intellectual property perversely stifle the research that is essential to securing the very foundation upon which those innovations rest.
Technology comes shrink-wrapped today, with stiff punishments for those caught trying to unwrap it. Nearly every software application, every tool downloaded, every app installed, and every DVD viewed is preceded by an end-user licence agreement that lists one prohibition after another. There is something profoundly disturbing about a culture in which, in order to use technology, individuals must first click “I agree” to lengthy stipulations that restrict our communications behaviour, to say nothing of our native curiosity.
Never before have we had such a grand illusion of freedom through technology, when, in fact, that very freedom and technology are constrained by ever-expanding state laws and corporate regulations. This is not how it should be, or was meant to be. In an era when so much power is exercised beneath the surface of our technical systems, often deliberately hidden from scrutiny and shrouded in layers of deliberate obfuscation, a healthy curiosity about those systems is being actively discouraged. In a vital liberal democracy, citizens should be trained at an early age not only to use technology but also to understand it, to experiment with it, explore its hidden recesses, and shed light on those dimensions of the digital world where unchecked and unaccountable power resides. If by “hacking” we mean a healthy curiosity about technology, we need more, not fewer, hackers. Indeed, if experimentation around the technology of cyberspace were encouraged, not only would there be fewer unexposed vulnerabilities that create insecurity, there would not be a need for a reactionary phenomenon like Anonymous in the first place.
15.
Towards Distributed Security and Stewardship in Cyberspace
The whole human memory can be, and probably in a short time will be, made accessible to every individual … It need not be concentrated in any one single place. It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba.
—H.G. Wells, “World Brain: The Idea of a Permanent Encyclopedia,” 1937
Over half of the world’s 7 billion people now share a single complex information and communications system – cyberspace – that functions, and functions very well, despite no grand blueprint or central point of control. Born as an experimental research network in universities, what used to be called the “Internet” has mushroomed, more by accident than design, to become the information and communications operating system for planet Earth. A mixed common-pool resource that cuts across political jurisdictions and the public and private sectors, cyberspace has become, as Marshall McLuhan foresaw, “our central nervous system in a global embrace.”
This unprecedented global network produces a remarkable stream of innovations and social goods. Deep wells of knowledge, translated into multiple languages, are now instantly accessible to people around the world. H.G. Wells’s description of a world encyclopaedia, written less than eighty years ago, is no longer the stuff of science fiction. Geolocational coordinates down to the level of centimetres are now available in the palm of a hand; instantaneous information sharing – “crowd-sourced” among connected individuals – holds out the potential of revolutionizing everything from election monitoring to disaster relief to predicting disease outbreaks; historic documents can be instantly translated into multiple languages, dramatically expanding the global pool of knowledge. And yet, as sweet as the fruits of cyberspace are, some of them are poisonous. Malicious software that pries open and exposes insecure computing systems is developing at a rate beyond the capacities of cyber security agencies even to count, let alone mitigate. Data breaches of governments, private sector companies, NGOs, and others are now an almost daily occurrence, and systems that control critical infrastructure – electrical grids, nuclear power plants, water treatment facilities – have been demonstrably compromised.
These unfortunate by-products of an open, dynamic network are exacerbated by increasing assertions of state power. Insecurity, competition, and mounting pressures to deal with breaches, malware, and the other dark sides of cyberspace are driving such government interventions. Internet censorship at the national level, once thought to be impossible, is now the global norm, and governments race to develop cyber security strategies, including offensive cyber warfare capabilities. The 2012 leaks that provided details on U.S. and Israeli computer network operations that sabotaged Iranian nuclear enrichment facilities took few by surprise, as many had suspected those governments’ hands in the Stuxnet virus in the first place. What was surprising was the calculated admission itself, the first instance of a government acknowledging – or at least not denying responsibility for – an attack on critical infrastructure through cyberspace. Indeed, Stuxnet did cross the Rubicon.
Other countries are seeking advantage from the cyber criminal underground, stirring a hornet’s nest of data theft and espionage from which they derive strategic intelligence and security benefits. Added to this dangerous brew is a mushrooming commercial market for offensive cyber attack capabilities. The global cyber arms trade now includes malicious viruses, zero-day exploits, and massive botnets. An arms race in cyberspace has been unleashed, and for every U.S. Cyber Command there is now a Syrian or Iranian cyber army equivalent. For every “Internet Freedom in a Suitcase,” there is a justification put forward for greater cyberspace regulations and controls. We find ourselves in a situation where there are enormous profits to be made in developing capabilities to deny access to knowledge, prevent networks from functioning, or subvert them entirely. Fibre-optic surveillance and cyberspace disruption are now big business.
H.G. Wells was only half right: we have indeed created a kind of “world brain” – the problem is that it is an aggressive, insecure, and all-too-human one, and increasingly less the beautiful thing he imagined.
• • •
Faced with mounting problems and pressures to do something, too many policy-makers are tempted by extreme solutions. The Internet’s de facto distributed regime of governance – largely informal and driven up to 2000 by decisions made by mostly like-minded engineers – has come under massive stress as a function of the Internet’s rapid growth and insecurity. Proposals being debated in liberal democratic countries now include censoring the Internet in response to copyright violations; giving secretive signals intelligence agencies responsibility for securing cyberspace; loosening or eliminating judicial oversight around data sharing with law enforcement; and delegating Internet policing to the private sector. All are illustrations of a movement towards clampdown. These policies are antithetical to the principles of liberal democratic government and to the system of checks and balances and public accountability upon which it rests, and yet they are being put in place. They also legitimize the growing desire of autocratic and authoritarian regimes to subject cyberspace to territorialized controls, and the censorship and surveillance practices that go along with them. By our actions in the West, we contribute to this trend abroad. We preach about the need for closed autocratic societies to “open up,” or, as Ronald Reagan famously thundered, to “tear down this wall,” and yet vis-à-vis cyberspace we are contributing to state censorship and surveillance. Although states were once thought to be powerless in the face of the Internet, the giants have awoken from their slumber.
Left unchecked, these trends will result in the gradual disintegration of what is in the long-term interest of all citizens: an open and secure commons of information on a planetary scale. We stand at a crossroads, and there are several paths we can travel down. Fifty years from now, future historians may look back and say, “You know, there was that brief window in the 1990s and 2000s, when citizens came close to building that planetary library and global public sphere, and then let it slip from their grasp.” The social forces leading us down the path of control and surveillance are formidable, and sometimes even appear inevitable. But nothing is ever inevitable. The future has yet to be written. We face other extraordinary challenges, like climate change and global environmental degradation, issues that also appear at times to be so large and intractable, so requiring fundamental change (from us) as to be hopeless. In fact, the two spheres are intimately connected: we live in an increasingly compressed and interconnected political space on planet Earth, and to solve these problems, it is imperative that we have an open, shared, and equally accessible medium of global communications. We need more than ever to encourage, rather than stifle, the free flow of knowledge and the exchange of ideas, and cyberspace has provided us with that opportunity.
To protect planet Earth, we need to protect the Net.
As with environmental challenges, the solutions to problems vexing cyberspace are going to require approaches at multiple levels – local, national, and global. The articulation of an alternative vision of security, one that doesn’t throw the baby out with the bath water, one that protects and preserves cyberspace as a dynamic and open and yet secure ecosystem, is urgently required. At the heart of this vision must be the elaboration of the proper rights, roles, and responsibilities for all who share in and sustain cyberspace, and it means ensuring that those rights, roles, and responsibilities are implemented and enforced. It is important to recall that cyberspace belongs to everybody and nobody in particular, that it is what we make of it, and that it requires constant tending.
• • •
Surely, one thinks, the challenges of an unprecedented planetary network of communications, of something so complex as global cyberspace, require a special cyber theory of some sort, something that rises to the scope and scale of this all-encompassing domain? Maybe. But maybe not. Instead, perhaps what is required is simply the application of some timeless principles and traditions.
There is an instinctive tendency in security-related discussions to default to realpolitik or Realism (the theory that world politics is driven by competitive self-interest) with its state-centrism, top-down hierarchical controls, and the erecting of defensive perimeters against outside threats. In the creation of cyber commands, in spiralling arms races among governments, in “kill switches” on national Internets, and in the rise of the world’s most secretive agencies into positions of authority over cyberspace, we see this tradition at play. As compelling as it may be, however, Realism and its institutional manifestations fit awkwardly in a world where divisions between inside and outside are blurred, where threats can emerge as easily from within as without, and where that which requires securing – cyberspace – is, ideally, a globally networked commons of information almost entirely in the hands of its users.
What is needed is an alternative cyber security strategy rooted in liberal democratic principles that takes account of the growing need for civic networks to share knowledge and to communicate. For many who would characterize themselves as part of global civil society, “security” is seen as anathema. In today’s world of exaggerated threats and self-serving hyperbole from the computer security industry, it is easy to dismiss security as something to be resisted, rather than engaged. Securitization is generally associated with the defence industry, Pentagon strategists, and so forth, and many question whether employing the language of security only plays into the cyber-security military-industrial complex and the exercise of control. But the vulnerabilities of cyberspace are real, the underbelly of cyber crime is undeniably huge and growing, an arms race in cyberspace is escalating, and major governments are poised to set rules of the road that may impose top-down solutions that subvert the domain as we know it. Dismissing these concerns as manufactured myths propagated by power elites will only marginalize civic networks from the conversations where policies are being forged.
Instead, civic networks need to be at the forefront of security solutions that preserve cyberspace as an open commons of information, and that protect privacy and support freedom of speech, while at the same time addressing the growing vulnerabilities that have produced a massive explosion in cyber crime. Can security and openness be reconciled? Aren’t the two contradictory?
• • •
Not at all.
Black Code: Inside the Battle for Cyberspace Page 23