by Jared Cohen
In democracies in the developing world, where both democratic institutions and technology are newer, government regulation around privacy will be more haphazard. In each country, a particular incident will initially raise the issues at stake in dramatic fashion and drive public demand, much as has happened in the United States. There, a federal statute passed in 1994 prohibited state departments of motor vehicles from sharing personal information, after a series of high-profile abuses of that information, including the murder of a prominent actress by a stalker. Earlier, in 1988, following the leak of the late Judge Robert Bork’s video-rental records during his Supreme Court nomination process, Congress had passed the Video Privacy Protection Act, criminalizing disclosure of personally identifiable rental information without customer consent.6
While all of this digital chaos will be a nuisance to democratic societies, it will not destroy the democratic system. Institutions and polities will be left intact, if slightly battered. And once democracies determine the appropriate laws to regulate and control new trends, the result may even be an improvement, with a strengthened social contract and greater efficiency and transparency in society. But this will take time, because norms are not quick to change, and each democracy will move at its own pace.
Without question, the increased access to people’s lives that the data revolution brings will give some repressive autocracies a dangerous advantage in targeting their citizens.
While this is a bad outcome and one we hope will be mitigated by developments discussed elsewhere in the book, we must understand that citizens living in autocracies will have to fight even harder for their privacy and security. Rest assured, demand for tools and software to help safeguard citizens living under digital repression will give rise to a growing and aggressive industry. And that is the power of this new information revolution: For every negative, there will be a counterresponse that has the potential to be a substantial positive. More people will fight for privacy and security than look to restrict it, even in the most repressive parts of the world.
But authoritarian regimes will put up a vicious fight. They will leverage the permanence of information and their control over mobile and Internet service providers to create an environment of heightened vulnerability for their citizens. What little privacy existed before will be long gone, because the handsets that citizens have with them at all times will double as the surveillance bugs regimes have long wished they could put in people’s homes. Technological solutions will protect only a distinct technically savvy minority, and only temporarily.
Regimes will compromise devices before they are sold, giving them access to what everybody says, types and shares in public and in private. Citizens will often be oblivious to how easily they can give up their own secrets. They will accidentally provide usable intelligence on themselves—particularly if they have an active online social life—and the state will use that to draw damning conclusions about who they are and what they might be up to. State-initiated malware and human error will give regimes more intelligence on their citizens than they could ever gather through non-digital means. Networks of citizens, offered desirable incentives by the state, will inform on their fellows. And the technology already exists for regimes to commandeer the cameras on laptops, virtually invade a dissident’s home without his or her knowledge, and both listen to and watch everything that is said and done there.
Repressive governments will be able to determine who has censorship-circumvention applications on their handsets or in their homes, so even the non-dissident just trying to illegally download The Sopranos will come under increased scrutiny. States will be able to set up random checkpoints or raids to search people’s devices for encryption and proxy software, the presence of which could earn them fines, jail time or a spot on a government database of offenders. Everyone who is known to have downloaded a circumvention measure will suddenly find life more difficult—they will not be able to get a loan, rent a car or make an online purchase without some form of harassment. Government agents could go classroom to classroom at every school and university in the country, expelling all students whose mobile-phone activity indicates that they’ve downloaded such software. Penalties could extend to these students’ networks of family and friends, further discouraging that behavior for the wider population.
And, in the slightly less totalitarian autocracies, if the governments haven’t already mandated “official” government-verified profiles, they’ll certainly try to influence and control existing online identities with laws and monitoring techniques. They could pass laws that require social-networking profiles to contain certain personal information, like home address and mobile number, so that users are easier to monitor. They might build sophisticated computer algorithms that allow them to roam citizens’ public profiles looking for omissions of mandated information or the presence of inappropriate content.
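To make the mechanics of such monitoring concrete, here is a minimal sketch, in Python, of the kind of automated profile scan described above. The field names, banned terms and profile layout are hypothetical illustrations invented for this sketch, not any real platform’s schema.

```python
# Hypothetical mandated fields and flagged keywords, for illustration only.
MANDATED_FIELDS = ["full_name", "home_address", "mobile_number"]
BANNED_TERMS = ["protest", "strike"]

def scan_profile(profile):
    """Return a list of violations found in one public profile (a dict)."""
    violations = []
    # Flag omissions of mandated personal information.
    for field in MANDATED_FIELDS:
        if not profile.get(field):
            violations.append("missing:" + field)
    # Flag "inappropriate" content with a crude keyword match.
    text = (profile.get("bio") or "").lower()
    for term in BANNED_TERMS:
        if term in text:
            violations.append("banned_term:" + term)
    return violations
```

Run at scale over millions of public profiles, even a crude scan like this would let a state cheaply sort its citizens into compliant and suspect.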
States are already engaging in this type of behavior, if somewhat covertly. As the Syrian uprising dragged on into 2013, a number of Syrian opposition members and foreign aid workers reported that their laptops were infected with computer viruses. (Many hadn’t realized it until their online passwords suddenly stopped working.) Information technology (IT) specialists outside of Syria examined the machines and confirmed the presence of malware, in this case different types of Trojan horses (programs that appear legitimate but are in fact malicious). These programs stole information and passwords, recorded keystrokes, took screenshots, downloaded new programs and remotely turned on webcams and microphones, then sent all of that information back to an IP address that, according to the IT analysts, belonged to the state-owned telecom, the Syrian Telecommunications Establishment. In this case, the spyware arrived through executable files (the user had to independently open a file to download the virus), but that doesn’t mean the targeted individuals had been careless. One aid worker had downloaded a file that appeared to be a dead link (meaning it no longer worked) during an online conversation, about the humanitarian need in the country, with a person she thought was a verified opposition activist. Only after the conversation did she learn, to her chagrin, that she had probably spoken with a government impersonator who possessed stolen or coerced passwords; the real activist was in prison.
People living under these conditions will be left to fend for themselves against the tag team of their government and its corrupt corporate allies. What governments can’t build in-house, they can outsource to willing suppliers. Guilt by association will take on a new meaning with this level of monitoring. Just being in the background of a person’s photo could matter if a government’s facial-recognition software were to identify a known dissident in the picture. Being documented in the wrong place at the wrong time, whether by photo, voice or IP address, could land unwitting citizens in an unwanted spotlight. Though this scenario is profoundly unfair, we worry that it will happen all too often, and could encourage self-censoring behaviors among the rest of society.
If connectivity enhances the state’s power, enabling it to mine its citizens’ data with a fly-on-the-wall vantage point, it also constricts the state’s ability to control the news cycle. Information blackouts, propaganda and “official” histories will fail to compete with the public’s access to outside information, and cover-ups will backfire in the face of an informed and connected population. Citizens will be able to capture, share and remark upon an event before the government can decide what to say or do about it, and thanks to the ubiquity of cheap mobile devices, this grassroots power will be fairly evenly distributed throughout even large countries. In China, where the government has one of the world’s most sophisticated and far-reaching censorship systems in place, attempts to cover up news stories deemed potentially damaging to the state have been missing the mark with increasing frequency.
In July 2011, the crash of a high-speed train in Wenzhou, in southeast China, resulted in the deaths of forty people and gave weight to a widely held fear that the country’s infrastructure projects were moving too quickly for proper safety reviews. Yet the accident was downplayed by official channels, its coverage in the media actively minimized. It took tens of millions of posts on weibos, Chinese microblogs similar to Twitter, for the state to acknowledge that the crash had been the result of a design flaw and not bad weather or an electricity outage, as had previously been reported. Further, it was revealed that the government sent directives to the media shortly after the crash, specifically stating, “There must be no seeking after the causes [of the accident], rather, statements from authoritative departments must be followed. No calling into doubt, no development [of further issues], no speculation and no dissemination [of such things] on personal microblogs!” The directives also instructed journalists to maintain a feel-good tone about the story: “From now on, the Wenzhou train accident should be reported along the theme of ‘major love in the face of major disaster.’ ” But where the mainstream media fell in line, the microbloggers did not, leading to a deeply embarrassing incident for the Chinese government.
For a country like China, this mix of active citizens armed with technological devices and tight government control is exceptionally volatile. If state control relies on the perception of total command of events, every incident that undermines that perception—every misstep captured by camera phone, every lie debunked with outside information—plants seeds of doubt that encourage opposition and dissident elements in the population, and that could develop into widespread instability.
There may be only a handful of failed states in the world today, but they offer an intriguing model for how connectivity can operate in a power vacuum. Indeed, telecommunications seems to be just about the only industry that can thrive in a failed state. In Somalia, telecommunications companies have come to fill many of the gaps that decades of war and failed government have created, providing information, financial services and even electricity.
In the future, as the flood of inexpensive smart phones reaches users in failed states, citizens will find ways to do even more. Phones will help to enable the education, health care, security and commercial opportunities that the citizens’ governments cannot provide. Mobile technology will also provide much-needed intellectual, social and entertainment outlets for populations that have been psychologically traumatized by their environment. Connectivity alone cannot rescue a failed state, but it can drastically improve the situation for its citizens. As we’ll discuss later, new methods to help communities handle conflict and post-conflict challenges—developments like virtual institution building and skilled-labor databases in the diaspora—will emerge to accelerate local recovery.
In power vacuums, though, opportunists take control, and in these cases connectivity will be an equally powerful weapon in their hands. Newly connected citizens in failed states will have all the vulnerabilities of undeletable data, but none of the security that could insulate them from those risks. Warlords, extortionists, pirates and criminals will—if they’re smart enough—find ways to consolidate their own power at the expense of other people’s data. This could mean targeting specific populations, such as wealthier subclans or influential religious leaders, with more precision and virtually no accountability. If the online data (say, transfer records for a mobile money platform) showed that a particular extended family received a comparatively large sum of money from relatives in the diaspora, local thugs could stop by and demand tribute—paid, probably, over a mobile money system as well. Today’s warlords grow rich by acting as the requisite pass-through for all sorts of valuable resources, and in the future, while drugs, minerals and money will all still matter, so too will valuable personal data. Warlords of the future may not even use the data they have, instead selling it to outside parties willing to pay a premium. And, most important, these opportunists will be able to appear even more anonymous and elusive than they do today, because they’ll unfortunately have the resources and incentive to get anonymity in ways ordinary people do not.
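The data-mining step in this scenario requires nothing sophisticated. A minimal sketch, in Python, of aggregating mobile-money transfer records to find households receiving comparatively large inflows might look like the following; the record layout and threshold are invented for the illustration.

```python
from collections import defaultdict

def flag_large_recipients(transfers, threshold):
    """Given transfer records as (sender, recipient, amount) tuples,
    return recipients whose cumulative inflow exceeds the threshold."""
    totals = defaultdict(int)
    for _sender, recipient, amount in transfers:
        totals[recipient] += amount
    # Anyone whose total inflow exceeds the threshold is flagged.
    return {r: total for r, total in totals.items() if total > threshold}
```

A few lines like these, pointed at a leaked or seized transaction database, are all an extortionist would need to build a target list.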
Power vacuums, warlords and collapsed states may sound like a foreign and unrelated world to many in Silicon Valley, but this will soon change. Today, technology companies constantly underscore their focus on, and responsibility to, the virtual world’s version of citizenry. But as five billion new people come online, companies will find that the attributes of these users and their problems are much more complex than those of the first two billion. Many of the next five billion people live in impoverished, censored and unsafe conditions. As the providers of access, tools and platforms, technology companies will have to shoulder some of the physical world’s burdens as they play out online if they want to stay true to the doctrine of responsibility to all users.
Technology companies will need to exceed the expectations of their customers in both privacy and security protections. It is unsurprising that the companies responsible for the architecture of the virtual world will shoulder much of the blame for the less welcome developments in our future. Some of the anger directed toward technology firms will be justified—after all, these businesses will be profiting from expanding their networks quickly—but much will be misplaced. It is, after all, much easier to blame a single product or company for a particularly evil application of technology than to acknowledge the limitations of personal responsibility. And of course there will always be some companies that allow their desire for profit to supersede their responsibility to users, though such companies will have a harder time achieving success in the future.
In truth, some technology companies are more acutely aware than others of the responsibility they bear toward their own users and the online community around the world; this is in part why nearly all online products and services today require users to accept terms and conditions and abide by those contractual guidelines. People have a responsibility as consumers and individuals to read a company’s policies and positions on privacy and security before they willingly share information. As the proliferation of companies continues, citizens will have more options and thus due diligence will be more important than ever. A smart consumer will look not just at the quality of a product, but also at how easily that product lets them control their privacy and security. Still, in the court of public opinion and environments where the rule of law is shaky, these preexisting stipulations count for little, and we can expect more attention to be focused on the makers and purveyors of such tools in the coming decades.
This trend will certainly affect how technology companies form, grow and navigate what will undoubtedly be a tumultuous period. Subsections of the technology industry that receive particularly negative attention will have trouble recruiting engineers, attracting users and monetizing their products, even though such atrophying will not solve the problem (and will only hurt the community of users in the end, by denying them the full benefits of innovation). Thick skin will be a necessity for technology companies in the coming years of the digital age, because they will find themselves beset by public concerns over privacy, security and user protections. It simply won’t be possible to avoid these discussions, nor will companies be able to avoid taking a position on the issues.
They’ll also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest. Google encounters lawsuits from governments around the world with some frequency over alleged breaches of copyright or national laws, and it works hard to assure its users that it serves their interests first and foremost, while itself staying within the boundaries of the law. But if Google stopped all product development whenever it found itself faced with a government suit, it would never build anything.
Companies will have to learn how to manage public expectations of the possibilities and limits of their products. When formulating policies, technology companies will, like governments, increasingly have to factor in all sorts of domestic and international dynamics, such as the political risk environment, diplomatic relationships between states, and the rules that govern citizens’ lives. The central truth of the technology industry—that technology is neutral but people are not—will periodically be lost amid all the noise. But our collective progress as citizens in the digital age will hinge on our not forgetting it.
Coping Strategies
People and institutions around the world will rise to meet the new challenges they face with innovative private- and public-sector coping strategies. We can loosely group them into four categories: corporate, legal, societal and personal.
Technology corporations will have to more than live up to their privacy and security responsibilities if they want to avoid unwanted government regulation that could stifle industry dynamism. Companies are already taking proactive steps, such as offering a digital “eject button” that allows users to liberate all of their data from a given platform; adding a preferences manager; and not selling personally identifying information to third parties or advertisers. But given today’s widespread privacy and security concerns, there is still a great deal of work to be done. Perhaps a group of companies will make a pledge not to sell data to third parties, in a corporate treaty of sorts.
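As one concrete illustration of what an “eject button” might mean in practice, here is a minimal sketch, in Python, of bundling a user’s data into a portable archive they could take to a rival service. The record structure and file layout are hypothetical, not any real company’s export format.

```python
import io
import json
import zipfile

def export_user_data(user_id, records):
    """Pack all of a user's records (a dict of name -> JSON-serializable
    data) into an in-memory ZIP of JSON files, one file per record set,
    so the data can leave the platform intact."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, data in records.items():
            archive.writestr(user_id + "/" + name + ".json",
                             json.dumps(data, indent=2))
    return buffer.getvalue()
```

The design choice that matters here is the open, self-describing format: data exported as plain JSON is useful anywhere, which is precisely what makes such a feature a genuine check on lock-in.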
The second coping strategy will focus on the legal options. As the impact of the data revolution settles in, states will come under increasing pressure to protect their citizens from the permanence of what appears on the Internet and from their own newly exposed vulnerabilities. In democracies, this means new laws. They will be imperfect, overly idealistic and probably often quite rushed, but they will generally represent societies’ best attempts to react effectively to the chaotic and unpredictable changes that connectivity produces.