The New Digital Age


by Eric Schmidt and Jared Cohen

Rather than a systematic campaign to cut access (which would incur unwelcome scrutiny), the Romanian government would need only to implement these blockages randomly, frequently enough to harass the group itself but intermittently enough to allow for plausible denials. The Roma, of course, could find imperfect technological work-arounds that enabled basic connectivity, but ultimately the blockages would be sufficiently disruptive that even intermittent access couldn’t replace what was lost. Over a long enough period, a dynamic like this might settle into a kind of virtual apartheid, with multiple sets of limitations on connectivity for different groups within society.

  Electronically isolating minority groups will become increasingly prevalent in the future because states have the will to do so, and they have access to the data that enables it. Such initiatives might even start as benign programs with public support, then over time transform into more restrictive and punitive policies as power shifts in the country. Imagine, for example, if the ultra-Orthodox contingent in Israel lobbied for the creation of a white-listed “kosher Internet,” where only preapproved websites were allowed, and their bid was successful—after all, the thinking might be, creating a special Internet lane for them is not unlike forming a special “safe” list of Internet sites for children.1 Years later, if the ultra-Orthodox swept the elections and took control of the government, their first decision might be to make all of Israel’s Internet “kosher.” From that position, they would have the opportunity to restrict access even further for minority groups within Israel.
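A white-listed network of this kind is mechanically simple to enforce: a filtering gateway needs only a set of preapproved domains and a default-deny rule, which is part of what makes later tightening so easy. A minimal sketch in Python (the domain names here are invented for illustration, not taken from any real system):

```python
# Minimal sketch of default-deny, white-list filtering: any site not on
# the preapproved list is blocked. The domain names are hypothetical.
APPROVED_SITES = {
    "news.example.org",
    "education.example.org",
    "community.example.org",
}

def is_allowed(domain: str) -> bool:
    """Return True only if the domain is explicitly preapproved."""
    return domain.lower() in APPROVED_SITES

# Default-deny means an unlisted site is simply unreachable; expanding
# the restriction to a whole population requires changing nothing but
# whose traffic passes through the gateway.
```

The political point follows from the mechanics: once the gateway exists, converting a voluntary filter into a mandatory one is a policy change, not an engineering project.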

  The most worrisome result of such policies is how vulnerable these restrictions would make the targeted groups, whose lifelines could literally be cut. Limiting access could serve as a precursor to physical harassment or state violence, compromising a group’s ability to send out alert signals; it would also strip victims of their ability to document the abuse or destruction afterward. Soon it may be possible to say that what happens in a digital vacuum, in effect, doesn’t happen.

  In countries where governments are targeting minority or repressed groups in this way, an implicit or explicit arrangement between some citizens and states will emerge, whereby people trade information or obedience in exchange for better access. Where noticeable cooperation with the government is demonstrated, the state will grant those individuals faster connections, better devices, protection from online harassment or a broader range of accessible Internet sites. An artist and father of six living in Saudi Arabia’s Shiite minority community may have no desire to become an informant or sign a government pledge to stay out of political affairs, but if he calculates that such cooperation means a more reliable income for himself or better educational opportunities for his children, his resolve might well weaken. The strategy of co-opting potentially restive minority groups by playing to their incentives is as old as the modern state itself; this particular incarnation is merely suited for our digital age.

  Neither of these tactics—erasing content and limiting access—is the sole purview of states. Technically capable groups and individuals can pursue virtual discrimination independently of the government. The world’s first virtual genocide might be carried out not by a government but by a band of fanatics. Earlier, we discussed how extremist organizations will venture into destructive online activities as they develop or acquire technological skills, and it follows that some of those activities will echo the harassment described above. This goes for lone-wolf zealots, too. It’s not hard to imagine that a rabidly anti-Muslim activist with strong technical skills might go after his local Muslim community’s websites, platforms and media outlets to harass them. This is the virtual equivalent of defacing their property, breaking into their businesses and shouting at them from street corners. If the perpetrator is exceptionally skilled, he will find ways to limit the Muslims’ access by targeting certain routers to shut them down, sending out jamming signals in their neighborhoods or building computer viruses that disable their connections.

  In fact, virtual discrimination will suit some extremists better than their current options, as a former neo-Nazi leader and current anti-hate activist named Christian Picciolini told us. “Online intimidation by hate groups or extremists is more easily perpetrated because the web dehumanizes the interaction and provides a layer of anonymity and ‘virtual’ disconnection,” he explained. “Having the Internet as an impersonal buffer makes it easier for the intimidator to say certain harmful things that might not normally be said face-to-face for fear of peer judgment or persecution. Racist rhetoric rightfully carries a certain social stigma against the general population, but online, words can be said without being connected to the one saying [them].” Picciolini expects virtual harassment by hate groups to increase significantly in the coming years, since “the consequences of online discrimination seem less audacious to the offender, and therefore [harassment will] happen more frequently and to a more vehement degree.”

  In the past, physical and legal exclusion was the dominant tactic used by the powerful in conflict-prone societies, and we believe that virtual exclusion will come to join (but not surpass) that tool kit. When the conditions become unbearable, as throughout history, the sparks of conflict will ignite.

  Multidimensional Conflict

  Misinformation and propaganda have always been central features of human conflict. Julius Caesar filled his famous account of the Gallic Wars (58–50 B.C.) with titillating reports of the vicious barbarian tribes he’d fought. In the fog of competing narratives, determining the “good” and “bad” actors in a conflict is a critical yet often difficult task, and it will become even more challenging in the new digital age. In the future, marketing wars between groups will become a defining feature of conflict, because all sides will have access to electronic platforms, tools and devices that enhance their storytelling abilities for audiences at home and abroad. We saw this unfold during the November 2012 conflict between Israel and Hamas, when the terrorist organization launched a grassroots marketing war that flooded the virtual world with graphic photos of dead women and children. Hamas, which thrives on a public that is humiliated and demoralized, was able to exploit the larger number of casualties in Gaza. Israel, which focuses more on managing national morale and reducing ambiguity around its actions, countered by utilizing an @IDFSpokesperson Twitter handle, which included tweets like “Video: IDF pilots wait for area to be clear of civilians before striking target youtube.com/watch?v=G6a112wRmBs … #Gaza.” But the reality of marketing wars is that the side that is happy to glorify death and use it for propaganda will often gain more widespread sympathy, especially as a larger and less-informed audience joins the conversation. Hamas’s propaganda tactics were not new, but the growing ubiquity of platforms such as YouTube, Facebook and Twitter made it possible for them to reach a much larger and non-Arabic-speaking audience in the West, who with each tweet, like and plus-one amplified Hamas’s marketing war.

  Groups in conflict will try to destroy each other’s digital marketing capabilities before a conflict even starts. Few conflicts are clearly black-and-white at the end—let alone when they start—and this near-equivalency in communications power will greatly affect how civilians, leaders, militaries and the media deal with conflict. What’s more, the very fact that anyone can produce and share his or her version of events will actually nullify many claims; with so many conflicting accounts and without credible verification, all claims become devalued. In war, data management (compiling, indexing, ranking and verifying the content emanating from a conflict zone) will soon supplant access to technology as the predominant challenge.

  Modern communication technologies enable both the victims and the aggressors in a given conflict to cast doubt on the narrative of the other side more persuasively than with any media in history. For states, the quality of their marketing might be all that lies between staying in power and facing a foreign intervention. For civilians trapped in a town under siege by government forces, powerful amateur videos and real-time satellite mapping can counter the claims of the state and strongly suggest, if not prove, that it is lying. Yet in a situation like the 2011 violence in Côte d’Ivoire (where two sides became locked in a violent battle over contested electoral results), if both parties have equally good digital marketing, it becomes much harder to discern what is really happening. And if neither side is fully in control of its marketing (that is, if impassioned individuals outside the central command produce their own content), the level of obfuscation rises even more.

  For outsiders looking in, already difficult questions, like whom to speak with to understand a conflict, whom to support in a conflict and how best to support them, become considerably more complicated in an age of marketing wars. (This is particularly true when not many outsiders speak the local language, or in the absence of standing alliances, like those between NATO countries or the SADC countries, the Southern African Development Community.) Critical information needed to make those decisions will be buried beneath volumes of biased and conflicting content emanating from the conflict zone. States rarely intervene militarily unless it is very clear what is taking place, and even then they often hesitate for fear of the unforeseen physical consequences and the scrutiny in the twenty-four-hour news cycle.2

  Marketing wars within a conflict abroad will have domestic political implications, too. If the bulk of the American public, swayed by one side’s emotionally charged videos, concludes that intervention in a given conflict is a moral necessity, but the U.S. government’s intelligence suggests that those videos aren’t reflective of the real dynamics in the conflict, how should the administration respond? It can’t release classified material to justify its position, but neither can it effectively counter the narrative embraced by the public. If both sides present equally persuasive versions, outside actors become frozen in place, unable to take a step in any direction—which might be the exact goal of one of the parties in the conflict.

  In societies susceptible to ethnic or sectarian violence, marketing wars will typically begin long before there is a spark that ignites any actual fighting. Connectivity and virtual space, as we’ve shown, can often amplify historical and manufactured grievances, strengthening the dissonant perspectives instead of smoothing over their inaccuracies. Sectarian tensions that have lain somewhat dormant for years might reignite once people have access to an anonymous online space. We’ve seen how religious sensitivities can become inflamed almost instantaneously when controversial speech or images reach the Internet—the Danish cartoon controversy in 2005 and violent demonstrations over the Innocence of Muslims video in 2012 are just a couple of many prominent examples—and it’s inevitable that online space will create more ways for people to offend one another. The viral nature of incendiary content will not allow an offensive act in any part of the world to go unnoticed.

  Marketing is not the same thing as intelligence, of course. Early attempts at digital marketing by groups in conflict will be little more than crude propaganda and misinformation transferred to a virtual platform. But over time, as these behaviors are adopted around the world by states and individuals, the aesthetic distance between intelligence and marketing will close. States will have to be careful not to mistake one for the other. Once groups are wise to what they need to produce in order to generate a specific response, they will be able to tailor their content and messaging accordingly.

  Those with state resources will have the upper hand in any marketing war, but never the exclusive advantage. Even if the state controls many of the means of production—the cell towers, the state media, the ISPs—it will be impossible for any party to have a complete information monopoly. When all it takes to shoot, edit, upload and disseminate user-generated content is a palm-sized phone, a regime can’t totally dominate. One video captured by a shaky mobile-phone camera during the postelection protests in Iran in 2009 galvanized the opposition movement: the famous “Neda video.” Neda Agha-Soltan was a young woman living in Tehran who, while parked on a side street near an antigovernment protest, stepped out of her car to escape the heat and was shot in the heart by a government sniper on a nearby rooftop. Amazingly, the entire incident was caught on someone’s mobile phone. While members of the crowd attempted to revive Neda, others began filming her on their phones as well. The videos were passed between Iranians, mostly through peer-to-peer Bluetooth transfers between phones, since the regime had blocked mobile communications in anticipation of the protests; they found their way online and went viral. Around the world, observers were galvanized to speak out against the Iranian regime while protesters in Iran marched, calling for justice for Neda. All of this significantly ratcheted up the global attention paid to a protest movement the regime was desperately trying to stop.

  Even in the most restrictive societies, places where spyware and virtual harassment and pre-compromised mobile phones are rampant, some determined individuals will find a way to get their messages out. It might involve smuggling SIM cards, rigging mesh networks (essentially, a wireless collective in which all devices act like routers, creating multiple nodes for data traffic instead of a central hub) or distributing “invisible” phones that are designed to record no communications (perhaps by allowing all calls to be voice over IP) and that allow anonymous use of Internet services. All state efforts to curtail the spread of an in-demand technology fail; it’s merely a question of when. (This is true even for the persecuted minorities whose government tries to exclude them from the Internet.) Long before the Neda video, Iran tried to ban satellite-television dishes; the ban was met with an increase in satellite adoption among the Iranian public. Today, the illegal satellite market in Iran is among the largest per capita in the world, and even some members of the regime profit from black-market sales.
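The mesh-network idea described above—every device acting as a router, so traffic no longer depends on a central hub—can be sketched as a simple flooding relay: each node rebroadcasts a message to its neighbors once, and duplicates are dropped. The node names and topology below are invented for illustration:

```python
# Toy flooding relay over a mesh. Each node forwards a message to its
# neighbors exactly once, so delivery requires no central hub; knocking
# out any single node leaves the remaining links usable.
class MeshNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # direct radio links to other MeshNode objects
        self.seen = set()     # message IDs already received and relayed

    def link(self, other):
        """Create a bidirectional link between two nodes."""
        self.neighbors.append(other)
        other.neighbors.append(self)

    def receive(self, msg_id, payload):
        if msg_id in self.seen:      # drop duplicates to stop relay loops
            return
        self.seen.add(msg_id)
        for n in self.neighbors:     # every device acts as a router
            n.receive(msg_id, payload)

# A chain a-b-c-d: a and d share no direct link, yet a message injected
# at a still reaches d by hopping through the intermediate nodes.
a, b, c, d = (MeshNode(x) for x in "abcd")
a.link(b); b.link(c); c.link(d)
a.receive("msg-1", "gathering at noon")
```

Real mesh protocols add routing tables, acknowledgments and radio-level details, but the core resilience property is exactly this: no single point whose removal silences the network.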

  The 1994 Rwandan genocide, a high-profile conflict from the pre-digital age that claimed the lives of 800,000 people, demonstrates what a difference proportionate marketing power makes. In 1994, while Hutus, Tutsis and Twa all owned radios, only Hutus owned radio stations. With no means of amplifying their voices, Tutsis were powerless against the barrage of propaganda and hate speech building on the airwaves. When Tutsis tried to operate their own radio station, the Hutu-dominated government identified these operators, raided their offices and made arrests. If the minority Tutsi population in the years leading up to the 1994 genocide had had the powerful mobile devices we have today, perhaps a narrative of doubt could have been injected into Rwandan public discourse, so that some ordinary Hutu civilians would not have found the anti-Tutsi propaganda sufficiently compelling to lead them to take up arms against their fellow Rwandans. The Tutsis would have been able to broadcast their own content from handsets, while on the move, without having to rely on government approval or intermediaries to develop and disseminate content. During the genocide, the Hutu radio stations announced names and addresses of people who were hiding—one can only imagine what a difference an alternative communications channel, like encrypted peer-to-peer messaging, might have made.

  Despite potential gains, there will be longer-term consequences to this new level playing field, though we cannot predict what will be lost when traditional barriers are removed. Misinformation, as mentioned above, will distract and distort, leading all actors to misinterpret events or miscalculate their response. Not every brutal crime committed is part of a systematic slaughter of an ethnic or religious group, yet it can be incorrectly cast as such with minimal effort. Even in domestic settings, misinformation can present a major problem: How should a local government handle an angry mob at city hall demanding justice for a manipulated video? Governments and authorities will face questions like these repeatedly, and only some of the answers they give will be pacifying.

  The best and perhaps only reply to these challenges is digital verification. Proving that a photo has been doctored (by checking the digital watermark), or that a video has been selectively edited (by crowd-sourcing the full clip to show that parts are missing), or that a person shown to be dead is in fact alive (by tracking his online identity) will bring some veracity to a hyper-connected conflict. In the future, a witness to a militia attack in South Sudan will be able to attach digital watermarks, biometric data and satellite-positioning coordinates to lend heft to his claims, useful for when he shares the content with police or the media. Digital verification is the next obvious stage of the process. It already occurs when journalists and government officials cross-check their sources with other forms of information. It will be even easier and more reliable when computers do most of the work.
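The simplest building block of this kind of verification is integrity checking: a cryptographic hash of a clip, computed at capture time, lets any later recipient detect whether even a single byte has been altered. The watermarking and biometric steps mentioned above are more involved; the sketch below, using only Python's standard library, shows just the hashing core (the byte strings are placeholders for real media files):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of the content, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

# At capture time, the witness (or the camera itself) records a digest.
original = b"raw video bytes captured at the scene"
digest_at_capture = fingerprint(original)

# Later, a monitor re-hashes the copy it received. An untouched copy
# reproduces the digest; any edit, however small, changes it completely.
tampered = b"raw video bytes captured at the scene, selectively edited"
assert fingerprint(original) == digest_at_capture
assert fingerprint(tampered) != digest_at_capture
```

A hash alone proves only that content is unchanged since the digest was made, not that the scene it shows is genuine; that is why the text pairs it with provenance signals like satellite coordinates and cross-checking by human monitors.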

  Teams of international verification monitors could be created, dispatched to conflicts where there is a significant dispute about the digital narratives emerging. Like the Red Cross, verification monitors would be seen as neutral agents, in this case technically highly capable ones.3 (They need not be deployed to the actual conflict zone in every case—their work could sometimes be done over an Internet connection. But in conflicts where communications infrastructure is limited or overwhelmingly controlled by one side, proximity to the actors would be necessary, as would language skills and cultural knowledge.) Their stamp of approval would be a valuable commodity, a green light for media and other observers to take given content seriously. A state or warring party could bypass these monitors, but doing so would devalue whatever content was produced and make it highly suspect to others.

  The monitors would examine the data, not the deed, so their conclusions would be weighted heavily, and states might launch interventions, send aid or issue sanctions based on what they say. And, of course, with such trust and responsibility comes the inevitable capacity for abuse, since these monitors would be no less immune to the corruption that stymies other international organizations. Regimes might attempt to co-opt verification monitors, through bribes or blackmail, and some monitors might harbor personal biases that reveal themselves too late. Regardless, the bulk of these monitors would be honest engineers and journalists working together, and their presence in a conflict would lead to more safety and transparency for all parties.

 
