The Future

by Al Gore

  Like the journalism essential to its flourishing, democracy itself is now stuck in this odd and dangerous transition era that falls between the waning age of the printing press and the still nascent maturation of effective democratic discourse on the Internet. Reformers and advocates of the public interest are connecting with one another in ever larger numbers over the Internet and are searching with ever greater intensity for ways to break through the quasi-hypnotic spell cast over the mass television audience—day after day, night after night—by constant, seductive, expensive, and richly produced television programming.

  Virtually all of this programming is punctuated many times each hour by slick and appealing corporate messages designed to sell their products and by corporate issue advertising designed to shape the political agenda. During election years, especially in the United States, television viewers are also deluged with political advertisements from candidates who—again because of the economics of the television medium—are under constant and unrelenting pressure from wealthy and powerful donors to adopt the donors’ political agendas—agendas that are, unsurprisingly, congruent with those contained in the corporate issue advertising.

  Public goods—such as education, health care, environmental protection, public safety, and self-governance—have not yet benefited from the new efficiencies of the digital age to the same extent as have private goods. The power of the profit motive has been more effective at driving the exploitation of new opportunities in the digital universe. By contrast, the ability of publics to insist upon the adoption of new, more efficient, digital models for the delivery of public goods has been severely hampered by the sclerosis of democratic systems during this transition period when digital democracy has yet to take hold.

  EDUCATION AND HEALTH CARE IN A NEW WORLD

  The crisis in public education is a case in point. Our civilization has barely begun the necessary process of adapting schools to the tectonic shift in our relationship to the world of knowledge. Education is still too frequently based on memorizing significant facts. Yet in a world where all facts are constantly at our fingertips, we can afford to spend more time teaching the skills necessary not only to learn facts but also to learn the connections among them, evaluate the quality of information, discern larger patterns, and focus on the deeper meaning inherent in those patterns. Students accustomed to the rich and immersive experience of television, video games, and social media frequently find the experience of sitting at desks staring at chalk on a blackboard to be the least compelling and engaging part of their day.

  There is clearly a great potential for the development of a new curriculum, with tablet-based e-books and search-based, immersive, experiential, and collaborative online courses. E. O. Wilson’s new, enhanced digital textbook Life on Earth is a terrific example of what the future may hold. In higher education, a new generation of high-quality ventures has emerged—including Coursera, Udacity, Minerva, and edX—that is already beginning to revolutionize and globalize world-class university-level instruction. Most of the courses are open to all, for free!

  The hemorrhaging of government revenues at the local, state, and national levels—caused in part by the lower wages and persistent high unemployment associated with outsourcing and robosourcing in Earth Inc., and by declining property values in the wake of the global economic crisis triggered in part by computer-generated subprime mortgages—is leading to sharp declines in budgets for public education at the very time when reforms are most needed. In addition, the aging of populations in developed countries and the declining percentage of parents of school-aged children have diminished the political clout wielded by advocates for increasing these budgets.

  Even though public funding for education has been declining, many creative teachers and principals have found ways to adapt educational materials and routines to the digital age. The Khan Academy is a particularly exciting and innovative breakthrough that is helping many students. Nevertheless, in education as in journalism, no enduring model has yet emerged with enough appeal to replace the aging and decaying model that is now failing to meet necessary standards. And some online, for-profit ventures—like the University of Phoenix and Argosy University Online—appear to have taken advantage of the hunger for college-level instruction on the Internet without meeting their responsibility to the students who are paying them. One online college, Trinity Southern University, gave an online degree in business administration to a cat named Colby Nolan that happened to belong to a deputy attorney general. The school was later prosecuted and shut down.

  Health care, like education, is struggling to adapt to the new opportunities inherent in the digital universe. Crisis intervention, payment for procedures, and ridiculously expensive record keeping required by insurance companies and other service providers still dominate the delivery of health care. We have not yet exploited the new ability of smartphones and purpose-built digital health monitors to track health trends in each individual and enable timely, cost-effective interventions to prevent the emergence of the chronic disease states that account for most medical problems.

  More sophisticated information-based strategies utilizing genomic and proteomic data for each individual could also clearly improve health outcomes dramatically at much lower cost. Epidemiological strategies—such as the monitoring of aggregate Internet searches for flu symptoms—are beginning to improve the allocation and deployment of public health resources. While interesting experiments have begun in these and other areas, however, there has as yet been no effectively focused public pressure or sustained political initiative to implement a comprehensive new Internet-empowered health care strategy. Some insurance companies have begun to use data mining techniques to scour social media and databases aggregated by marketing companies in order to better assess the risk of selling life insurance to particular individuals. At least two U.S. insurance companies have found the approach so fruitful that they even waive medical exams for customers whose data profiles classify them as low-risk.

  THE SECURITY CONUNDRUM

  With all of the exciting potential for the Internet to improve our lives, why have the results been so mixed thus far? Perhaps because of human nature, it is common for us to overemphasize the positive impacts of any important new technology when it is introduced and first used. It is also common, unfortunately, for us to give short shrift to the risks of new technologies and underestimate unintended side effects.

  History teaches, of course, that any tool—the mighty Internet included—can and will be used for both good and ill. While the Internet may be changing the way we organize our thinking, and while it is changing the way we organize our relationships with one another, it certainly does not change basic human nature. And thus the age-old struggle between order and chaos—and dare I say good and evil—will play out in new ways.

  More than four centuries ago, when the explosion of information created by the printing press was just beginning, the legend of Doctor Faust first appeared. Some historians claim that Faust was based on Gutenberg’s financier and business partner, Johann Fust, who was charged with witchcraft in France because of the seemingly magical process by which thousands of copies of the same text could be replicated perfectly.

  In the Faust legend, which has appeared in varying forms over the centuries, the protagonist makes a deal with the devil in which he exchanges his soul for “unlimited knowledge and worldly pleasures.” Ever since then, as the scientific and technological revolution accelerated, many new breakthroughs, like nuclear power and stem cell technology, among others, have frequently been described as “Faustian bargains.” It is a literary shorthand for the price of power—a price that is often not fully comprehended at the beginning of the bargain.

  In our time, when we adapt our thinking processes to use the Internet (and the devices and databases connected to it) as an extension of our own minds, we enter into a kind of “cyber-Faustian bargain”—in which we gain the “unlimited knowledge and worldly pleasures” of the Internet. Unless we improve privacy and security safeguards, however, we may be risking values more precious than worldly wealth.

  For individuals, the benefits of this bargain—vastly increased power to access and process information anywhere and anytime, a greatly increased capacity to communicate and collaborate with others—are incredibly compelling. But the price we pay in return for these incalculable benefits is a significant loss of control over the security and privacy of the thoughts and information that we send into this extended nervous system. Two new phrases that have crept into our lexicon—“the death of distance” and “the disappearance of privacy”—are intimately connected, each to the other. Most who use the Internet are tracked by many websites that then sell the information. Private emails can be read by the government without a warrant, without permission, and without notification. And hacking has become easy and widespread.

  The same cyber-Faustian bargain has been made by corporations and governments. Like individuals, they are just beginning to recognize the magnitude of the cybersecurity price that apparently has to be paid on an ongoing basis. And to be clear, virtually no one argues with the gains in efficiency, power, productivity, and convenience that accompany this revolutionary change in the architecture of the information economy. What is not yet clear is how the world can resolve—or at least manage—the massive new threats to security and privacy that accompany this shift.

  Internet and software companies are themselves also making the same bargain, with a historic and massive shift from software, databases, and services located within computers themselves to “the cloud”—which means, essentially, using the Internet and the remote servers and databases connected to it as extensions of the memory, software, and processing power that used to be primarily contained within each computer. The growing reliance on the cloud creates new potential choke points that may have implications for both data security and reliability of service. In late 2012, several popular Internet companies in the U.S. that rely on Amazon.com’s cloud services were all knocked out of commission when problems shut down Amazon’s data centers in Virginia.

  The world’s historic shift onto the Internet confronts us with a set of dilemmas that are inherent in the creation of a planet-wide nervous system connecting all of us to the global brain. Some of these dilemmas have arisen because digital information is now recognized—and valued—as the key strategic resource in the twenty-first century.

  Unlike land, iron ore, oil, or money, information is a resource that you can sell or give away and yet still have. The value of information often expands with the number of people who share it, but the commercial value can often be lost when its initial owner loses exclusivity. The essence of patent and copyright law has been to resolve that tension and promote the greatest good for the greatest number, consistent with principles of justice and fairness. The inventor of a new algorithm or the discoverer of a new principle of electromagnetism deserves to be rewarded—partly to provide incentives for others to chase similar breakthroughs—but society as a whole also deserves to benefit from the widespread application of such new discoveries.

  This inherent tension has been heightened by the world’s shift onto the Internet. Longtime technology thought leader Stewart Brand is often quoted as having said in the early years of the Internet, “Information wants to be free.” But what he actually said was, “On the one hand, information wants to be expensive, because it’s so valuable,” adding, “On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

  Because digital information has become so strategic in the operations of Earth Inc., we are witnessing a global, multipronged struggle over the future of the Internet, with battlefronts scattered throughout the overlapping worlds of politics and power, commerce and industry, art and culture, science and technology:

  • Between those who want information to be free and others who want to control it and exchange it for wealth or power;

  • Between those who want people to be free and those who want to control their lives;

  • Between individuals who share private information freely on social networks and others who use that information in unanticipated and sometimes harmful ways;

  • Between Internet-based companies who indiscriminately collect vast amounts of information about their customers and customers who value their privacy;

  • Between legacy centers of power that occupied privileged positions in the old order of information now breaking down and new centers of inchoate power seeking their own place in the new pattern struggling to emerge;

  • Between activists (and “hacktivists”) who value transparency and nations and corporations that value secrecy;

  • Between corporations whose business models depend upon the ability to protect intellectual property contained in computers connected to the Internet and competitors who seek to steal that intellectual property by using other computers also connected to the Internet;

  • Between cybercriminals intent on exploiting rich new targets in the flows of wealth and information on the Internet and law enforcement organizations whose strategy for stopping cybercrime sometimes threatens to destroy historic and hard-won boundaries between the spheres occupied by individuals and the episodic desire by their governments to invade those private spheres.

  The complexity of the world’s transition to the Internet is even more fraught because all of these conflicts are occurring simultaneously on the same common Internet that everyone shares. And, not surprisingly, proposed remedies for problems in one set of conflicts frequently enhance the potential for disrupting efforts to resolve problems in other sets of conflicts.

  Proposals to require measures that eliminate anonymity on the Internet in order to protect cybersecurity and fight cybercrime pose a deadly threat to the ability of dissidents in authoritarian countries to propose reforms and connect with others seeking change in their governments. By the same token, the dream of reformers that the global Internet will inevitably drive global change in the direction of more freedom for individuals, regardless of where they live, strikes fear in the hearts of authoritarian rulers.

  Even in free countries, activists who expose information that governments have tried to keep secret often trigger intrusive new government measures to expand the information they collect about citizens. When the WikiLeaks organization, run by an Australian and operating on servers based in Sweden, Iceland, and possibly other locations, publicized information stolen from the U.S. government, the subsequent crackdown enraged other hacktivists, who then broke into numerous other government and corporate websites around the world.

  Because the Internet crosses national boundaries, it diminishes the ability of nation-states to manage such conflicts through laws and regulations that reflect the values in each nation (or at least the values of the governments in power). Independent groups of hacktivists have been able to break into sites controlled by the FBI, CIA, the U.S. Senate, the Pentagon, the International Monetary Fund, the official website of the Vatican, Interpol, 10 Downing Street in London, the British Ministry of Justice, and NASA (even breaking into the software of the space station while it was orbiting the Earth). When the FBI organized a secure conference call to discuss how to respond to such attacks with Scotland Yard, hackers recorded the call and put it on the web. The inmates have clearly taken over a large part of the Internet asylum when Nurse Ratched’s private conversations about security are broadcast for all to hear.

  The extreme difficulty in protecting cybersecurity was vividly demonstrated when RSA, the security division of EMC, whose technology is used by the National Security Agency, the Central Intelligence Agency, the Pentagon, the White House, the Department of Homeland Security, and many leading defense contractors, was penetrated by a cyberattack believed to have originated in China. RSA’s security system was considered the state of the art in protecting computers connected to the Internet—which, of course, is why it was used by the organizations with the greatest need for protecting their digital data. It remains undisclosed how much sensitive information was stolen, but this attack was a sobering wake-up call.

  In 2010, U.S. secretary of defense Robert Gates labeled cyberspace as the “fifth domain” for potential military conflict—alongside land, sea, air, and space. In 2012, Rear Admiral Samuel Cox, the director of intelligence at the U.S. Cyber Command (established in 2009), said that we are now witnessing “a global cyber arms race.” Other experts have noted that at this stage in the development of cybersecurity technology, offense has the advantage over defense.

  Securing the secrecy of important communications has always been a struggle. It was first mentioned by “the father of history,” Herodotus, in his description of the “secret writing” that he said was responsible for the Greek naval victory at the Battle of Salamis, which prevented ancient Greece’s conquest by Persia. A Greek living in Persia, Demaratus, witnessed the preparations for what the leader of Persia, Xerxes, intended as a surprise invasion and sent an elaborately hidden warning to Sparta. In another episode recounted by Herodotus, a Greek leader shaved his messenger’s head, wrote what he wished to convey on the messenger’s scalp, and then “waited for the hair to regrow.” From the use of “invisible ink” in the Middle Ages to Nazi Germany’s use of the Enigma machine during World War II, cryptography in its various forms has often been recognized as crucial to the survival of nations.

 
