The Future


by Al Gore


  In this context, the emergence of new and powerful forms of artificial intelligence represents not just the extension of yet another human capacity, but an extension of the dominant and uniquely human capacity to think. Though science has established that we are not the only sentient living creatures, it is nevertheless abundantly obvious that we as a species have become dominant on Earth because of our capacity to make mental models of the world around us and manipulate those models through thought to gain the power to transform our surroundings and exert dominion over the planet. The technological extension of the ability to think is therefore different in a fundamental way from any other technological extension of human capacity.

  As artificial intelligence matures and is connected with all the other technological extensions of human capacity—grasping and manipulating physical objects, recombining them into new forms, carrying them over distance, communicating with one another utilizing flows of information of far greater volume and far greater speed than any humans are capable of achieving, making their own abstract models of reality, and learning in ways that are sometimes superior to the human capacity to learn—the impact of the AI revolution will be far greater than that of any previous technological revolution.

  One of the impacts will be to further accelerate the decoupling of gains in productivity from gains in the standard of living for the middle class. In the past, improvements in economic efficiency have generally led to improvements in wages for the majority, but when the substitution of technology capital for labor eliminates very large numbers of jobs, a much larger proportion of the gains goes to those who provide the capital. The fundamental relationship between technology and employment is being transformed.

  This trend is now nearing a threshold beyond which so many jobs are lost that the level of consumer demand falls below the level necessary to sustain healthy economic growth. In a new study of the Great Depression, Joseph Stiglitz has argued that the massive loss of jobs in agriculture that accompanied the mechanization of farming led to a similar contraction of demand that was actually a much larger factor in causing the Depression than has been previously recognized—and that we may be poised for another wrenching transition with the present ongoing loss of manufacturing jobs.

  New jobs can and must be created, and one of the obvious targets for new employment is the provision of public goods in order to replace the income lost by those whose employment is being robosourced and outsourced. But elites who have benefited from the emergence of Earth Inc. have thus far effectively used their accumulated wealth and political influence to block any shift of jobs to the public sector. The good news is that even though the Internet has facilitated both outsourcing and robosourcing, it is also providing a new means to build new forms of political influence not controlled by elites. This is a major focus of the next chapter.

  * * *

  * This term was coined by Buckminster Fuller in 1973, but he used it to convey a completely different meaning.


  2

  THE GLOBAL MIND

  JUST AS THE SIMULTANEOUS OUTSOURCING AND ROBOSOURCING OF PRODUCTIVE activity has led to the emergence of Earth Inc., the simultaneous deployment of the Internet and ubiquitous computing power has created a planet-wide extension of the human nervous system that transmits information, thoughts, and feelings to and from billions of people at the speed of light.

  We are connecting to vast global data networks—and to one another—through email, text messaging, social networks, multiplayer games, and other digital forms of communication at an unprecedented pace. This revolutionary and still accelerating shift in global communication is driving a tsunami of change forcing disruptive—and creative—modifications in activities ranging from art to science and from collective political decision making to building businesses.

  Some familiar businesses are struggling to survive: newspapers, travel agencies, bookstores, music, video rental, and photography stores are among the most frequently recognized early examples of businesses confronted with a technologically driven mandate to either radically change or disappear. Some large institutions are also struggling: national postal services are hemorrhaging customers as digital communication displaces letter writing, leaving the venerable post office to serve primarily as a distribution service for advertisements and junk mail.

  At the same time, we are witnessing the explosive growth of new business models, social organizations, and patterns of behavior that would have been unimaginable before the Internet and computing: from Facebook and Twitter to Amazon and iTunes, from eBay and Google to Baidu, Yandex.ru, and Globo.com, to a dozen other businesses that have started since you began reading this sentence—all are phenomena driven by the connection of two billion people (thus far) to the Internet. In addition to people, the number of digital devices connected to other devices and machines—with no human being involved—already exceeds the population of the Earth. Studies project that by 2020, more than 50 billion devices will be connected to the Internet and exchanging information on a continuous basis. When less sophisticated devices like radio-frequency identification (RFID) tags capable of transmitting information wirelessly or transferring data to devices that read them are included, the number of “connected things” is already much larger. (Some school systems, incidentally, have begun to require students to wear identification tags equipped with RFID chips in an effort to combat truancy, generating protests from many students.)

  TECHNOLOGY AND THE “WORLD BRAIN”

  Writers have used the human nervous system as a metaphor for electronic communication since the invention of the telegraph. In 1851, only six years after Samuel Morse received the message “What hath God wrought?” Nathaniel Hawthorne wrote: “By means of electricity, the world of matter has become a great nerve vibrating thousands of miles in a breathless point of time. The round globe is a vast brain, instinct with intelligence.” Less than a century later, H. G. Wells modified Hawthorne’s metaphor when he offered a proposal to develop a “world brain”—which he described as a commonwealth of all the world’s information, accessible to all the world’s people as “a sort of mental clearinghouse for the mind: a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared.” In the way Wells used the phrase “world brain,” what began as a metaphor is now a reality. You can look it up right now on Wikipedia or search the World Wide Web on Google for some of the estimated one trillion web pages.

  Since the nervous system connects to the human brain and the brain gives rise to the mind, it was understandable that one of the twentieth century’s greatest theologians, Teilhard de Chardin, would modify Hawthorne’s metaphor yet again. In the 1950s, he envisioned the “planetization” of consciousness within a technologically enabled network of human thoughts that he termed the “Global Mind.” And while the current reality may not yet match Teilhard’s expansive meaning when he used that provocative image, some technologists believe that what is emerging may nevertheless mark the beginning of an entirely new era. To paraphrase Descartes, “It thinks; therefore it is.”*

  The supercomputers and software in use have all been designed by human beings, but as Marshall McLuhan once said, “We shape our tools, and thereafter, our tools shape us.” Since the global Internet and the billions of intelligent devices and machines connected to it—the Global Mind—represent what is arguably far and away the most powerful tool that human beings have ever used, it should not be surprising that it is beginning to reshape the way we think, in ways both trivial and profound, on a scale that is sweeping and ubiquitous.

  In the same way that multinational corporations have become far more efficient and productive by outsourcing work to other countries and robosourcing work to intelligent, interconnected machines, we as individuals are becoming far more efficient and productive by instantly connecting our thoughts to computers, servers, and databases all over the world. Just as radical changes in the
global economy have been driven by a positive feedback loop between outsourcing and robosourcing, the spread of computing power and the increasing number of people connected to the Internet are mutually reinforcing trends. Just as Earth Inc. is changing the role of human beings in the production process, the Global Mind is changing our relationship to the world of information.

  The change being driven by the wholesale adoption of the Internet as the principal means of information exchange is simultaneously disruptive and creative. The futurist Kevin Kelly says that our new technological world—infused with intelligence—more and more resembles “a very complex organism that often follows its own urges.” In this case, the large complex system includes not only the Internet and the computers, but also us.

  Consider the impact on conversations. Many of us now routinely reach for smartphones to find the answers to questions that arise at the dinner table by searching the Internet with our fingertips. Indeed, many now spend so much time on their smartphones and other mobile Internet-connected devices that oral conversation sometimes almost ceases. As Sherry Turkle, a distinguished philosopher of the Internet, recently wrote, we are spending more and more time “alone together.”

  The deeply engaging and immersive nature of online technologies has led many to ask whether their use might be addictive for some people. The Diagnostic and Statistical Manual of Mental Disorders (DSM), when it is updated in May 2013, will include “Internet Use Disorder” in its appendix for the first time, as a category targeted for further study. There are an estimated 500 million people in the world now playing online games at least one hour per day. In the United States, the average person under the age of twenty-one now spends almost as much time playing online games as they spend in classrooms from the sixth through twelfth grades. And it’s not just young people: the average online social games player is a woman in her mid-forties. An estimated 55 percent of those playing social games in the U.S.—and 60 percent in the U.K.—are women. (Worldwide, women also generate 60 percent of the comments and post 70 percent of the pictures on Facebook.)

  OF MEMORY, “MARKS,” AND THE GUTENBERG EFFECT

  Although these changes in behavior may seem trivial, the larger trend they illustrate is anything but. One of the most interesting debates among experts who study the relationship between people and the Internet is over how we may be adapting the internal organization of our brains—and the nature of consciousness—to the amount of time we are spending online.

  Human memory has always been affected by each new advance in communications technology. Psychological studies have shown that when people are asked to remember a list of facts, those told in advance that the facts will later be retrievable on the Internet are not able to remember the list as well as a control group not informed that the facts could be found online. Similar studies have shown that regular users of GPS devices begin to lose some of their innate sense of direction.

  The implication is that many of us use the Internet—and the devices, programs, and databases connected to it—as an extension of our brains. This is not a metaphor; the studies indicate that it is a literal reallocation of mental energy. In a way, it makes sense to conserve our brain capacity by storing only the meager data that will allow us to retrieve facts from an external storage device. Or at least Albert Einstein thought so, once remarking: “Never memorize what you can look up in books.”

  For half a century neuroscientists have known that specific neuronal pathways grow and proliferate when used, while the disuse of neuron “trees” leads to their shrinkage and gradual loss of efficacy. Even before those discoveries, McLuhan described the process metaphorically, writing that when we adapt to a new tool that extends a function previously performed by the mind alone, we gradually lose touch with our former capacity because a “built-in numbing apparatus” subtly anesthetizes us to accommodate the attachment of a mental prosthetic connecting our brains seamlessly to the enhanced capacity inherent in the new tool.

  In Plato’s dialogues, when the Egyptian god Theuth tells one of the kings of Egypt, Thamus, that the new communications technology of the age—writing—would allow people to remember much more than previously, the king disagrees, saying, “It will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”†

  So this dynamic is hardly new. What is profoundly different about the combination of Internet access and mobile personal computing devices is that the instantaneous connection between an individual’s brain and the digital universe is so easy that a habitual reliance on external memory (or “exomemory”) can become an extremely common behavior. The more common this behavior becomes, the greater one comes to rely on exomemory—and the less one relies on memories stored in the brain itself. What becomes more important instead are the “external marks” referred to by Thamus 2,400 years ago. Indeed, one of the new measures of practical intelligence in the twenty-first century is the ease with which someone can quickly locate relevant information on the Internet.

  Human consciousness has always been shaped by external creations. What makes human beings unique among, and dominant over, life-forms on Earth is our capacity for complex and abstract thought. Since the emergence of the neocortex in roughly its modern form around 200,000 years ago, however, the trajectory of human dominion over the Earth has been defined less by further developments in human physical evolution and more by the evolution of our relationship to the tools we have used to augment our leverage over reality.

  Scientists disagree over whether the use of complex speech by humans emerged rather suddenly with a genetic mutation or whether it developed more gradually. But whatever its origin, complex speech radically changed the ability of humans to use information in gaining mastery over their circumstances by enabling us for the first time to communicate more intricate thoughts from one person to others. It also arguably represented the first example of the storing of information outside the human brain. And for most of human history, the spoken word was the principal “information technology” used in human societies.

  The long hunter-gatherer period is associated with oral communication. The first use of written language is associated with the early stages of the Agricultural Revolution. The progressive development and use of more sophisticated tools for written language—from stone tablets to papyrus to vellum to paper, from pictograms to hieroglyphics to phonetic alphabets—is associated with the emergence of complex civilizations in Mesopotamia, Egypt, China and India, the Mediterranean, and Central America.

  The perfection by the ancient Greeks of the alphabet first devised by the Phoenicians led to a new way of thinking that explains the sudden explosion in Athens during the fifth and fourth centuries BCE of philosophical discourse, dramatic theater, and the emergence of sophisticated concepts like democracy. Compared to hieroglyphics, pictographs, and cuneiform, the abstract shapes that made up the Greek alphabet—like those that make up all modern Western alphabets—have no more inherent meaning in themselves than the ones and zeros of digital code. But when they are arranged and rearranged in different combinations, they can be assigned gestalt meanings. The internal organization of the brain necessary to adapt to this new communications tool has been associated with the distinctive difference historians find in the civilization of ancient Greece compared to all of its predecessors.

  The use of this new form of written communication led to an increased ability to store the collective wisdom of prior generations in a form that was external to the brain but nonetheless accessible. Later advances—particularly the introduction of the printing press in the fourteenth century (in Asia) and the fifteenth century (in Europe)—were also associated with a further expansion of the amount of knowledge stored externally and a further increase in the ease with which a much larger percentage of the population could gain access to it. With the introduction of print, the exponential curve that measures the complexity of human civilization suddenly bent upward at a sharply steeper angle. Our societies changed; our culture changed; our commerce changed; our politics changed.

  Prior to the emergence of what McLuhan described as the Gutenberg Galaxy, most Europeans were illiterate. Their relative powerlessness was driven by their ignorance. Most libraries consisted of a few dozen hand-copied books, sometimes chained to the desks, written in a language that for the most part only the monks could understand. Access to the knowledge contained in these libraries was effectively restricted to the ruling elites in the feudal system, which wielded power in league with the medieval church, often by force of arms. The ability conferred by the printing press to capture, replicate, and distribute en masse the collected wisdom of preceding ages touched off the plethora of advances in information sharing that led to the modern world.

  Less than two generations after Gutenberg’s press came the Voyages of Discovery. When Columbus returned from the Bahamas, eleven print editions of the account of his journey captivated Europe. Within a quarter century sailing ships had circumnavigated the globe, bringing artifacts and knowledge from North, South, and Central America, Asia, and previously unknown parts of Africa.

  In that same quarter century, the mass distribution of the Christian Bible in German and then other popular languages led to the Protestant Reformation (which was also fueled by Martin Luther’s moral outrage over the print-empowered bubble in the market for indulgences, including the exciting new derivatives product: indulgences for sins yet to be committed). Luther’s Ninety-Five Theses, nailed to the door of the church in Wittenberg in 1517, were written in Latin, but thousands of copies distributed to the public were printed in German. Within a decade, more than six million copies of various Reformation pamphlets had been printed, more than a quarter of them written by Luther himself.

 
