Why the West Rules—for Now


by Ian Morris


  Most Panglossian of all is what the journalist James Mann calls the “Soothing Scenario,” a claim that come what may, prosperity will Westernize the East. Asking whether the West still rules will then be a meaningless question, because the whole world will have become Western. “Trade freely with China,” George W. Bush urged in 1999, “and time is on our side.”

  The only way to flourish in the modern global economy, this argument runs, is to be liberal and democratic—that is, more like the Western core. Japan, Taiwan, South Korea, and Singapore all moved from one-party toward somewhat democratic rule as they grew rich in the late twentieth century, and if the Chinese Communist Party can embrace capitalism, perhaps it can embrace democracy too. Those regions most involved in global trade may already be doing so. In Guangdong and Fujian provinces, for instance, many local officials are nowadays directly elected. National politics certainly remains authoritarian, but the rulers in Beijing have become markedly more responsive to public concerns over natural disasters, public health crises, and corruption.

  Many Westerners who have spent time in the East, though, are less impressed with the idea that the East will become culturally Westernized at the very moment that it achieves the power to dominate the globe. Americans, after all, did not start acting more like Europeans after they displaced Europe as the dominant region in the Western core; rather, Europeans began complaining about the Americanization of their own culture.

  China’s urban elites did find plenty to like in Western culture when they entered the American-dominated global economy in the 1980s. They dropped the Mao suit, opened English schools, and even (briefly) sipped lattes at a Starbucks in the Forbidden City. The overpriced bars in Beijing’s Back Lakes district are as full of hyperactive twenty-somethings checking stock quotes on their cell phones as those in New York or London. The question, though, is whether Westernization will continue if power and wealth carry on draining across the Pacific.

  The journalist Martin Jacques suggests not. We are already, he argues, seeing the rise of what he calls “contested modernities” as Easterners and South Asians adapt the industrialism, capitalism, and liberalism invented in the nineteenth-century Western core to their own needs. In the first half of the twenty-first century, Jacques speculates, Western rule will give way to a fragmented global order, with multiple currency zones (dollar-, euro-, and renminbi-denominated) and spheres of economic/military influence (an American sphere in Europe, southwest Asia, and perhaps South Asia, and a Chinese sphere in East Asia and Africa), each dominated by its own cultural traditions (Euro-American, Confucian, and so on). But in the second half of the century, he predicts, numbers will tell; China will rule and the world will be Easternized.

  Extrapolating from how China has used its power since the 1990s, Jacques argues that the Sinocentric world of the late twenty-first century will be quite different from the Western world of the nineteenth and twentieth centuries. It will be even more hierarchical, with the old Chinese idea that foreigners should approach the Middle Kingdom as tribute-bearing supplicants replacing Western theories about the nominal equality of states and institutions. It will also be illiberal, dropping the West’s rhetoric about universal human values; and statist, brooking no opposition to the powers of political rulers. All over the world, people will forget the glories of the Euro-American past. They will learn Mandarin, not English, celebrate Zheng He, not Columbus, read Confucius instead of Plato, and marvel at Chinese Renaissance men such as Shen Kuo rather than Italians such as Leonardo.

  Some strategists think Chinese global rule will follow Confucian traditions of peaceful statecraft and be less militarily aggressive than the West’s; others disagree. Chinese history gives no clear guidance. There have certainly been Chinese leaders who opposed war as a policy tool (particularly among the gentry and bureaucracy), but there have been plenty of others who readily used force, including the first few emperors of virtually every dynasty except the Song. Those international-relations theorists who style themselves “realists” generally argue that China’s caution since the Korean War owes more to weakness than to Confucius. Beijing’s military spending has increased more than 16 percent each year since 2006 and is on target to match America’s in the 2020s. Depending on the decisions future leaders make, the East’s rise to global rule in the twenty-first century may be even bloodier than the West’s in the nineteenth and twentieth.
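  As a rough illustration of the extrapolation behind that last claim, the sketch below simply compounds a defense budget at 16 percent a year. Only the growth rate comes from the text; the 2006 baseline figures are hypothetical round numbers chosen to show the arithmetic, not estimates from the book.

```python
# A rough compound-growth sketch of the "match America's in the 2020s" claim.
# Only the 16 percent growth rate comes from the text; the 2006 baseline
# budgets below are hypothetical placeholders, not figures from the book.

def years_to_catch_up(start, target, growth_rate, target_growth=0.0):
    """Years until `start`, growing at `growth_rate` per year, reaches
    `target`, which grows at `target_growth` per year."""
    years = 0
    while start < target:
        start *= 1 + growth_rate
        target *= 1 + target_growth
        years += 1
    return years

china_2006 = 60e9   # hypothetical baseline, US dollars per year
usa_2006 = 550e9    # hypothetical baseline, US dollars per year

print(years_to_catch_up(china_2006, usa_2006, 0.16))        # 15 -> the early 2020s
print(years_to_catch_up(china_2006, usa_2006, 0.16, 0.02))  # 18, if US spending also grows 2%/yr
```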

  So there we have it. Maybe great men and women will come to America’s aid, preserving Western rule for a few generations more; maybe bungling idiots will interrupt China’s rise for a while. Maybe the East will be Westernized, or maybe the West will be Easternized. Maybe we will all come together in a global village, or maybe we will dissolve into a clash of civilizations. Maybe everyone will end up richer, or maybe we will incinerate ourselves in a Third World War.

  This mess of contradictory prognoses evokes nothing so much as the story I mentioned in Chapter 4 of the blind men and the elephant, each imagining he was touching something entirely different. The only way to explain why the West rules, I suggested at that point in the book, was by using the index of social development to cast a little light on the scene. I now want to suggest that the same approach can help us see what the elephant will look like a hundred years from now.

  2103

  So let us look again at Figure 12.1, particularly at the point where the Eastern and Western lines meet in 2103. The vertical axis shows that by then social development will stand at more than five thousand points.

  This is an astonishing number. In the fourteen thousand years between the end of the Ice Age and 2000 CE, social development rose nine hundred points. In the next hundred years, says Figure 12.1, it will rise four thousand points more. Nine hundred points took us from the cave paintings of Altamira to the atomic age; where will another four thousand take us? That, it seems to me, is the real question. We cannot understand what will come after Chimerica unless we first understand what the world will look like at five thousand points.

  In an interview in 2000 the economist Jeremy Rifkin suggested, “Our way of life is likely to be more fundamentally transformed in the next several decades than in the previous thousand years.” That sounds extreme, but if Figure 12.1 really does show the shape of the future, Rifkin’s projection is in fact a serious understatement. Between 2000 and 2050, according to the graph, social development will rise twice as much as in the previous fifteen thousand years; and by 2103 it will have doubled again. What a mockery this makes of history!
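  To see just how steep that curve is, here is a back-of-the-envelope sketch of the trajectory the prose describes, using only the round numbers quoted above (roughly nine hundred points by 2000 CE, a rise twice that size by 2050, and a doubling again by 2103); the figures are the book's, the arithmetic is merely made explicit.

```python
# Back-of-the-envelope reconstruction of the trajectory described in the text,
# using only the round numbers quoted in the prose.

score_2000 = 900                 # roughly 900 points accumulated by 2000 CE
rise_2000_2050 = 2 * score_2000  # "twice as much as in the previous fifteen thousand years"
score_2050 = score_2000 + rise_2000_2050   # ~2,700 points
score_2103 = 2 * score_2050                # "doubled again": ~5,400 points

print(score_2050, score_2103)    # 2700 5400, i.e., "more than five thousand points"

# Average pace, in points per century:
print(score_2000 / 140)                  # ~6.4 points per century since the end of the Ice Age
print((score_2103 - score_2000) / 1.03)  # ~4,370 points per century between 2000 and 2103
```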

  This is where all the prognostications that I discussed in the previous section fall down. All extrapolate from the present into the near future, and all—unsurprisingly—conclude that the future will look much like the present, but with a richer China. If we instead bring the whole weight of history to bear on the question—that is, if we talk to the Ghost of Christmas Past—we are forced to recognize just how unprecedented the coming surge in social development is going to be.

  The implications of development scores of five thousand points are staggering. If, for the sake of argument, we assume that the four traits of energy capture, urbanization, information technology, and war-making capacity will each account for roughly the same proportions of the total social development score in 2103 as they did in 2000,* then a century from now there will be cities of 140 million people (imagine Tokyo, Mexico City, New York, São Paulo, Mumbai, Delhi, and Shanghai rolled into one) in which the average person consumes 1.3 million kilocalories of energy per day.
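  The sketch below makes that proportional-scaling assumption explicit: if each trait keeps the share of the index it held in 2000, then each trait's score, and the physical quantities behind it, simply grows in step with the overall index. The baseline values used here for the largest city and for per-person energy capture are approximations consistent with the numbers in the text, not exact figures from the book's tables.

```python
# If each trait keeps its 2000 share of the index, its score scales with the index.
# The baseline values are approximations consistent with the text, not book data.

index_2000 = 900     # overall social development score around 2000 CE
index_2103 = 5000    # "more than five thousand points"
scale = index_2103 / index_2000          # roughly 5.6x

largest_city_2000 = 26.5e6   # greater Tokyo around 2000, approximate population
energy_2000 = 230_000        # kcal per person per day in the Western core, approximate

print(f"largest city: ~{largest_city_2000 * scale / 1e6:.0f} million people")   # ~147 million
print(f"energy capture: ~{energy_2000 * scale / 1e6:.2f} million kcal/day")     # ~1.28 million
```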

  A fivefold increase in war-making capacity is even harder to visualize. We already have enough weapons to destroy the world several times over, and rather than simply multiplying nuclear warheads, bombs, and guns, the twenty-first century will probably see technologies that make twentieth-century weapons as obsolete as the machine gun made the musket. Something like “Star Wars,” the anti-ballistic-missile shield that American scientists have been working on since the 1980s, will surely become a reality. Robots will do our fighting. Cyberwarfare will become all-important. Nanotechnology will turn everyday materials into impenetrable armor or murderous weapons. And each new form of offense will call forth equally sophisticated defenses.

  Most mind-boggling of all, though, are the changes in information technology implied by Figure 12.1. The twentieth century took us from crude radios and telephones to the Internet; it is not so far-fetched to suggest that the twenty-first will give everyone in the developed cores instant access to and total recall of all the information in the world, their brains networked like—or into—a giant computer, with calculating power trillions of times greater than the sum of all brains and machines in our own time.

  All these things, of course, sound impossible. Cities of 140 million people surely could not function. There is not enough oil, coal, gas, and uranium in the world to supply billions of people with 1.3 million kilocalories per day. Nano-, cyber-, and robot wars would annihilate us all. And merging our minds with machines—well, we would cease to be human.

  And that, I think, is the most important and troubling implication of Figure 12.1.

  I have made two general claims in this book. The first was that biology, sociology, and geography jointly explain the history of social development, with biology driving development up, sociology shaping how development rises (or doesn’t), and geography deciding where development rises (or falls) fastest; and the second was that while geography determines where social development rises or falls, social development also determines what geography means. I now want to extend these arguments. In the twenty-first century social development promises—or threatens—to rise so high that it will change what biology and sociology mean too. We are approaching the greatest discontinuity in history.

  The inventor and futurist Ray Kurzweil calls this the Singularity—“a future period during which the pace of technological change will be so rapid, its impact so deep … that technology appears to be expanding at infinite speed.” One of the foundations of his argument is Moore’s Law, the famous observation made by the engineer (and future chairman of Intel) Gordon Moore in 1965 that with every passing year the miniaturization of computer chips roughly doubled their speed and halved their cost. Forty years ago gigantic mainframes typically performed a few hundred thousand calculations per second and cost several million dollars, but the little thousand-dollar laptop I am now tapping away on can handle a couple of billion per second—a ten-million-fold improvement in price-performance, or a doubling every eighteen months, much as Moore predicted.
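  The doubling-time arithmetic behind that comparison is easy to check; the sketch below derives it from the two figures given in the text, a ten-million-fold improvement in price-performance over roughly forty years.

```python
import math

# Doubling time implied by a ten-million-fold price-performance gain over ~40 years.
improvement = 1e7   # ten-million-fold, as in the text
years = 40

doublings = math.log2(improvement)              # ~23.3 doublings
months_per_doubling = years * 12 / doublings    # ~21 months
print(f"{doublings:.1f} doublings, one every ~{months_per_doubling:.0f} months")
# Roughly one doubling every 21 months, close to the eighteen-month figure quoted above.
```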

  If this trend continues, says Kurzweil, by about 2030 computers will be powerful enough to run programs reproducing the 10,000 trillion electrical signals that flash every second among the 22 billion neurons inside a human skull. They will also have the memory to store the 10 trillion recollections that a typical brain houses. By that date scanning technology will be accurate enough to map the human brain neuron by neuron—meaning, say the technology boosters, that we will be able to upload actual human minds onto machines. By about 2045, Kurzweil thinks, computers will be able to host all the minds in the world, effectively merging carbon- and silicon-based intelligence into a single global consciousness. This will be the Singularity. We will transcend biology, evolving into a new, merged being as far ahead of Homo sapiens as a contemporary human is of the individual cells that merge to create his/her body.
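  Kurzweil's date follows from the same doubling logic. The sketch below runs the crudest version of that extrapolation, taking the couple of billion operations per second of the laptop mentioned above as a baseline and the 10,000 trillion signals per second quoted here as the target; both the baseline and the fixed eighteen-month doubling period are illustrative assumptions. A constant doubling time lands in the late 2030s; Kurzweil arrives closer to 2030 because he argues that the doubling time itself keeps shrinking.

```python
import math

# When does a machine that doubles in power every 18 months reach the brain-scale
# throughput quoted above? Baseline and doubling period are illustrative assumptions.

baseline_ops = 2e9     # a couple of billion operations per second (the laptop above), ~2005
target_ops = 1e16      # "10,000 trillion electrical signals ... every second"
doubling_years = 1.5   # a fixed eighteen-month doubling, for illustration

doublings_needed = math.log2(target_ops / baseline_ops)   # ~22.3
years_needed = doublings_needed * doubling_years          # ~33 years
print(f"~{years_needed:.0f} years after 2005, i.e., around {2005 + round(years_needed)}")
```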

  Kurzweil’s enthusiastic vision provokes as much mockery as admiration (“the Rapture for Nerds,” some call it), and the odds are that—like all prophets before him—he will be wrong much more often than he is right. But one of the things Kurzweil is surely correct about is that what he calls “criticism from incredulity,” simple disbelief that anything so peculiar could happen, is no counterargument. As the Nobel Prize–winning chemist Richard Smalley likes to say, “When a scientist says something is possible, they’re probably underestimating how long it will take. But if they say it’s impossible, they’re probably wrong.” Humans are already taking baby steps toward some sort of Singularity, and governments and militaries are taking the prospect of a Singularity seriously enough to start planning for it.

  We can, perhaps, already see what some of these baby steps have wrought. I pointed out in Chapter 10 that the industrial revolution set off even bigger changes in what it means to be human than the agricultural revolution had done. Across much of the world, better diets now allow humans to live twice as long as and grow six inches taller than their great-great-grandparents. Few women now spend more than a small part of their lives bearing and rearing babies, and compared with any earlier age, few babies now die in infancy. In the richest countries doctors seem able to perform miracles—they can keep us looking young (in 2008, five million Botox procedures were performed in the United States), control our moods (one in ten Americans has used Prozac), and stiffen everything from cartilage to erections (in 2005 American doctors wrote 17 million prescriptions for Viagra, Cialis, and Levitra). The aging emperors of antiquity, I suspect, would have thought these little purple pills quite as wonderful as anything in Kurzweil’s Singularity.

  Twenty-first-century genetic research promises to transform humanity even more, correcting copying errors in our cells and growing new organs when the ones we were born with let us down. Some scientists think we are approaching “partial immortalization”: like Abraham Lincoln’s famous ax (which had its handle replaced three times and its blade twice), each part of us might be renewed while we ourselves carry on indefinitely.

  And why stop at just fixing what is broken? You may remember the 1970s television series The Six Million Dollar Man, which began with a pilot named Steve Austin (played by Lee Majors) losing an arm, an eye, and both legs in a plane crash. “We can rebuild him—we have the technology,” says the voiceover, and Austin quickly reappears as a bionic man who outruns cars, has a Geiger counter in his arm and a zoom lens in his eye, and eventually a bionic girlfriend (Lindsay Wagner) too.

  Thirty years on, athletes have already gone bionic. When the golfer Tiger Woods needed eye surgery in 2005, he upgraded himself to better-than-perfect 20/15 vision, and in 2008 the International Association of Athletics Federations even temporarily banned the sprinter Oscar Pistorius from the Olympics because his artificial legs seemed to give him an edge over runners hobbled by having real legs.*

  By the 2020s middle-aged folks in the developed cores might see farther, run faster, and look better than they did as youngsters. But they will still not be as eagle-eyed, swift, and beautiful as the next generation. Genetic testing already gives parents opportunities to abort fetuses predisposed to undesirable shortcomings, and as we get better at switching specific genes on and off, so-called designer babies engineered for traits that parents like may become an option. Why take chances on nature’s genetic lottery, ask some, if a little tinkering can give you the baby you want?

  Because, answer others, eugenics—whether driven by racist maniacs like Hitler or by consumer choice—is immoral. It may also be dangerous: biologists like to say that “evolution is smarter than you,” and we may one day pay a price for trying to outwit nature by culling our herd of traits such as stupidity, ugliness, obesity, and laziness. All this talk of transcending biology, critics charge, is merely playing at being God—to which Craig Venter, one of the first scientists to sequence the human genome, reportedly replies: “We’re not playing.”

  Controversy continues, but I suspect that our age, like so many before it, will in the end get the thought it needs. Ten thousand years ago some people may have worried that domesticated wheat and sheep were unnatural; two hundred years ago some certainly felt that way about steam engines. Those who mastered their qualms flourished; those who did not, did not. Trying to outlaw therapeutic cloning, beauty for all, and longer life spans does not sound very workable, and banning the military uses of tinkering with nature sounds even less so.

  The United States Defense Advanced Research Projects Agency (DARPA) is one of the biggest funders of research into modifying humans. It was DARPA that brought us the Internet (then called the Arpanet) in the 1970s, and its Brain Interface Project is now looking at molecular-scale computers, built from enzymes and DNA molecules rather than silicon, that could be implanted in soldiers’ heads. The first molecular computers were unveiled in 2002, and by 2004 better versions were helping to fight cancer. DARPA, however, hopes that more advanced models will give soldiers some of the advantages of machines by speeding up their synaptic links, adding memory, and even providing wireless Internet access. In a similar vein, DARPA’s Silent Talk project is working on implants that will decode preverbal electrical signals within the brain and send them over the Internet so troops can communicate without radios or e-mail. One National Science Foundation report suggests that such “network-enabled telepathy” will become a reality in the 2020s.

  The final component of Kurzweil’s Singularity, computers that can reproduce the workings of biological brains, is moving even faster. In April 2007 IBM researchers turned a Blue Gene/L supercomputer into a massively parallel cortical simulator that could run a program imitating a mouse’s brain functions. The program was only half as complex as a real mouse brain, and ran at only one-tenth of rodent speed, but by November of that year the same lab had already upgraded to mimicking bigger, more complex rat brains.

  Half a slowed-down rat is a long way from a whole full-speed human, and the lab team in fact estimated that a human simulation would require a computer four hundred times as powerful, which with 2007 technology would have had unmanageable energy, cooling, and space requirements. Already in 2008, however, the costs were falling sharply, and IBM predicted that the Blue Gene/Q supercomputer, which should be up and running in 2011, would get at least a quarter of the way there. The even more ambitious Project Kittyhawk, linking thousands of Blue Genes, should move closer still in the 2020s.
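  Stringing the quoted factors together gives a sense of the gap: the 2007 run handled half of a mouse brain at a tenth of real speed, so a full mouse cortex in real time would need roughly twenty times the 2007 machine, the team's own estimate for a human simulation was four hundred times, and Blue Gene/Q's “at least a quarter of the way there” would be about a hundred times. A minimal sketch of that arithmetic:

```python
# Stringing together the factors quoted in the text.
complexity_fraction = 0.5   # "only half as complex as a real mouse brain"
speed_fraction = 0.1        # "only one-tenth of rodent speed"

# A full mouse cortex in real time would need the 2007 machine scaled up by:
full_speed_mouse_factor = 1 / (complexity_fraction * speed_fraction)   # 20x

# The team's own estimate for a human simulation, relative to the 2007 machine:
human_factor = 400

# "At least a quarter of the way there" for Blue Gene/Q would then mean roughly:
blue_gene_q_factor = human_factor / 4   # ~100x

print(full_speed_mouse_factor, human_factor, blue_gene_q_factor)   # 20.0 400 100.0
```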

 
