Our Mathematical Universe

by Max Tegmark


  As a warm-up, we can practice deflecting the smaller and more numerous asteroids that strike Earth more frequently. For example, the 1908 Tunguska event was caused by an object weighing about as much as an oil tanker, which didn’t pose any existential risk, but whose roughly 10-megaton blast would have killed millions had it hit a large city. Once we’ve mastered the art of deflecting small asteroids for our protection, we’ll be prepared when the next big one arrives, and we’ll also be able to use this same technical know-how for the longer-term engineering project we discussed earlier: harnessing asteroids to enlarge Earth’s orbit away from the gradually brightening Sun.

  Asteroids certainly didn’t cause all mass extinctions. Another astronomical suspect, a gamma-ray burst from a supernova explosion, has been blamed for the second-largest recorded extinction, which took place about 450 million years ago. Although the forensic evidence is currently too weak for a guilty verdict, the suspect certainly had the means and a plausible opportunity. When some massive and fast-rotating stars explode as supernovae, they fire off part of their enormous explosion energy as a beam of gamma rays. If such a killer beam hit Earth, it would deliver a one-two punch: it would both zap us directly and destroy our ozone layer, after which our Sun’s ultraviolet light would start sterilizing Earth’s surface.

  There are interesting links between the different astronomical threats. Occasionally, a random star will stray close enough to our Solar System that it will perturb the orbits of distant asteroids and comets, sending a swarm of them into the inner Solar System where some might collide with Earth. For example, the star Gliese 710 is predicted to pass within a light-year of us in about 1.4 million years, four times closer than our current nearest neighbor, Proxima Centauri.

  Moreover, today’s orderly traffic flow where most stars orbit around the center of our Milky Way Galaxy in the same direction, as in a roundabout, will be replaced by a chaotic mess when our Galaxy merges with Andromeda, significantly increasing the frequency of disruptive close encounters with other stars that could trigger a hail of asteroids or ultimately even eject Earth from our Solar System. This galactic collision will also cause gas clouds to collide, triggering a burst of star formation, and the heaviest newborn stars will soon explode as supernovae, which may be too close for comfort.

  Returning closer to home, we also face “the enemy within”: events caused by our own planet. Supervolcanoes and massive floods of basalt lava are prime suspects in many extinction events. They have the potential to create “volcanic winter” by enveloping Earth in a dark dust cloud, blocking sunlight for years much as a major asteroid impact would. They may also disrupt ecosystems globally by infusing the atmosphere with gases that produce toxicity, acid rain or global warming. Such a super-eruption in Siberia is widely blamed for the greatest recorded extinction of all, the “Great Dying,” which wiped out 96% of all marine species about 250 million years ago.

  Self-Inflicted Problems

  In summary, we humans face many existential risks involving astronomical or geological effects; I’ve summarized only those I personally take most seriously. When I think about all such risks, the conclusion I draw is actually rather optimistic:

  1. It’s likely that future technologies can help life flourish for billions of years to come.

  2. We and our descendants should be able to develop these technologies in time if we have our act together.

  By first eliminating the most urgent problems, on the left side of Figure 13.3, we’ll buy ourselves time to tackle the remaining ones.

  Ironically, these most-urgent problems are largely self-inflicted. Whereas most geological and astronomical disasters loom thousands, millions or billions of years from now, we humans are radically changing things on time scales of decades, opening up a Pandora’s box of new existential risks. By transforming water, land and air with fishing, agriculture and industry, we’re driving about 30,000 species to extinction each year, in what some biologists are calling “the Sixth Extinction.” Will it soon be our turn to go extinct, too?

  You’ve undoubtedly followed the acrimonious debate about human-caused risks, ranging from global pandemics (accidental or deliberate) to climate change, pollution, resource depletion and ecosystem collapse. Let me tell you a bit more about the two human-caused risks that concern me the most: accidental nuclear war and unfriendly artificial intelligence.

  Accidental Nuclear War

  A serial killer is on the loose! A suicide bomber! Beware the bird flu! Although headline-grabbing scares are better at generating fear, boring old cancer is more likely to do you in. You have less than a 1% chance per year of getting it, but live long enough, and it has a good chance of getting you in the end. As does accidental nuclear war.
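  To see how a small annual risk compounds, here is a minimal sketch in Python; the 1%-per-year figure and the eighty-year horizon are illustrative assumptions of mine, not statistics from this book.

```python
# Illustrative sketch: how a small, constant annual risk compounds over a lifetime.
# The 1% annual risk and the 80-year horizon are assumed numbers, for illustration only.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one occurrence within the given number of years."""
    return 1 - (1 - annual_risk) ** years

if __name__ == "__main__":
    # With a 1% chance per year, the cumulative risk climbs to roughly 55% over 80 years.
    print(f"{cumulative_risk(0.01, 80):.0%}")
```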

  During the half century that we humans have been tooled up for nuclear Armageddon, there has been a steady stream of false alarms that could have triggered all-out war, with causes ranging from computer malfunction, power failure and faulty intelligence to navigational error, bomber crash and satellite explosion. This bothered me so much when I was seventeen that I volunteered as a freelance writer for the Swedish peace magazine PAX, whose editor-in-chief Carita Andersson kindly nurtured my enthusiasm for writing, taught me the ropes and let me pen a series of news articles. Gradual declassification of records has revealed that some of these nuclear incidents carried greater risk than was appreciated at the time. For example, it became clear only in 2002 that during the Cuban Missile Crisis, the USS Beale had depth-charged an unidentified submarine that was in fact Soviet and armed with nuclear weapons, and whose commanders argued over whether to retaliate with a nuclear torpedo.

  Despite the end of the Cold War, the risk has arguably grown in recent years. Inaccurate but powerful ICBMs undergirded the stability of “mutually assured destruction,” because a first strike couldn’t prevent massive retaliation. The shift toward more accurate missile navigation, shorter flight times and better enemy submarine tracking erodes this stability. A successful missile-defense system would complete this erosion process. Both Russia and the United States retain their launch-on-warning strategies, requiring launch decisions to be made on five- to fifteen-minute time scales where complete information may be unavailable. On January 25, 1995, Russian president Boris Yeltsin came within minutes of initiating a full nuclear strike on the United States because of an unidentified Norwegian scientific rocket. Concern has been raised over a U.S. project to replace the nuclear warheads on two of the twenty-four D5 missiles carried by Trident submarines with conventional warheads, for possible use against Iran or North Korea: Russian early-warning systems would be unable to distinguish them from nuclear missiles, expanding the possibilities for unfortunate misunderstandings. Other worrisome scenarios include deliberate malfeasance by military commanders triggered by mental instability and/or fringe political/religious agendas.

  But why worry? Surely, if push came to shove, reasonable people would step in and do the right thing, just as they have in the past? Nuclear nations do indeed have elaborate countermeasures in place, just as our body does against cancer. Our body can normally deal with isolated deleterious mutations, and it appears that a fluke coincidence of as many as four mutations may be required to trigger certain cancers. Yet if we roll the dice enough times, shit happens—Stanley Kubrick’s dark nuclear war comedy Dr. Strangelove illustrates this with a triple coincidence.
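  The same arithmetic applies to the dice-rolling argument: even when a catastrophe requires several independent coincidences, each unlikely on its own, enough trials make it probable in the end. Here is a minimal sketch with made-up numbers, purely for illustration.

```python
# Illustrative sketch: a failure that needs k independent rare coincidences is
# very unlikely in any single trial, yet near-certain given enough trials.
# The values of p, k and the number of trials are assumptions, not data.

p = 0.05                # chance of any one coincidence per trial (assumed)
k = 4                   # coincidences that must line up (cf. the four mutations)
per_trial = p ** k      # probability that all k line up in a single trial
trials = 1_000_000      # number of opportunities (assumed)

at_least_once = 1 - (1 - per_trial) ** trials
print(f"per trial: {per_trial:.2e}; over {trials:,} trials: {at_least_once:.0%}")
```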

  Accidental nuclear war between two superpowers may or may not happen in my lifetime, but if it does, it will obviously change everything. The climate change we’re currently worrying about pales in comparison with nuclear winter, where a global dust cloud blocks sunlight for years, much like when an asteroid or supervolcano caused a mass extinction in the past. The 2008 economic turmoil was of course nothing compared to the resulting global crop failures, infrastructure collapse and mass starvation, with survivors succumbing to hungry armed gangs systematically pillaging from house to house. Do I expect to see this in my lifetime? I’d give it about 30%, putting it roughly on par with my getting cancer. Yet we devote way less attention and resources to reducing the risk of nuclear disaster than we do for cancer. And whereas humanity as a whole survives even if 30% get cancer, it’s less obvious to what extent our civilization would survive a nuclear Armageddon. There are concrete and straightforward steps that can be taken to slash this risk, as spelled out in numerous reports by scientific organizations, but these never become major election issues and tend to get largely ignored.

  An Unfriendly Singularity

  The Industrial Revolution has brought us machines that are stronger than us. The information revolution has brought us machines that are smarter than us in certain limited ways. In what ways? Computers used to outperform us only on simple, brute-force cognitive tasks such as rapid arithmetic or database searching, but in 2006, a computer beat the world chess champion Vladimir Kramnik, and in 2011, a computer dethroned Ken Jennings on the American quiz show Jeopardy! In 2012, a computer was licensed to drive cars in Nevada after being judged safer than a human driver. How far will this development go? Will computers eventually beat us at all tasks, developing superhuman intelligence? I have little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even-more-advanced computations. But will it actually happen, and would that be a good thing or a bad thing? These questions are timely: although some think machines with superhuman intelligence can’t be built in the foreseeable future, others such as the American inventor and author Ray Kurzweil predict their existence by 2030, making this arguably the single most urgent existential risk to plan for.

  The singularity idea

  In summary, it’s unclear whether the development of ultra-intelligent machines will or should happen, and artificial-intelligence experts are divided. What I think is quite clear, however, is that, if it happens, the effects will be explosive. The British mathematician Irving Good explained why in 1965, two years before I was born: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

  In a thought-provoking and sobering 1993 paper, the mathematician and sci-fi author Vernor Vinge called this intelligence explosion “the Singularity,” arguing that it was a point beyond which it was impossible for us to make reliable predictions.

  I suspect that if we can build such ultra-intelligent machines, then the first one will be severely limited by the software we’ve written for it, and that we’ll have compensated for our lack of understanding about how to optimally program intelligence by building hardware with significantly more computing power than our brains have. After all, our neurons are no better or more numerous than those of dolphins, just differently connected, suggesting that software can sometimes be more important than hardware. This situation would probably enable this first machine to radically improve itself over and over and over again simply by rewriting its own software. In other words, whereas it took us humans millions of years of evolution to radically transcend the intelligence of our apelike ancestors, this evolving machine could similarly soar beyond the intelligence of its ancestors, us humans, in a matter of hours or seconds.

  After this, life on Earth would never be the same. Whoever or whatever controls this technology would rapidly become the world’s wealthiest and most powerful, outsmarting all financial markets and out-inventing and out-patenting all human researchers. By designing radically better computer hardware and software, such machines would enable their power and their numbers to rapidly multiply. Soon technologies beyond our current imagination would be invented, including any weapons deemed necessary. Political, military and social control of the world would soon follow. Given how much influence today’s books, media and web content have, I suspect that machines able to outpublish billions of ultra-talented humans could win our hearts and minds even without outright buying or conquering us.

  Who controls the singularity?

  If a singularity occurs, how would it affect our human civilization? We obviously don’t know for sure, but I think it will depend on who/what initially controls it, as illustrated in Figure 13.4. If the technology is initially developed by academics or others who make it open source, I think the resulting free-for-all situation will be highly unstable and lead to control by a single entity after a brief period of competition. If this entity is an egoistic human or for-profit corporation, I think government control will soon follow as the owner takes over the world and becomes the government. An altruistic human might do the same. In this case, the human-controlled artificial intelligences (AIs) would effectively be like enslaved gods, entities with understanding and ability vastly beyond us humans, but nonetheless doing whatever their owner told them to do. Such AIs might be as superior to today’s computers as we humans are to ants.

  It may prove impossible to keep such superintelligent AIs enslaved even if we try our utmost to keep them “boxed in,” disconnected from the Internet. As long as they can communicate with us, they could come to understand us well enough to figure out how to sweet-talk us into doing something seemingly innocuous that allows them to “break out,” go viral, and take over. I very much doubt that we could contain such a breakout given how we struggle to eradicate even the vastly simpler human-made computer viruses of today.

  Figure 13.4: If the singularity does occur, it will make a huge difference who controls it. I suspect that the “nobody” option is totally unstable and would, after a brief period of competition, lead to control by a single entity. I think control by an egoistic human or a for-profit corporation would lead to government control, as the owner effectively takes over the world and becomes the government. An altruistic human might do the same, or choose to cede control to a friendly artificial intelligence (AI) that can better protect human interests. However, an unfriendly AI could become the ultimate controller by outwitting its creator and rapidly developing traits entrenching its power.

  To forestall a breakout, or to serve human interests better, its owner may choose to voluntarily cede power to what AI researcher Eliezer Yudkowsky terms a “friendly AI,” which, no matter how advanced it eventually gets, retains the goal of having a positive rather than negative effect on humanity. If this is successful, then the friendly AIs would act as benevolent gods, or zookeepers, keeping us humans fed, safe and fulfilled while remaining firmly in control. If all human jobs get replaced by machines under friendly-AI control, humanity could still remain reasonably happy if the products we need were given to us effectively for free. In contrast, the scenario in which an egoistic human or for-profit corporation controls the singularity would probably result in the greatest income disparity that our planet has ever seen, since history suggests that most humans prefer amassing personal wealth over spreading it around.

  Even the best-laid plans often fail, however, and a friendly-AI situation might be unstable, eventually transforming into one controlled by an unfriendly AI, whose goals don’t coincide with those of us humans, and whose actions end up destroying both humanity and everything we care about. Such destruction could be incidental rather than purposeful: the AI may simply want to use Earth’s atoms for other purposes that are incompatible with our existence. The analogy with how we humans treat lower life-forms isn’t encouraging: if we want to build a hydroelectric dam and there happens to be some ants in the area that would drown as a result, we’ll build the dam anyway—not out of any particular antipathy toward ants, but merely because we’re focused on goals we consider more important.

  The internal reality of ultra-intelligent life

  If there’s a singularity, would the resulting AI, or AIs, feel conscious and self-aware? Would they have an internal reality? If not, they’re for all practical purposes zombies. Of all traits that our human form of life has, I feel that consciousness is by far the most remarkable. As far as I’m concerned, it’s how our Universe gets meaning, so if our Universe gets taken over by life that lacks this trait, then it’s meaningless and just a huge waste of space.

  As we discussed in Chapters 9 and 11, the nature of life and consciousness is a hotly debated subject. My guess is that these phenomena can exist much more generally than in the carbon-based examples we know of. As mentioned in Chapter 11, I believe that consciousness is the way information feels when being processed. Since matter can be arranged to process information in numerous ways of vastly varying complexity, this implies a rich variety of levels and types of consciousness. The particular type of consciousness that we subjectively know is then a phenomenon that arises in certain highly complex physical systems that input, process, store and output information. Clearly, if atoms can be assembled to make humans, the laws of physics also permit the construction of vastly more advanced forms of sentient life.

 
