Earth in Human Hands

by David Grinspoon


  Another topic was the possibility of establishing a common language. Here again McNeill played the skeptic. He did not share the scientists’ confidence that we would be able to communicate with ET. The SETI scientists held a strong conviction that, whatever immense dissimilarities we may have with our alien cousins, we will understand the same math and physics and could use both to develop the basis of a common language. Marvin Minsky, reflecting his faith in the convergence of mathematical minds, and giving a nod to the early ’70s “generation gap,” offered that “It is probably easier to communicate with a Jovian scientist than with an American teenager.” McNeill was dubious, feeling that although our math and physics may seem universal to scientists, they may well be social constructs, peculiar to our own society, and therefore completely incomprehensible to aliens. The scientists answered that he simply did not understand mathematics and physics well enough to grasp why they must be universal. Near the end of the meeting, McNeill stated,

  I must say that in listening to the discussion these last days, I feel I detect what might be called a pseudo or scientific religion. I do not mean this as a condemnatory phrase. Faith and hope and trust have been very important factors in human life and it is not wrong to cling to these and pursue such faith. But I remain, I fear, an agnostic, not only in traditional religion but also in this new one.

  Some of the scientists could take only so much of these challenges. They weren’t really there to talk with “windbags” about whether SETI made sense. They wanted to talk about how to go about doing it. At one point, impatient with the digressions into such obscure questions, Freeman Dyson got up and declared, “To hell with philosophy. I came here to learn about observations and instruments and I hope we shall soon begin to discuss these concrete questions.”

  The Inevitable Expansion Fallacy

  There is a consensus narrative, based on an extrapolated interpretation of human history, that ascribes properties of aggressive growth and relentless territorial expansion to “superior” alien cultures. At Byurakan II, the discussions on the nature of advanced civilizations were centered on Kardashev’s three types. This classification system had been given a big boost when Shklovsky and Sagan adopted it in their best-selling classic Intelligent Life in the Universe. Sagan had decided, based on energy use, that human civilization in the 1970s stood at about 0.7—not quite a type I, where we would have mastery and control over our entire planet. What a difference that 0.3 makes: such a small quantity, separating our current conflicted, confused state from our perceived destiny as wise, confident planetary masters. Represented like this, it seems such a minor deficit. Only 0.3? We’re almost there. Sigh.
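
  For readers who want the arithmetic behind that 0.7, here is a minimal sketch in Python of Sagan’s continuous interpolation of the Kardashev scale, K = (log10 P - 6) / 10 with P in watts. The formula is Sagan’s; the ~1e13 W figure for 1970s humanity is an approximate value assumed here, not a number from the text.

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts, so that a type I
    uses ~1e16 W, a type II ~1e26 W, and a type III ~1e36 W."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's power use in the early 1970s was roughly 1e13 W (10 TW),
# which is where the figure of ~0.7 comes from.
print(f"1970s Earth (~1e13 W): type {kardashev(1e13):.1f}")     # 0.7
print(f"Type I threshold (1e16 W): type {kardashev(1e16):.1f}")  # 1.0
```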

  The basic premise (that civilizations would proceed inevitably along the path to greater population and energy use until they had godlike command over entire galaxies) appears to have been widely accepted by the scientists gathered at Byurakan II. It seems strange, from the vantage point of the twenty-first century, that nobody in this gathering of insightful scholars questioned the assumption that such endless expansion would be the obvious path of advanced societies. Rather, several speakers mentioned the existential danger that would befall any civilization that stopped expanding. Perhaps it reflects the zeitgeist of the time, smack dab in the middle of the Great Acceleration, when the notion of human progress was still connected, in such an untroubled way, with the idea of endless expansion. I suppose it fit well with both the Soviet belief in inevitable historical progress and the capitalist ideal of endless growth.

  All these brilliant minds saw the problem of survival through a lens shaped by the anxieties and hopes of their time. Early discussions of L, the average lifetime of communicating civilizations in the Drake equation, always mentioned the likelihood that civilizations would invent nuclear weapons around the time they discovered radio technology. If this were the case, then inevitable nuclear holocaust might ensure that L was only decades long. SETI pioneers such as Drake, Morrison, Kardashev, Shklovsky, and Sagan imagined that if L was short, it was because most civilizations might “blow themselves up.” Between Green Bank and Byurakan, the Cuban Missile Crisis flared up, and annihilation seemed like a real possibility.9

  Humanity’s biggest challenge was seen as avoiding nuclear war and achieving a peaceful society, which could then put its resources into progress and development for the betterment of all mankind. Progress was defined to a large extent in terms of reworking the landscape with huge engineering projects. Both Freeman Dyson and William McNeill, in their personal accounts of Byurakan II, noted what wonderful progress the Soviets were making in diverting water from the huge lake near the Byurakan Observatory and irrigating the previously fallow landscape, allowing the small town of Byurakan to grow into a thriving city. Of the dozen largest dams in North America today, more than half were built within six years of this gathering. The Great Acceleration was in full force. Apollo 8 had just given us our first look in the mirror, and the first Earth Day was held a year and a half before Byurakan II, but the global environmental movement was still only nascent.

  Today, this idea about progress, that there must be a universal pattern of unending population and energy growth, remains ingrained in discussions of SETI. The Kardashev narrative, classifying civilizations into types I, II, and III based on the assumption that the longer a civilization survives, the larger and more power-hungry it will become, is everywhere in the SETI literature.

  Yet, these days, this ubiquitous assumption—let’s call it the “inevitable expansion fallacy”—seems more dubious than it must have in the 1960s, now that it is becoming clear that by defining our own progress simply through continued exponential growth in population and energy use, we may be limiting the longevity of our civilization.

  Today nuclear weapons still threaten our survival, much more than we generally acknowledge. Yet the existential threats most on our mind are related to the runaway exploitation of resources, the destruction of key natural life-support systems, and the unintended consequences of mindless, unplanned development. Given current anxieties, some present-day discussions about L and the galactic prevalence of intelligent civilizations are beginning to focus more on our Anthropocene existential threats of climate change or resource exhaustion and the challenges of sustainability. What connects these concerns is the overarching question “How can an advanced technological species develop a long-term, stable relationship with world-changing technology?”

  Clearly our own proto-intelligent civilization is confronting the limits and dangers associated with an ethic of unquestioned growth and reflexive implementation of powerful technology for its own sake. So we could look a little differently at the assumed qualities of “advanced” civilizations and question whether achieving true planetary intelligence, the kind capable of creating a civilization built to last, might require a different set of guiding values. It may be that the Kardashev scale and its variants amount to an assumption that intelligent civilizations must act in a way that is, in fact, not very intelligent.

  The idea that relentless expansion will be a universal drive is often supported with the argument that any technological species must be the product of a Darwinian evolutionary process, and therefore all will have the commandment to be fruitful and multiply bred into their bones or exoskeletons. Endless expansion is a genetic obligation deeply embedded in organic life. It served us well throughout four billion years of biological evolution, when increasing numbers could help ensure species survival. Since we are still struggling to free ourselves from this animal imperative, we tend to project it onto our future selves and other sentient races. We picture aliens as technologically powerful but still mindlessly driven to expand their population and their control over resources at all costs.

  Yet this imperative is clearly becoming a threat to our survival, and it seems quite likely that advanced civilizations will have discarded it. This period of multiplying like rabbits cannot last long. It is not just a good idea to stop this behavior. It’s physically impossible to continue it for very long. We’d soon run out of planet, and even the prospect of interplanetary or even interstellar expansion does not help. At best, it slightly delays the consequences. If we maintained a population growth rate of 2 percent per year, in less than a thousand years we would experience devastating overcrowding, no matter what. It is easy to show that even if we learned to expand off the planet and increase our domain at the speed of light (a pretty safe theoretical upper limit), and if we managed to colonize all available stars and planets within a sphere expanding at this impossible speed, then we would still run out of planets and perish in our own waste within a few thousand years. There is no alternative to limiting population growth and resource use in the not-very-long run.
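
  The arithmetic behind these claims is worth making explicit. Below is a minimal sketch in Python; the starting population, stellar density, and per-system carrying capacity are illustrative assumptions, not figures from the text, but the conclusion is insensitive to them, because exponential growth must eventually overtake the cubic growth of any light-speed-limited volume.

```python
import math

GROWTH_RATE = 0.02     # 2 percent per year, the rate used in the text
START_POP = 8e9        # assumed starting population, roughly 8 billion

# On one planet: population after 1,000 years of 2 percent growth.
pop_1000 = START_POP * (1 + GROWTH_RATE) ** 1000
print(f"After 1,000 years on one planet: {pop_1000:.1e} people")
# -> roughly 3e18, about 400 million times the starting population

# Expanding at light speed: the reachable volume grows only as the
# cube of elapsed time, while population grows exponentially.
STARS_PER_CUBIC_LY = 0.004   # rough local stellar density (assumption)
PEOPLE_PER_SYSTEM = 1e10     # assumed carrying capacity per star system

pop, year = START_POP, 0
while True:
    year += 1
    pop *= 1 + GROWTH_RATE
    volume = (4 / 3) * math.pi * year ** 3           # cubic light-years
    capacity = max(volume * STARS_PER_CUBIC_LY, 1.0) * PEOPLE_PER_SYSTEM
    if pop > capacity:
        break
print(f"The light-speed sphere is overrun after about {year} years.")
```

  With these generous numbers the sphere is overrun in well under a thousand years, and no plausible change to the assumed constants buys more than a few thousand.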

  Because of this, it is reasonable to suppose that truly successful, long-lived species have all discarded the expansion imperative, and replaced it with an ethic of sustainability, of valuing longevity over expansion. If technological intelligence has a true and lasting form, one of its basic properties must be that it moves beyond the exponential expansion phase (characteristic of simple life in a petri dish or on a finite planet) before it hits the top of the S-curve and crashes. For us, achieving this kind of planetary intelligence will require critically examining our inherited biological habits and shedding those that have become liabilities. Planetary intelligence would mean thoughtful self-control: escape from the mindless drives to multiply, to expand, to lay waste and kill, and to drown in one’s own waste. Perhaps this is why we will not find what Shklovsky called “miracles,” the highly visible works of vastly expanded superadvanced civilizations. Because advanced intelligences are not stupid.
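
  The “S-curve” here is the logistic growth curve of population biology. As a rough illustration, the sketch below contrasts unchecked exponential growth with logistic growth that levels off as it approaches a carrying capacity; the parameter values are arbitrary choices of mine, and the simple logistic model shows the leveling off but not the overshoot-and-crash the passage warns about.

```python
def exponential_step(n, r):
    return n + r * n                  # unchecked exponential growth

def logistic_step(n, r, k):
    return n + r * n * (1 - n / k)    # growth slows to zero as n -> k

r, k = 0.05, 1_000_000                # 5% per step, carrying capacity 1e6
exp_pop = log_pop = 1_000.0
for year in range(301):
    if year % 50 == 0:
        print(f"year {year:3d}: exponential {exp_pop:13.0f}   "
              f"logistic {log_pop:9.0f}")
    exp_pop = exponential_step(exp_pop, r)
    log_pop = logistic_step(log_pop, r, k)
```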

  This realization could also influence our SETI strategies. It is an often unquestioned assumption in SETI theory that civilizations become more obviously visible the longer they exist. However, I wonder if, after a short period of proto-intelligent flamboyance, the opposite may be true. Yes, our presence on this planet has certainly become more and more obvious the more “advanced” we have become. If we define advancement by the power of our technology to rework our environment, which we often do, then this is a tautology. Yet what if it’s in the nature of advanced intelligence to undergo a transition toward a less obvious imprint? If every planetary civilization encounters a crisis comparable to our Anthropocene dilemma, then the logical outcome may be that the truly intelligent species become harder and harder to observe. Awakening to the reality of planetary existence may involve becoming more aware of, and thoughtful with, one’s long-term patterns of development. Older and wiser civilizations, those who’ve learned how to use technology in the service of long-term survival, may be less wasteful and therefore less clearly visible.

  In 2015, a group of Penn State University astronomers published the results of an “Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies.” In a very thorough search, no civilizations were found. As they summarize their results: “We show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe.” The authors’ assumptions about advanced civilization include the following: “Detectably large energy supplies can plausibly be expected to exist because life has potential for exponential growth until checked by resource or other limitations, and intelligence implies the ability to overcome such limitations.”

  Does it? I wonder. Perhaps true, lasting technological intelligence may imply nearly the opposite: the ability to overcome the biological need for exponential growth.

  Natural selection weeds out those who are unfit to survive. This will be true on a planetary, civilizational scale as well. What if an essential part of becoming a very wise species, equipped for survival with powerful technology, is to realize and internalize the advantage of living more in accordance with the natural systems within which your existence is embedded? What if one characteristic of really advanced intelligence is to become less and less distinguishable from natural phenomena? That would certainly explain why we have not seen the predicted “miracles” created by type II and III civilizations.

  The Continuity Criterion

  It is said that Mahatma Gandhi, when asked to comment on Western civilization, remarked, “I think it would be a good idea.”10 That’s how I feel about intelligent life on Earth, especially when I wonder about truly intelligent life.

  Intelligence, like life, is hard to define. To search for it elsewhere we need a working definition. Among the radio astronomers of SETI, it’s often tacitly assumed that the hallmark of intelligent life on any planet is the ability to do radio astronomy. This “radio definition” is often offered with tongue firmly planted in cheek, acknowledging the self-serving irony. Yet we also use it pragmatically, circumventing tricky questions of comparative evolution and human uniqueness by choosing to search for something concrete and recognizable.

  Certainly we can imagine intelligent technological life that does not use radio. Yet what of the opposite: could there be nonintelligent life that has developed radio? Well, look at the one planet we know of where radio telescopes have been built. Does Earth host intelligent life? Are we intelligent in a way that would be recognized as such by other sentient technological creatures? We could quibble for centuries over what this means, but at a bare minimum, consider this potential criterion: It’s not intelligence if it cannot solve the puzzle of how to survive on a planet. Legitimate technological intelligence as a significant planetary phenomenon must be able to sustain itself for some nonnegligible length of time. I say this for two reasons:

  First, what is the use of all this cleverness and ability to control one’s environment if self-preservation is not possible, or self-destruction is inevitable? If “intelligent life” is stupid enough to ensure its own rapid destruction, then perhaps it should be called something else. We humans are obviously brilliant at certain kinds of problem solving, but arguably if we cannot surmount the existential hurdles we place in our own path, then on the galactic stage, whether or not anyone else is watching, we could not be considered truly intelligent.

  Second, as with the “radio definition” of intelligence, there is a pragmatic angle. Intelligent life11 that doesn’t last long would be extremely hard to detect, even if it existed on a large number of planets. To put this in the mathematical language of the Drake equation, the average distance between civilizations will be too great to make contact feasible, unless L is much greater than the present age of our own civilization. There is only a decent chance of finding a message over a reasonable span of time (say, within another century) if the galaxy is fairly well sprinkled with broadcasting civilizations having average life spans much longer than centuries. Real galactic visibility requires serious longevity.
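
  To see how strongly the expected separation depends on L, here is a minimal sketch built on the Drake equation, N = R* × fp × ne × fl × fi × fc × L, treating the galactic disk as a simple cylinder. Every factor value below is an illustrative guess, not a measurement.

```python
import math

def drake_n(r_star=1.0, f_p=0.5, n_e=1.0, f_l=0.5, f_i=0.1,
            f_c=0.1, lifetime=100.0):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L.
    All default factor values are illustrative guesses."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Model the galactic disk as a cylinder ~100,000 ly across and
# ~1,000 ly thick; typical separation then scales as (V / N)^(1/3).
GALAXY_VOLUME_LY3 = math.pi * 50_000 ** 2 * 1_000   # ~7.9e12 ly^3

for lifetime in (100, 10_000, 1_000_000):
    n = drake_n(lifetime=lifetime)   # N < 1 means likely nobody at all
    sep = (GALAXY_VOLUME_LY3 / n) ** (1 / 3)
    print(f"L = {lifetime:>9,} yr -> N = {n:9.2f}, "
          f"typical separation ~ {sep:6.0f} ly")
```

  A short L leaves N below one, and even a ten-thousand-year L leaves neighbors thousands of light-years apart, which is the quantitative content of “real galactic visibility requires serious longevity.”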

  This means that even if nearly every planet out there produced a technological civilization very similar to ours, there could still be nothing much on the galactic radio.* In order for our current searches to have a reasonable chance of success, someone must be maintaining a signal or beacon for thousands of years, something we certainly have not done. These broadcasting civilizations would be quite different from us. We don’t have that kind of commitment. Will we ever? Will our civilization even be around in another five thousand years? By this minimum standard—let’s call it the continuity criterion—intelligent life has arguably not yet evolved on Earth, and radio telescopes by themselves are no definite sign of intelligence.

  Maybe, however, they are a step in the right direction. If aliens can build radios, surely they’ll be capable of simple algebra and geometry. And probability arguments. They, too, will realize that unless someone else is broadcasting continuously for thousands of years, at a minimum, there’s little point in listening. They’ll understand that the problem of SETI is the same as the problem of sustainability.

  To think about finding someone else out there is to look at our current technological civilization and ask: Is there a stable state achievable with this tool kit, with these qualities and potentials? Is there a developmental path leading from what we are now to one of these long-lived broadcasting civilizations? There may well be, but our Anthropocene predicament illustrates why it is not a given that such a path is easily navigable.

  Could a cave painter in France eighteen thousand years ago imagine a jet plane or a laptop computer? Can you and I imagine the technology of the year AD 16,000, or even the year 2500? My grandparents were born before automobiles, and three of them lived into the space age. You and I have never known a time when the shifting of the built environment was not keeping us on our toes. What could we possibly say about the qualities, abilities, or motives of alien civilizations that are hundreds of thousands of years older than we are? SETI demands that we consider planetary civilizations that long ago passed through a stage of technological transition analogous to what we’re experiencing now.

  “Where are they?” is an ancient question, and one SETI scientists have been studying theoretically and experimentally for more than half a century. Faced with the challenge of surviving, and thriving, in the Anthropocene, we can turn the question around and ask, “Where will we be in twenty thousand years?” The answer may depend on three other questions of great relevance to both SETI and our current situation.

  The first is: does the development of a technological civilization capable of SETI always precipitate global environmental crisis? On Earth at least, both were set in motion by the same explosion of technical prowess and scientific understanding. In chapter 3, I argue that the first wave of technological success to sweep a planet will always cause global “changes of the third kind,” which will bring about major ecological disruption.

  The second question is: can a civilization in such a crisis transition to one that employs technology well, in ways that facilitate long-term survival? At least sometimes it may arrive at a more stable state dominated by the fourth kind of planetary change, where knowledge of planetary function is integrated into technological deployment. Only a planet that has reached such a stage would have the continuity to host a long-lived communication effort that could produce decent visibility on the galactic scale.

 
