charged. I even agree with the impulse that we need a global spiritual
response to this situation, to learn to change our behavior species-wide.
These aliens sure are perceptive little buggers. I don’t believe that
there really are ships and little guys. I think those messages of global
concern are coming from John and the experiencers themselves. These
are warnings we need to hear, but why do we need alien intermediaries?
As Priscilla said in The Courtship of Miles Standish, “Speak for your-
self, John!”
The Immortals
23
Oh yes, there is hope—infinite hope. But not for us.
—FRANZ KAFKA
[Image unavailable for electronic edition]
Do not fear the universe! The ghosts of the alien dead will beckon thee into the future!
Put on your spacesuits, and dance; dance into the future . . .
—SIFL & OLLY
WHEN WE ARE THE ALIENS
Stuck here in our tiny little patch of time, we do what we can to
unearth the past and peer into the future. Stranded for now on our little
patch of universe, we do our best to understand the rest of it. Can we
predict the behavior of alien civilizations? Many people have tried,
using projections of our own future, often based on lessons from
human history.
The cultural shock experienced on both sides when European explor-
ers encountered less technically advanced societies is used as a caution-
ary tale to dampen our enthusiasm about alien contact. This warning is
often refuted with the argument that the aliens could not possibly be as
savage as we, or they would not have survived to become an advanced
technical civilization. Others insist that only an aggressive society will
survive the evolutionary process, and therefore aliens must be ruthless
colonizers.
Looking back over a much longer timescale, we’ve seen how narrow
interpretations of life’s history on Earth can lead to questionable con-
clusions about life in the galaxy. The flawed Rare Earth Hypothesis
(chapter 9) shows how easy it is to read too much into one planet’s
history.
Now, turn 180 degrees (on an axis perpendicular to time) and look
into the future. What do you make out in your Earth-future crystal
ball? Even if the picture were clear, would that really tell us about the
fate of intelligence elsewhere?
We cannot help but see a close connection between L, the average
lifetime of civilizations, and our expectations for the survival of our
own species. Many discussions of L in the literature explicitly tie the
two together, concluding that if we are likely to destroy ourselves
within the next century, then L must be short. Yet, our chances of sur-
vival may be irrelevant to the value of L in the universe. It’s under-
standable that we try to use our own projected future as data, since one
fuzzy data point is better than none. Except, in this case, I think we
commit a serious error when we assume that we are likely to be typical
of worlds that might become long-lived broadcasting civilizations. We
are much more likely to be highly atypical.
In biological systems, the most common outcomes are often not the
most important ones. Consider the hundreds or thousands of sea-horse
ponies born in every litter. For any individual, the overwhelmingly
likely outcome is to quickly become fish food. But it’s the one-in-a-
thousand survivors that matter. At this point, we are one of the ponies.
If we base our views of intelligence elsewhere on expectations for our
own future, then we are committing a logical fallacy and perhaps sell-
ing the universe short.
ARE WE INTELLIGENT?
Well, I don’t know about you, but . . . I can count to 1,023 on my fingers in binary, or 1,048,575 if I also use my toes, which makes me at
least as good as a primitive microchip, albeit much slower. Seriously,
though, are we humans an intelligent species? When we hunt for intelli-
gence elsewhere in the galaxy, are we looking out there for something
that already exists down here? What, exactly, are we seeking?
Throughout this book, I have weaseled around defining “intelli-
gence” because it’s like trying to define “life”: though we think we
know what it is, we can’t quite put it into words.
When we debate how long it will take for intelligence to arise on
some random planet, we always start with the history of Earth and
deduce that it typically takes 4.5 billion years. We use ourselves as the
benchmark and assume that intelligence, on a level that is relevant for
discussions of Cosmic Evolution, is something that has now arrived on
Earth, in human form.
But, what is it? We can come up with some properties and abilities of
intelligence, even while we cannot come up with a perfect definition
that everyone, even on this little planet, will accept. If we were going to
list some of these qualities in a Lonely Planets personal ad, it might
read:
LONELY: SBF* species seeks a special friend. You must be capable
of abstract thought and symbolic language, must have the ability
to learn from experience, to pass on learned knowledge by teach-
ing others, to purposefully modify your environment, to anticipate
the future and act accordingly, to make tools, dream, and play the
drums. If this sounds like you, let’s meet for conversation and pos-
sibly something more.
Who on Earth could answer such an ad? Some other animals have
some of these abilities, but none have them all (dolphins can’t play the
drums).
More pragmatic definitions have been proposed for specific scientific
tasks. Among those who study artificial intelligence, a common defini-
tion is this: a machine is intelligent if it can pass the “Turing test,”
named after the tortured British mathematical genius Alan Turing, who
first devised it in 1950.† By this definition, if a machine can mimic
human intelligence so accurately that we can’t tell the difference, then it
is, in fact, intelligent. This is reminiscent of the empirical definition of
life mentioned in chapter 7: “We’ll know it when we see it.”‡
*Sentient but fragile.
†Turing invented computer science, excelled at mathematics and philosophy, and was an early innovator in what is now called “chaos theory” and “complexity science.” Two years after publishing his landmark paper discussing artificial intelligence and introducing the “Turing Test,” he was arrested in Manchester for homosexuality. At his trial, he proudly refused to deny that he was gay—a courageous and dangerous stance in the
1950s. His sentence included hormone injections that were supposed to “cure” him. Two years later, he committed suicide.
‡Also similar to “community standards” definitions of obscenity.
SETI theorists use their own narrow and practical definition of intel-
ligence. They usually avoid the tricky subtleties by simply defining
intelligence as the ability to build a radio telescope. That this definition
was conceived by radio astronomers is always good for a few yuks. (A
woodwind player might define intelligence as the ability to play modal
scales on the clarinet.) The advantage of this simple definition is that it
relates intelligence explicitly to an observable phenomenon, thereby
making it accessible to scientific experimentation. It tells us exactly
what to look for.
The “radio definition” of intelligence implicitly includes humans
among the galactic intelligentsia. However, when it comes to radio sig-
nals, we only listen, we don’t send. So, even if the galaxy were overrun
with “civilizations” manifesting our level of “intelligence,” there would
be no messages on the air. The Fermis of countless worlds could all be
asking, “Where are they?”
There is a large asymmetry in galactic radio discourse. It’s much eas-
ier to listen in than it is to broadcast. But there is a kind of broadcasting
that someone out there must be doing for SETI to succeed, and we can’t do it yet. What is required is not just a level of technology or transmitter power, but a long-term commitment. If you do the math (with the
Drake Equation), you find that for SETI to be viable, for us to have a
reasonable chance of finding a signal, there must be many civilizations
broadcasting continuously for thousands of years.* We are not even
close to being able to become one of these serious broadcasters.
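For readers who want to see how stark that math is, here is a minimal sketch of the Drake Equation, N = R* × fp × ne × fl × fi × fc × L, in Python. Every factor value below is an illustrative placeholder rather than a figure argued for in this book; the only point is that N, the number of civilizations on the air at any given time, scales directly with L.

```python
# Minimal sketch of the Drake Equation: N = R* x fp x ne x fl x fi x fc x L.
# All factor values are illustrative placeholders (assumptions), chosen only to
# show that N grows in direct proportion to L, the average civilization lifetime.
def drake_n(L_years,
            R_star=10.0,  # new stars formed per year in the galaxy (assumed)
            f_p=0.5,      # fraction of stars with planets (assumed)
            n_e=2.0,      # habitable planets per planetary system (assumed)
            f_l=0.5,      # fraction of habitable planets where life arises (assumed)
            f_i=0.1,      # fraction of those that evolve intelligence (assumed)
            f_c=0.1):     # fraction of those that ever broadcast (assumed)
    return R_star * f_p * n_e * f_l * f_i * f_c * L_years

for L in (100, 10_000, 1_000_000):
    print(f"L = {L:>9,} years  ->  N ~ {drake_n(L):,.0f} broadcasting civilizations")
```

With placeholders like these, a short L leaves only a handful of broadcasters scattered across a galaxy a hundred thousand light-years wide, while a long L puts someone within plausible listening range.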
We’ve sent out some symbolic broadcasts—scribbled a few simple mes-
sages and tossed them out there in leaky electromagnetic bottles. We’ve
attached notes to our four spacecraft (so far) that are leaving the solar sys-
tem, just in case they wash up somewhere. And of course, if anyone is
really hunting for the likes of us, our presence is not a well-kept secret: for
decades, we have been leaking our sitcoms, talk shows, and ebullient
commercials for Jesus, minivans, and beer.†
*In the lingo of the Drake Equation, unless L, the average lifetime of a communicating civilization, is many thousands of years, then N, the number of communicating civilizations, is so low that no one is around for thousands of light-years.
†When aliens come to Earth, if their preconceived notions of us are conditioned by TV
broadcasts, their first report home might read, “The females have smaller mammary glands and the males less well developed musculature than we expected. There are a greater variety of types of humans than we thought based on their messages to us, but no sign of other humanoid species. We can find no trace of the ones with the funny forehead ridges.”
A spherical shell of radio signals is expanding outward from Earth, its diameter increasing at twice the
speed of light. As I write, this sphere forms a ball of news, entertainment,
psychobabble, and advertising 166 light-years in diameter.* In the time it
took you to read this sentence, it grew by another million miles. The near-
est stars are only about four light-years away.
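For anyone inclined to check those figures, the back-of-envelope arithmetic runs as follows. The sketch below (in Python) assumes a writing date of 2003 and about three seconds to read the sentence, both my own round numbers.

```python
# Back-of-envelope arithmetic for the expanding shell of radio signals.
# The writing year and the reading time are assumptions made for illustration.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

FIRST_BROADCASTS = 1920    # first regularly scheduled commercial radio broadcasts
YEAR_OF_WRITING = 2003     # assumed; reproduces the ~166 light-year figure above

radius_light_years = YEAR_OF_WRITING - FIRST_BROADCASTS
print(f"sphere diameter: ~{2 * radius_light_years} light-years")        # ~166

reading_time_sec = 3       # assumed time to read the sentence
growth_miles = 2 * SPEED_OF_LIGHT_MILES_PER_SEC * reading_time_sec      # diameter grows at 2c
print(f"growth while reading: ~{growth_miles:,} miles")                 # ~1.1 million
```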
These indiscretions might have tipped off some of our closest neigh-
bors that something is up on the third stone from the Sun. Would they
conclude from these transmissions that we are intelligent, or merely
that some nutcases have stumbled upon primitive radio technology?
When we focus on the technical aspect of how to become a “radio
communicating” civilization, there are no great hurdles. We can make
bigger dishes, increase our listening sensitivity, and broadcast with ever-
increasing power. From this perspective, we seem almost there. We cal-
culate and speculate about finding others that are slightly spiffed-up
versions of ourselves and take it as an article of faith that such a stage
will arise soon after the one that we are in now.†
But it takes more than technology to be a broadcasting society. It
requires that you survive with high technology for many thousands of
years and commit to projects that last, at the minimum, for millennia.
Your standard SETI aliens have science and technology that are similar
to ours, but they must have solved many of the great social, political,
and spiritual problems we now face. The abilities that will enable a
species to participate in interstellar communication may be part of a
qualitatively different phenomenon than what we self-referentially (and
self-aggrandizingly) call intelligence.
Our discussions of L have an implicit focus on how long an intelli-
gence “like ours” might last. We tend to ask, what is the distance to the
nearest planet with someone like us? But we should also be asking, what
else might intelligence become? What can it grow into that it hasn’t yet
become here (and may not), and how long might that last?
*The first commercial radio stations with regularly scheduled broadcasts were heard in 1920. Television broadcasts started in 1947.
†Shklovskii's “adolescent optimism.”
CONFEDERACY OF DUNCES
As Doris Lessing wrote in the first volume of her autobiography,
“Forgive me for the banality of this observation, but there is something
very wrong with the human race.” We might refer to what we humans
have achieved so far as “proto-intelligence.” Let me briefly summarize
what it is about us that seems so strictly proto.
Think of the characteristics listed above in the personal ad for an
intelligent soul mate: the ability to learn from mistakes, anticipate dan-
gers, to think your way out of a paper bag, and so on. We humans cer-
tainly do have these abilities as individuals, at least on our better days.
Individuals can alter their environment and change their behavior to
aid in survival. Occasionally, communities of humans manifest these
qualities as well.
But consider the behavior of the human race collectively. We are
dumber in numbers. As a species, as a global entity, we aren’t able to
respond to information and make intelligent decisions. An individual,
behaving as we do, would seem dumber than a dodo (and look what
happened to them). With human intellect, the whole seems to be less
than the sum of the parts.*
*I guess this is the opposite of an emergent phenomenon. Or is it “emergent stupidity”?
When we think of aliens sending an interstellar radio signal, we usu-
ally picture them as representing their entire species, and imagine them
attempting to communicate with humanity as a whole. This makes
sense, given that the likely timescale of any conversation, where each
reply might take centuries, requires a group effort. Individuals cannot
talk to the aliens by radio. For our species to achieve the level of matu-
rity that allows for—indeed that may be defined by—interstellar travel
or communication, we’ll have to learn to act collectively.
Without such an ability, we are vulnerable to many extinction
threats, including several of our own making. I worry most about these
“unnatural disasters.” In our headlong, blind rush toward new technol-
ogy, we may be cooking up dangers to ourselves that will leave us nos-
talgic for the quaint threat of nuclear self-annihilation.
TOUCH OF GRAY
In April 2000, Bill Joy, the cofounder and chief scientist at Sun
Microsystems, published a powerful article in Wired about the growing
threat of what he calls GNR technologies (genetics, nanotechnology,
and robotics). Joy described a scenario that is frightening, in large part
because it seems quite credible. He looks at nanotechnology (the ability
to build submicroscopic machines by manipulating matter on molecu-
lar and atomic levels), combined with genetic engineering and increas-
ing computer speed and miniaturization, and asks, “Where is all this
going?” He concludes that we may soon have the ability to design self-
reproducing agents with unprecedented power to remake our planet.
What Joy is most concerned about is “the power of destructive self-
replication.”
One nightmare scenario is commonly referred to by the nanotechnol-
ogy elite as “the gray goo problem.” If a genetically engineered, roboti-
cally enhanced nanobacterium that can outcompete naturally evolved
microorganisms escapes from a lab, it could replicate like mad and
destroy the entire biosphere, turning our pale blue dot into a gray goo
glob.
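To see why unchecked self-replication is so frightening, here is a rough sketch of the exponential arithmetic in Python. The replicator mass, the biosphere mass, and the one-hour doubling time are illustrative assumptions of mine, not figures from Joy's article.

```python
import math

# Illustrative gray-goo arithmetic (assumed values, not Joy's figures):
# one bacterium-sized replicator, doubling unchecked once an hour, measured
# against a rough order-of-magnitude estimate of the mass of Earth's biosphere.
REPLICATOR_MASS_KG = 1e-15      # ~1 picogram, roughly the mass of a bacterium (assumed)
BIOSPHERE_MASS_KG = 5e14        # order-of-magnitude total biomass (assumed)
DOUBLING_TIME_HOURS = 1.0       # assumed doubling time with no competition or limits

doublings = math.log2(BIOSPHERE_MASS_KG / REPLICATOR_MASS_KG)
hours = doublings * DOUBLING_TIME_HOURS
print(f"doublings needed: ~{doublings:.0f}")                      # ~99
print(f"time to consume the biosphere: ~{hours / 24:.1f} days")   # ~4 days
```

Real replicators would run into energy and raw-material limits long before then, but that exponential arithmetic is what gives the nightmare its teeth.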
Bill Joy has worked all his life to create better software and micro-
processors, believing that his work was helping to create a wonderful
future for all of humanity. Only recently has it occurred to him that he
may have been helping to build the tools of human extinction. He
points out that unlike twentieth-century weapons of mass destruction,
which generally require rare materials, highly specialized training, or
large institutions to construct, the new GNR weapons of mass destruc-
tion might soon be easily created by any individual with a little bit of
technical knowledge. The march of new technology is moving in a
direction that may empower individuals to do massive harm.
Undeniably, these same technologies also carry great potential for
problem solving and liberation from hunger and suffering. I wish I had
more confidence that the institutions implementing and watching over
genetic engineering and nanotechnology experiments were doing so