The Age of Spiritual Machines: When Computers Exceed Human Intelligence


by Ray Kurzweil


  Before 2030, we will have machines proclaiming Descartes’s dictum. And it won’t seem like a programmed response. The machines will be earnest and convincing. Should we believe them when they claim to be conscious entities with their own volition?

  The “Consciousness Is a Different Kind of Stuff” School

  The issue of consciousness and free will has been, of course, a major preoccupation of religious thought. Here we encounter a panoply of phenomena, ranging from the elegance of Buddhist notions of consciousness to ornate pantheons of souls, angels, and gods. In a similar category are theories by contemporary philosophers that regard consciousness as yet another fundamental phenomenon in the world, like basic particles and forces. I call this the “consciousness is a different kind of stuff” school. To the extent that this school implies an interference by consciousness in the physical world that runs afoul of scientific experiment, science is bound to win because of its ability to verify its insights. To the extent that this view stays aloof from the material world, it often creates a level of complex mysticism that cannot be verified and is subject to disagreement. To the extent that it keeps its mysticism simple, it offers limited objective insight, although subjective insight is another matter (I do have to admit a fondness for simple mysticism).

  The “We’re Too Stupid” School

  Another approach is to declare that human beings just aren’t capable of understanding the answer. Artificial intelligence researcher Douglas Hofstadter muses that “it could be simply an accident of fate that our brains are too weak to understand themselves. Think of the lowly giraffe, for instance, whose brain is obviously far below the level required for self-understanding, yet it is remarkably similar to our brain.”10 But to my knowledge, giraffes are not known to ask these questions (of course, we don’t know what they spend their time wondering about). In my view, if we are sophisticated enough to ask the questions, then we are advanced enough to understand the answers. However, the “we’re too stupid” school points out that indeed we are having difficulty clearly formulating these questions.

  A Synthesis of Views

  My own view is that all of these schools are correct when viewed together, but insufficient when viewed one at a time. That is, the truth lies in a synthesis of these views. This reflects my Unitarian religious education in which we studied all the world’s religions, considering them “many paths to the truth.” Of course, my view may be regarded as the worst one of all. On its face, my view is contradictory and makes little sense. The other schools at least can claim some level of consistency and coherence.

  Thinking Is as Thinking Does

  Oh yes, there is one other view, which I call the “thinking is as thinking does” school. In a 1950 paper, Alan Turing describes his concept of the Turing Test, in which a human judge interviews both a computer and one or more human foils using terminals (so that the judge won’t be prejudiced against the computer for lacking a warm and fuzzy appearance).11 If the human judge is unable to reliably unmask the computer (as an impostor human), then the computer wins. The test is often described as a kind of computer IQ test, a means of determining if computers have achieved a human level of intelligence. In my view, however, Turing really intended his Turing Test as a test of thinking, a term he uses to imply more than just clever manipulation of logic and language. To Turing, thinking implies conscious intentionality.

  Turing had an implicit understanding of the exponential growth of computing power, and predicted that a computer would pass his eponymous exam by the end of the century. He remarked that by that time “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” His prediction was overly optimistic in terms of time frame, but in my view not by much.

  THE VIEW FROM QUANTUM MECHANICS

  I often dream about falling. Such dreams are commonplace to the ambitious or those who climb mountains. Lately I dreamed I was clutching at the face of a rock, but it would not hold. Gravel gave way. I grasped for a shrub, but it pulled loose, and in cold terror I fell into the abyss. Suddenly I realized that my fall was relative; there was no bottom and no end. A feeling of pleasure overcame me. I realized that what I embody, the principle of life, cannot be destroyed. It is written into the cosmic code, the order of the universe. As I continued to fall in the dark void, embraced by the vault of the heavens, I sang to the beauty of the stars and made my peace with the darkness.

  —Heinz Pagels, physicist and quantum mechanics researcher before his death in a 1988 climbing accident

  The Western objective view states that after billions of years of swirling around, matter and energy evolved to create life-forms (complex self-replicating patterns of matter and energy) that became sufficiently advanced to reflect on their own existence, on the nature of matter and energy, on their own consciousness. In contrast, the Eastern subjective view states that consciousness came first: matter and energy are merely the complex thoughts of conscious beings, ideas that have no reality without a thinker. As noted above, the objective and subjective views of reality have been at odds since the dawn of recorded history. There is often merit, however, in combining seemingly irreconcilable views to achieve a deeper understanding. Such was the case with the adoption of quantum mechanics fifty years ago. Rather than reconcile the views that electromagnetic radiation (for example, light) was either a stream of particles (that is, photons) or a vibration (that is, light waves), both views were fused into an irreducible duality. While this idea is impossible to grasp using only our intuitive models of nature, we are unable to explain the world without accepting this apparent contradiction. Other paradoxes of quantum mechanics (for example, electron “tunneling,” in which electrons in a transistor appear on both sides of a barrier) helped create the age of computation, and may unleash a new revolution in the form of the quantum computer,12 but more about that later. Once we accept such a paradox, wonderful things happen. In postulating the duality of light, quantum mechanics has discovered an essential nexus between matter and consciousness. Particles apparently do not make up their minds as to which way they are going, or even where they have been, until they are forced to do so by the observations of a conscious observer. We might say that they appear not really to exist at all, retroactively, until and unless we notice them.

  So twentieth-century Western science has come around to the Eastern view. The Universe is sufficiently sublime that the essentially Western objective view of consciousness arising from matter and the essentially Eastern subjective view of matter arising from consciousness apparently coexist as another irreducible duality. Clearly, consciousness, matter, and energy are inextricably linked.

  We may note here a similarity of quantum mechanics to the computer simulation of a virtual world. In today’s software games that display images of a virtual world, the portions of the environment not currently being interacted with by the user (that is, those offscreen) are usually not computed in detail, if at all. The limited resources of the computer are directed toward rendering the portion of the world that the user is currently viewing. As the user focuses in on some other aspect, the computational resources are then immediately directed toward creating and displaying that new perspective. It thus seems as if the portions of the virtual world that are offscreen are nonetheless still “there,” but the software designers figure there is no point wasting valuable computer cycles on regions of their simulated world that no one is watching.
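The selective-rendering idea can be sketched in a few lines. This is a toy illustration only; the grid world, viewport, and function names are invented for the example and do not come from any real game engine:

```python
# Toy sketch of view-dependent rendering: only cells inside the
# user's current viewport are computed in detail; everything
# offscreen is left as a cheap placeholder. All names here are
# illustrative inventions.

def render_world(world_size, viewport):
    """Return a dict mapping each grid cell to its detail level for one frame."""
    x0, y0, x1, y1 = viewport
    frame = {}
    for x in range(world_size):
        for y in range(world_size):
            if x0 <= x <= x1 and y0 <= y <= y1:
                frame[(x, y)] = "full detail"   # the user is looking here
            else:
                frame[(x, y)] = "not computed"  # no observer, no work spent
    return frame

frame = render_world(8, (2, 2, 4, 4))
rendered = sum(v == "full detail" for v in frame.values())
print(rendered, "of", len(frame), "cells rendered")  # 9 of 64
```

When the viewport moves, the next call to `render_world` computes a different handful of cells in detail; the rest of the world is never materialized, which is exactly the economy the analogy turns on.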

  I would say that quantum theory implies a similar efficiency in the physical world. Particles appear not to decide where they have been until forced to do so by being observed. The implication is that portions of the world we live in are not actually “rendered” until some conscious observer turns her attention toward them. After all, there’s no point wasting valuable “computes” of the celestial computer that renders our Universe. This gives new meaning to the question about the unheard tree that falls in the forest.

  In the end, Turing’s prediction foreshadows how the issue of computer thought will be resolved. The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. We will come to believe that they are conscious much as we believe that of each other. More so than with our animal friends, we will empathize with their professed feelings and struggles because their minds will be based on the design of human thinking. They will embody human qualities and will claim to be human. And we’ll believe them.

  ON THIS MULTIPLE-CONSCIOUSNESS IDEA, WOULDN’T I NOTICE THAT—I MEAN IF I HAD DECIDED TO DO ONE THING AND THIS OTHER CONSCIOUSNESS IN MY HEAD WENT AHEAD AND DECIDED SOMETHING ELSE?

  I thought you had decided not to finish that muffin you just devoured.

  TOUCHÉ. OKAY, IS THAT AN EXAMPLE OF WHAT YOU’RE TALKING ABOUT?

  It is a better example of Marvin Minsky’s Society of Mind, in which he conceives of our mind as a society of other minds—some like muffins, some are vain, some are health conscious, some make resolutions, others break them. Each of these in turn is made up of other societies. At the bottom of this hierarchy are little mechanisms Minsky calls agents with little or no intelligence. It is a compelling vision of the organization of intelligence, including such phenomena as mixed emotions and conflicting values.

  SOUNDS LIKE A GREAT LEGAL DEFENSE. “NO, JUDGE, IT WASN’T ME. IT WAS THIS OTHER GAL IN MY HEAD WHO DID THE DEED!”

  That’s not going to do you much good if the judge decides to lock up the other gal in your head.

  THEN HOPEFULLY THE WHOLE SOCIETY IN MY HEAD WILL STAY OUT OF TROUBLE. BUT WHICH MINDS IN MY SOCIETY OF MIND ARE CONSCIOUS?

  We could imagine that each of these minds in the society of mind is conscious, albeit that the lowest-ranking ones have relatively little to be conscious of. Or perhaps consciousness is reserved for the higher-ranking minds. Or perhaps only certain combinations of higher-ranking minds are conscious, whereas others are not. Or perhaps—

  NOW WAIT A SECOND, HOW CAN WE TELL WHAT THE ANSWER IS?

  I believe there’s really no way to tell. What possible experiment can we run that would conclusively prove whether an entity or process is conscious? If the entity says, “Hey, I’m really conscious,” does that settle the matter? If the entity is very compelling when it expresses a professed emotion, is that definitive? If we look carefully at its internal methods and see feedback loops in which the process examines and responds to itself, does that mean it’s conscious? If we see certain types of patterns in its neural firings, is that convincing? Contemporary philosophers such as Daniel Dennett appear to believe that the consciousness of an entity is a testable and measurable attribute. But I think science is inherently about objective reality. I don’t see how it can break through to the subjective level.

  MAYBE IF THE THING PASSES THE TURING TEST?

  That is what Turing had in mind. Lacking any conceivable way of building a consciousness detector, he settled on a practical approach, one that emphasizes our unique human proclivity for language. And I do think that Turing is right in a way—if a machine can pass a valid Turing Test, I believe that we will believe that it is conscious. Of course, that’s still not a scientific demonstration.

  The converse proposition, however, is not compelling. Whales and elephants have bigger brains than we do and exhibit a wide range of behaviors that knowledgeable observers consider intelligent. I regard them as conscious creatures, but they are in no position to pass the Turing Test.

  THEY WOULD HAVE TROUBLE TYPING ON THESE SMALL KEYS OF MY COMPUTER.

  Indeed, they have no fingers. They are also not proficient in human languages. The Turing Test is clearly a human-centric measurement.

  IS THERE A RELATIONSHIP BETWEEN THIS CONSCIOUSNESS STUFF AND THE ISSUE OF TIME THAT WE SPOKE ABOUT EARLIER?

  Yes, we clearly have an awareness of time. Our subjective experience of time passage—and remember that subjective is just another word for conscious—is governed by the speed of our objective processes. If we change this speed by altering our computational substrate, we affect our perception of time.

  RUN THAT BY ME AGAIN.

  Let’s take an example. If I scan your brain and nervous system with a suitably advanced noninvasive-scanning technology of the early twenty-first century—a very-high-resolution, high-bandwidth magnetic resonance imaging, perhaps—ascertain all the salient information processes and then download that information to my suitably advanced neural computer, I’ll have a little you or at least someone very much like you right here in my personal computer.

  If my personal computer is a neural net of simulated neurons made of electronic stuff rather than human stuff, the version of you in my computer will run about a million times faster. So an hour for me would be a million hours for you, which is about a century.

  OH, THAT’S GREAT, YOU’LL DUMP ME IN YOUR PERSONAL COMPUTER, AND THEN FORGET ABOUT ME FOR A SUBJECTIVE MILLENNIUM OR TWO.

  We’ll have to be careful about that, won’t we.
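The time arithmetic behind “an hour for me would be a million hours for you, which is about a century” is easy to check. This is a minimal sketch assuming only the million-fold speedup stated above:

```python
# Check the claimed subjective speedup: a neural substrate running
# a million times faster than biological neurons (the figure stated
# in the text) turns one objective hour into roughly a century of
# subjective time.

SPEEDUP = 1_000_000          # electronic vs. biological substrate
HOURS_PER_YEAR = 24 * 365    # ignoring leap years

subjective_hours = 1 * SPEEDUP
subjective_years = subjective_hours / HOURS_PER_YEAR
print(round(subjective_years, 1))  # ~114.2 years, i.e. about a century
```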

  CHAPTER FOUR

  A NEW FORM OF INTELLIGENCE ON EARTH

  THE ARTIFICIAL INTELLIGENCE MOVEMENT

  What if these theories are really true, and we were magically shrunk and put into someone’s brain while he was thinking. We would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!

  —Gottfried Wilhelm Leibniz

  Artificial stupidity (AS) may be defined as the attempt by computer scientists to create computer programs capable of causing problems of a type normally associated with human thought.

  —Wallace Marshal

  Artificial intelligence (AI) is the science of how to get machines to do the things they do in the movies.

  —Astro Teller

  The Ballad of Charles and Ada

  Returning to the evolution of intelligent machines, we find Charles Babbage sitting in the rooms of the Analytical Society at Cambridge, England, in 1821, with a table of logarithms lying before him.

  “Well, Babbage, what are you dreaming about?” asked another member, seeing Babbage half asleep.

  “I am thinking that all these tables might be calculated by machinery!” Babbage replied.

  From that moment on, Babbage devoted most of his waking hours to an unprecedented vision: the world’s first programmable computer. Although based entirely on the mechanical technology of the nineteenth century, Babbage’s “Analytical Engine” was a remarkable foreshadowing of the modern computer.1

  Babbage developed a liaison with the beautiful Ada Lovelace, the only legitimate child of Lord Byron, the poet. She became as obsessed with the project as Babbage, and contributed many of the ideas for programming the machine, including the invention of the programming loop and the subroutine. She was the world’s first software engineer, indeed the only software engineer prior to the twentieth century.

  Lovelace significantly extended Babbage’s ideas and wrote a paper on programming techniques, sample programs, and the potential of this technology to emulate intelligent human activities. She describes the speculations of Babbage and herself on the capacity of the Analytical Engine, and future machines like it, to play chess and compose music. She finally concludes that although the computations of the Analytical Engine could not properly be regarded as “thinking,” they could nonetheless perform activities that would otherwise require the extensive application of human thought.

  The story of Babbage and Lovelace ends tragically. She died a painful death from cancer at the age of thirty-six, leaving Babbage alone again to pursue his quest. Despite his ingenious constructions and exhaustive effort, the Analytical Engine was never completed. Near the end of his existence he remarked that he had never had a happy day in his life. Only a few mourners were recorded at Babbage’s funeral in 1871.2

  What did survive were Babbage’s ideas. The first American programmable computer, the Mark I, completed in 1944 by Howard Aiken of Harvard University and IBM, borrowed heavily from Babbage’s architecture. Aiken commented, “If Babbage had lived seventy-five years later, I would have been out of a job.”3

  Babbage and Lovelace were innovators nearly a century ahead of their time. Despite Babbage’s inability to finish any of his major initiatives, their concepts of a computer with a stored program, self-modifying code, addressable memory, conditional branching, and computer programming itself still form the basis of computers today.4

  Again, Enter Alan Turing

  By 1940, Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government organized its best mathematicians and electrical engineers, under the intellectual leadership of Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation. In order not to be distracted from their task, the group lived in the tranquil pastures of Hertfordshire, England.
