Science Fiction Criticism
The coming technological singularity: How to survive in a post-human era
Vernor Vinge
What is the singularity?
The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
There may be developed computers that are “awake” and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is “yes, we can,” then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
Biological science may provide means to improve natural human intellect.
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades.1 Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt2 has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)
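The prediction above rests on extrapolating an exponential hardware curve. As a minimal sketch of that extrapolation (the doubling period, the size of the gap to brain-equivalent hardware, and the function name are illustrative assumptions, not figures from the essay), the arithmetic looks like this:

```python
# Illustrative sketch: project when exponentially improving hardware closes
# an assumed gap to human-brain-equivalent capacity. All numbers here are
# assumptions for demonstration, not claims made in the essay.
import math

def parity_year(start_year, orders_of_magnitude_short, doubling_years):
    """Year at which a quantity that doubles every `doubling_years`
    closes a gap of 10**orders_of_magnitude_short."""
    doublings_needed = orders_of_magnitude_short * math.log2(10)
    return start_year + doublings_needed * doubling_years

# If 1993 hardware were four orders of magnitude short, doubling every
# two years, parity would arrive around 2020:
print(round(parity_year(1993, 4, 2)))  # 2020, inside the 2005-2030 window
```

Small changes in the assumed gap or doubling period shift the parity date by decades, which is one way to read the spread in Vinge's own window.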
What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work — the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In Blood Music,3 Greg Bear paints a picture of the major changes happening in a matter of hours.)
I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam4 paraphrased John von Neumann as saying:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed5.)
In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote6:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. . . . It is more probable than not that within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees.
Through the ’60s and ’70s and ’80s, recognition of the cataclysm spread.7, 8, 9, 10 Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the “hard” science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future.11 Now they saw that their most diligent extrapolations resulted in the unknowable . . . soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are.
What about the ’90s and the ’00s and the ’10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we’ll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if we were willing to give up speed, if we were willing to settle for an artificial being who was literally slow.12 But it’s much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans’ natural equipment.)
But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of true technological unemployment finally come true.
Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing science fiction in the middle ’60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed.
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected — perhaps even to the researchers involved. (“But all our previous models were catatonic! We were just tweaking some parameters. . . .”) If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.
And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from one thousand years remove . . . instead of twenty.
Can the singularity be avoided?
Well, maybe it won’t happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose13 and Searle14 against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question “How We Will Build a Machine that Thinks.”15 As you might guess from the workshop’s title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with Moravec’s estimate16 that we are ten to forty years away from hardware parity. And yet there was another minority who pointed to “other sources,”17, 18 and conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as ten orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity. Instead, in the early ’00s we would find our hardware performance curves beginning to level off — this because of our inability to automate the design work needed to support further hardware improvements. We’d end up with some very powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up” and there would never be the intellectual runaway which is the essence of the Singularity. 
It would likely be seen as a golden age . . . and it would also be an end of progress. This is very like the future predicted by Gunther Stent. In fact, on page 137 of The Coming of the Golden Age,19 Stent explicitly cites the development of transhuman intelligence as a sufficient condition to break his projections.