
Seek!: Selected Nonfiction


by Rudy Rucker



  [Figure: A Moloch machine in Metropolis. (Museum of Modern Art film still.)]

  half to the upper half. Periodically a line of dots would appear on the left hand side of the screen . . . in the upper or the lower half of the screen. If they met the hole in the fence, they would pass through; otherwise they would retreat. These dots were controlled by a learning program. If the operator moved the hole from top to bottom in some regular way, the learning program would recognize what was going on, and after a short time, the line of dots would always get through the hole. No one took this program very seriously . . .12

  This kind of interactive, noodling computer exploration blossomed into a movement at the Massachusetts Institute of Technology during the 1960s and 1970s. The catalyst was the first interactive machine, the PDP-1, built by DEC (Digital Equipment Corporation). As mentioned above, with the "real-time" PDP-1, instead of handing your batch of punch cards to the priestly keepers of a hulking giant mainframe, you could sit down at a keyboard, type things in, and see immediate feedback on a screen.

  12. Maurice Wilkes, Memoirs of a Computer Pioneer, MIT Press 1985, p. 208.


  Steven Levy's wonderful book Hackers chronicles how the arrival of the PDP-1 at MIT in 1961 changed computing forever. A small cadre of engineering students began referring to themselves as computer hackers, and set to work doing creative things with the PDP-1. One of their best-known projects was a video game called Spacewar, in which competing spaceships fired torpedoes at each other while orbiting a central sun. Such games are of course a commonplace now, but Spacewar was the first.13 When the improved PDP-6 arrived at MIT in the mid 1960s, it was used for a wide range of hacker projects, including The Great Subway Hack, in which one of the hackers went down to New York City and managed to travel to every single subway stop using a single subway token, thanks to a schedule interactively updated by the PDP-6 on the basis of phone calls from MIT train spotters stationed around Manhattan.

  As I mentioned in the first essay, the meaning of the term "computer hacker" has changed over the years; "hacker" is now often used to refer to more or less criminal types who use computer networks for purposes of fraud or espionage. This linguistic drift has been driven by the kinds of stories about computers which the press chooses to report. Unable to grasp the concept of a purely joyous manipulation of information, the media prefer to look for stories about the dreary old Moloch themes of money, power and war. But in the original sense of the word, a computer hacker is a person who likes to do interesting things with machines - a person, if you will, who'd rather look at a computer monitor than at a television screen.

  According to Steven Levy's book, the MIT hackers went so far as to formulate a credo known as the Hacker Ethic:

  1) Access to computers should be unlimited and total.

  2) All information should be free.

  3) Mistrust authority - promote decentralization.

  4) Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.

  5) You can create art and beauty on a computer.

  6) Computers can change your life for the better.14

  13. Brian Silverman and some other hackers have recently reconstructed Spacewar. They recreated historically accurate code for the program and run it on a PDP-1 emulator they wrote in Java, as an application you can run over the Web. The reconstruction is at http://el.www.media.mit.edu/groups/el/projects/spacewar.

  Personal Computers

  When first promulgated, the principles of the Hacker Ethic seemed like strange, unrealistic ideas, but now there are ever-increasing numbers of people who believe them. This is mostly thanks to the fact that personal computers have spread everywhere.

  In 1971, the Intel Corporation began making integrated circuit chips which had an entire computer processor on them. The first of these chips used four-bit "words" of memory and was called the 4004; it was quickly followed by the eight-bit 8008 and 8080. An obscure company called MITS (Micro Instrumentation and Telemetry Systems) in Albuquerque, New Mexico, had the idea of putting the Intel 8080 chip in a box and calling it the Altair computer. A mock-up of the Altair appeared on the cover of the January 1975 issue of Popular Electronics, and the orders began pouring in. This despite the daunting facts that: firstly, the Altair was sold simply as a kit of parts which you had to assemble; secondly, once the Altair was assembled the only way to put a program into it was by flicking switches (eight flicks per byte of program code); and thirdly, the only way to get output from it was to look at a row of eight tiny little red diode lights.

  Nowhere was the Altair more enthusiastically greeted than in Silicon Valley, that circuit-board of towns and freeways that sprawls along the south end of the San Francisco Bay from San Jose to Palo Alto. This sunny, breezy terrain was already filled with electronics companies such as Fairchild, Varian and Hewlett-Packard, which did good business supplying local military contractors like Lockheed. Catalyzed by the Altair, a hobbyist group named the Homebrew Computer Club formed.

  One of the early Homebrew high points was when a hardware hacker named Steve Dompier found that if he put his radio next to his Altair, the electrical fields from certain of the computer's operations could make the radio hum at various pitches. After several days of feverish switch flicking, Dompier was able to make his Altair-plus-radio system play the Beatles' "Fool on the Hill" - followed by "Daisy," the same song that the dying computer HAL sings in the classic science fiction movie 2001.

  14. Steven Levy, Hackers: Heroes of the Computer Revolution, Doubleday, New York 1984, pp. 40-45.

  One of the regulars at the Homebrew Computer Club meetings was a shaggy young man named Steve Wozniak. Rather than assembling an Altair, Woz concocted his own computer out of an amazingly minimal number of parts. He and his friend Steve Jobs decided to go into business in a small way, and they sold about 50 copies of Wozniak's first computer through hobbyist publications. The machine was called an Apple, and it cost $666.66. And then Wozniak and Jobs started totally cranking. In 1977 they released the Apple II, which had the power of the old mainframe computers of the 1960s . . . plus color and sound. The Apple II sold and sold; by 1980, Wozniak and Jobs were millionaires.

  The next big step in the development of the personal computer happened in 1981 when IBM released its own personal computer, the IBM PC. Although not so well-designed a machine as the Apple II, the IBM PC had the revolutionary design idea of using an open architecture which would be easy for other manufacturers to copy. Each Apple computer included a ROM (read-only memory) chip with certain secret company operating system routines on it, and there was no way to copy these chips. IBM, on the other hand, made public the details of how its operating system worked, making it possible for people to clone it. Their processor was a standard Intel 8088 (not to be confused with the Altair's 8080), a low-cost version of the sixteen-bit 8086. The floodgates opened and a torrent of inexpensive IBM PC compatible machines gushed into the marketplace. Apple's release of the Macintosh in 1984 made the IBM PC architecture look shabbier than ever, but the simple fact that IBM PC clones were cheaper than Macintoshes led to these machines taking the lion's share of the personal computer market. With the coming of the Microsoft Windows operating systems, the "Wintel" (for Windows software with Intel chips) clone machines acquired Mac-like graphical user interfaces that made them quite comfortable to use.


  This brings us reasonably close to the present, so there's not much point in going over more chronological details. One of the things that's exciting about the history of computers is that we are living inside it. It's still going on, and no final consensus has yet been reached.

  The Joy of Hacking

  For someone who writes programs or designs computer hardware, there is a craftsperson's pleasure in getting all the details right. One misplaced symbol or circuit wire can be fatal. Simply to get such an elaborate structure to work provides a deep satisfaction for certain kinds of people. Writing a program or designing a chip is like working a giant puzzle with rules that are, excitingly, never quite fully known. A really new design is likely to be doing things that nobody has ever tried to do before. It's fresh territory, and if your hack doesn't work, it's up to you to figure out some way to fix things.

  Hackers are often people who don't relate well to other people. They enjoy the fact that they can spend so much time interacting with a non-emotional computer. The computer's responses are clean and objective. Unlike, say, a parent or an officious boss, the computer is not going to give you an error message just because it doesn't like your attitude or your appearance. A computer never listens to the bombast of the big men on campus or the snide chatter of the cheerleaders, no, the computer will only listen to the logical arabesques of the pure-hearted hacker.

  Anyone with a computer is of necessity a bit of a hacker. Even if all you use is a word processor or a spreadsheet and perhaps a little electronic mail, you soon get comfortable with the feeling that the space inside your computer is a place where you can effectively do things. You're proud of the tricks you learn for making your machine behave. Thanks to your know-how, your documents are saved and your messages come and go as intended.

  The world of the computer is safe and controlled; inside the machine things happen logically. At least this is how it's supposed to be. The computer is meant to be a haven from the unpredictable chaos of interpersonal relations and the bullying irrationality of society at large. When things do go wrong with your computer - like when you suffer up the learning curve of a new program or, even worse, when you install new hardware or a new operating system - your anxiety and anger can grow quite out of proportion. "This was supposed to be the one part of the world that I can control!" But all computer ailments do turn out to be solvable, sometimes simply by asking around, sometimes by paying for a new part or for the healing touch of a technician. The world of the computer is a place of happy endings.

  Another engaging thing about the computer is that its screen can act as a window into any kind of reality at all. Particularly if you write or use graphics programs, you have the ability to explore worlds never before seen by human eye. Physical travel is wearying, and travel away from Earth is practically impossible. But with a computer you can go directly to new frontiers just as you are.

  The immediacy of a modern computer's response gives the user the feeling that he or she is interacting with something that is real and almost alive. The space behind the screen merges into the space of the room, and the user enters a world that is part real and part computer - the land of cyberspace. When you go outside after a long computer session, the world looks different, with physical objects and processes taking on the odd, numinous chunkiness of computer graphics and computer code. Sometimes new aspects of reality will become evident.

  I've always felt like television is, on the whole, a bad thing. It's kind of sad to be sitting there staring at a flickering screen and being manipulated. Using a computer is more interactive than watching television, and thus seems more positive. But even so, computers are somewhat like television and are thus to some extent forces of evil. I was forcefully reminded of this just yesterday, when my son Tom and I stopped in at the Boardwalk amusement park in Santa Cruz.

  Tom's twenty-five now, so we were cool about it and only went on two rides. The first was the Big Dipper, a wonderful old wooden roller coaster rising up right next to the Monterey Bay. The streaming air was cool and salty, the colors were bright and sun-drenched, and the cars moved through a long tunnel of sound woven from screams and rattles and carnival music and distant waves. It was wonderful.

  The second ride we went on was a Virtual Reality ride in which nine people are squeezed into a windowless camper van mounted on hydraulic legs. On the front wall of the airless little van was a big screen showing a first-person view of a ride down - a roller coaster! As the virtual image swooped and jolted, the van's hydraulic jacks bucked and shuddered in an attempt to create a kinesthetic illusion that you were really in the cyberspace of the virtual ride. Compared to the fresh memory of the true roller coaster, the Virtual Reality ride was starkly inadequate and manipulative.

  Compared to reality, computers will always be second best. But computers are here for us to use, and if we use them wisely, they can teach us to enjoy reality more than ever.

  Unpublished. Written August, 1996.


  Cellular Automata

  What Are Cellular Automata?

  Cellular automata are self-generating computer graphics movies. The most important near-term application of cellular automata will be to commercial computer graphics; in coming years you won't be able to watch television for an hour without seeing some kind of CA.

  Three other key applications of cellular automata will be to simulation of biological systems (artificial life), to simulation of physical phenomena (such as heat-flow and turbulence), and to the design of massively parallel computers.

  Most of the cellular automata I've investigated are two-dimensional cellular automata. In these programs the computer screen is divided up into "cells" which are colored pixels or dots. Each cell is repeatedly "updated" by changing its old color to a new color. The net effect of the individual updates is that you see an ever-evolving sequence of screens. A graphics program of this nature is specifically called a cellular automaton when it is (1) parallel, (2) local, and (3) homogeneous.

  1) Parallelism means that the individual cell updates are performed independently of each other. That is, we think of all of the updates being done at once. (Strictly speaking, your computer only updates one cell at a time, but we use a buffer to store the new cell values until a whole screen's worth has been computed, and only then refresh the display.)

  2) Locality means that when a cell is updated, its new color value is based solely on the old color values of the cell and of its nearest neighbors.

  3) Homogeneity means that each cell is updated according to the same rules. Typically the color values of the cell and of its nearest eight neighbors are combined according to some logico-algebraic formula, or are used to locate an entry in a preset lookup table. (A short code sketch of one such update step follows this list.)
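  To make the three conditions concrete, here is a minimal sketch of one update step in Python. The particular rule - average the eight neighbors and add one, modulo the number of colors - is only an illustrative choice, not a reconstruction of any actual Cellab rule; what matters is the shape of the loop: one buffer for the new screen (parallelism), only the cell's nearest neighbors consulted (locality), and the same formula applied everywhere (homogeneity).

  import random

  NUM_COLORS = 256  # assumed number of cell colors for this illustration

  def update(grid):
      """Return the next CA screen; grid is a list of lists of color numbers."""
      rows, cols = len(grid), len(grid[0])
      new_grid = [[0] * cols for _ in range(rows)]  # buffer for the new screen
      for r in range(rows):
          for c in range(cols):
              # Sum the eight nearest neighbors, wrapping around the screen edges.
              total = 0
              for dr in (-1, 0, 1):
                  for dc in (-1, 0, 1):
                      if dr == 0 and dc == 0:
                          continue
                      total += grid[(r + dr) % rows][(c + dc) % cols]
              # The same local rule for every cell: neighbor average plus one.
              new_grid[r][c] = (total // 8 + 1) % NUM_COLORS
      return new_grid

  # Usage: start from a random screen and run a few generations.
  grid = [[random.randrange(NUM_COLORS) for _ in range(64)] for _ in range(64)]
  for _ in range(10):
      grid = update(grid)

  A lookup-table rule would have exactly the same structure; only the line computing new_grid[r][c] would change, indexing a preset table with the nine old color values instead of evaluating a formula.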


  [Figure: A cellular automaton rule called "Ruglap." (Generated by Cellab.)]

  Cellular automata can act as good models for physical, biological and sociological phenomena. The reason for this is that each person, or cell, or small region of space "updates" itself independently (parallelism), basing its new state on the appearance of its immediate surroundings (locality) and on some generally shared laws of change (homogeneity).

  As a simple example of a physical CA, imagine sitting at the edge of a swimming pool, stirring the water with your feet. How quickly the pool's surface is updated! The "computation" is so fast because it is parallel: all the water molecules are computing at once (parallelism). And how does a molecule compute? It reacts to forces from its neighbors (locality), in accordance with the laws of physics (homogeneity).
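  A crude version of this kind of physical simulation is easy to write down. The sketch below, again an illustration of mine rather than anything from Cellab, treats a one-dimensional row of cells as temperatures in a metal bar: each interior cell becomes the average of itself and its two neighbors, using the same rule everywhere, with every new value computed from the old row before any are written back. Iterating it behaves like heat flow.

  def heat_step(row):
      """One step of a one-dimensional heat-flow CA; the two end cells stay fixed."""
      new_row = row[:]  # buffer, so every cell is updated from the old values only
      for i in range(1, len(row) - 1):
          # Homogeneous local rule: each interior cell becomes the average of
          # itself and its two nearest neighbors.
          new_row[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0
      return new_row

  # Usage: a hot spot in the middle of a cold bar gradually spreads out.
  bar = [0.0] * 21
  bar[10] = 100.0
  for _ in range(50):
      bar = heat_step(bar)

  The swimming pool is the same picture in two dimensions, though genuine waves call for a slightly richer rule that also remembers the previous screen.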

  Why Cellular Automata?

  The remarkable thing about CAs is their ability to produce interesting and logically deep patterns on the basis of very simply stated preconditions. Iterating the steps of a CA computation can produce fabulously rich output. A good CA is like an acorn which grows an oak tree, or more accurately, a good CA is like the DNA inside the acorn, busily orchestrating the protein nanotechnology that builds the tree.

  One of computer science's greatest tasks at the turn of the Millennium is to humanize our machines by making them "alive." The dream is to construct intelligent artificial life (called "A-life" for short). In Cambridge, Los Alamos, Silicon Valley and beyond, this is the programmer's Great Work as surely as the search for the Philosopher's Stone was the Great Work of the medieval alchemists.

 
