Hackers


by Steven Levy


  Gosper in particular had difficulty accepting joint control of the PDP-6. His behavior reminded Fredkin of Roark, the architect in Ayn Rand’s The Fountainhead, who designed a beautiful building; when Roark’s superiors took control of the design and compromised its beauty, Roark blew up the building. Fredkin later recalled Gosper telling him that if time sharing were implemented on the PDP-6, Gosper would be compelled to physically demolish the machine. “Just like Roark,” Fredkin later recalled. “He felt if this terrible thing was to be done, you would have to destroy it. And I understood this feeling. So I worked out a compromise.” The compromise allowed the machine to be run late at night in single-user mode so the hackers could run giant display programs and have the PDP-6 at their total command.

  The entire experiment in time sharing did not work out badly at all. The reason was that a special, new time-sharing system was created, a system that had the Hacker Ethic in its very soul.

  • • • • • • • •

  The core of the system was written by Greenblatt and Nelson, in weeks of hard-core hacking. After some of the software was done, Tom Knight and others began the necessary adjustments to the PDP-6 and the brand-new memory addition—a large cabinet with the girth of two laundromat-size washing machines, nicknamed Moby Memory. Although the administration approved of the hackers’ working on the system, Greenblatt and the rest exercised full authority on how the system would turn out. An indication of how this system differed from the others (like the Compatible Time-sharing System, or CTSS) was the name that Tom Knight gave the hacker program: the Incompatible Time-sharing System (ITS).

  The title was particularly ironic because, in terms of friendliness to other systems and programs, ITS was much more compatible than CTSS. True to the Hacker Ethic, ITS could easily be linked to other things—that way it could be infinitely extended so users could probe the world more effectively. As in any time-sharing system, several users would be able to run programs on ITS at the same time. But on ITS, one user could also run several programs at once. ITS also allowed considerable use of the displays, and had what was for the time a very advanced system of editing that used the full screen (“years before the rest of the world,” Greenblatt later boasted). Because the hackers wanted the machine to run as swiftly as it would have done had it not been time-shared, Greenblatt and Nelson wrote machine language code which allowed for unprecedented control in a time-sharing system.

  There was an even more striking embodiment of the Hacker Ethic within ITS. Unlike almost any other time-sharing system, ITS did not use passwords. It was designed, in fact, to allow hackers maximum access to any user’s file. The old practice of having paper tapes in a drawer, a collective program library where you’d have people use and improve your programs, was embedded in ITS; each user could open a set of personal files, stored on a disk. The open architecture of ITS encouraged users to look through these files, see what neat hacks other people were working on, look for bugs in the programs, and fix them. If you wanted a routine to calculate sine functions, for instance, you might look in Gosper’s files and find his ten-instruction sine hack. You could go through the programs of the master hackers, looking for ideas, admiring the code. The idea was that computer programs belonged not to individuals but to the world of users.

  ITS also preserved the feeling of community that the hackers had when there was only one user on the machine, and people could crowd around him to watch him code. Through clever crossbar switching, not only could any user on ITS type a command to find out who else was on the system, but he could actually switch himself to the terminal of any user he wanted to monitor. You could even hack in conjunction with another user—for instance, Knight could log in, find out that Gosper was on one of the other ports, and call up his program—then he could write lines of code in the program Gosper was hacking.

  This feature could be used in all sorts of ways. Later on, after Knight had built some sophisticated graphics terminals, a user might be wailing away on a program and suddenly on screen there would appear this six-legged . . . bug. It would crawl up your screen and maybe start munching on your code, spreading little phosphorous crumbs all over. On another terminal, hysterical with high-pitched laughter, would be the hacker who was telling you, in this inscrutable way, that your program was buggy. But even though any user had the power not only to do that sort of thing, but to go in your files and delete (“reap,” as they called it) your hard-hacked programs and valuable notes, that sort of thing wasn’t done. There was honor among hackers on ITS.

  The faith that ITS had in users was best shown in its handling of the problem of intentional system crashes. Formerly, a hacker rite of passage would be breaking into a time-sharing system and causing such digital mayhem—maybe by overwhelming the registers with looping calculations—that the system would “crash.” Go completely dead. After a while a hacker would grow out of that destructive mode, but it happened often enough to be a considerable problem for people who had to work on the system. The more safeguards the system had against this, the bigger the challenge would be for some random hacker to bring the thing to its knees. Multics, for instance, required a truly nontrivial hack before it bombed. So there’d always be macho programmers proving themselves by crashing Multics.

  ITS, in contrast, had a command whose specific function was crashing the system. All you had to do was type KILL SYSTEM, and the PDP-6 would grind to a halt. The idea was to take all the fun away from crashing the system by making it trivial to do that. On rare occasions, some loser would look at the available commands and say, “Wonder what KILL does?” and bring the system down, but by and large ITS proved that the best security was no security at all.

  Of course, as soon as ITS was put up on the PDP-6 there was a flurry of debugging, which, in a sense, was to go on for well over a decade. Greenblatt was the most prominent of those who spent full days “hacking ITS”—seeking bugs, adding new features, making sections of it run faster . . . working on it so much that the ITS environment became, in effect, a home for systems hackers.

  In the world that was the AI lab, the role of the systems hacker was central. The Hacker Ethic allowed anyone to work on ITS, but the public consequences of systems hacking threw a harsh spotlight on the quality of your work—if you were trying to improve the MIDAS assembler or the ITS-DDT debugger, and you made a hideous error, everyone’s programs were going to crash, and people were going to find out what loser was responsible. On the other hand, there was no higher calling in hackerism than quality systems hacking.

  The planners did not regard systems hacking with similar esteem. The planners were concerned with applications—using computers to go beyond computing, to create useful concepts and tools to benefit humanity. To the hackers, the system was an end in itself. Most hackers, after all, had been fascinated by systems since early childhood. They had set aside almost everything else in life once they recognized that the ultimate tool in creating systems was the computer: not only could you use it to set up a fantastically complicated system, at once byzantine and elegantly efficient, but then, with a “Moby” operating system like ITS, that same computer could actually be the system. And the beauty of ITS was that it opened itself up, made it easy for you to write programs to fit within it, begged for new features and bells and whistles. ITS was the hacker living room, and everyone was welcome to do what he could to make himself comfortable; to find and decorate his own little niche. ITS was the perfect system for building . . . systems!

  It was an endlessly spiraling logical loop. As people used ITS, they might admire this feature or that, but most likely they would think of ways to improve it. This was only natural, because an important corollary of hackerism states that no system or program is ever completed. You can always make it better. Systems are organic, living creations: if people stop working on them and improving them, they die.

  When you completed a systems program, be it a major effort like an assembler or debugger or something quick and (you hoped) elegant, like an interface output multiplexer, you were simultaneously creating a tool, unveiling a creation, and fashioning something to advance the level of your own future hacking. It was a particularly circular process, almost a spiritual one, in which the systems programmer was a habitual user of the system he was improving. Many virtuoso systems programs came out of remedies to annoying obstacles which hackers felt prevented them from optimum programming. (Real optimum programming, of course, could only be accomplished when every obstacle between you and the pure computer was eliminated—an ideal that probably won’t be fulfilled until hackers are somehow biologically merged with computers.) The programs ITS hackers wrote helped them to program more easily, made programs run faster, and allowed programs to gain from the power that comes from using more of the machine. So not only would a hacker get huge satisfaction from writing a brilliant systems program—a tool which everyone would use and admire—but from then on he would be that much further along in making the next systems program.

  To quote a progress report written by hacker Don Eastlake five years after ITS was first running:

  The ITS system is not the result of a human wave or crash effort. The system has been incrementally developed almost continuously since its inception. It is indeed true that large systems are never “finished.” . . . In general, the ITS system can be said to have been designer implemented and user designed. The problem of unrealistic software design is greatly diminished when the designer is the implementor. The implementor’s ease in programming and pride in the result is increased when he, in an essential sense, is the designer. Features are less likely to turn out to be of low utility if users are their designers and they are less likely to be difficult to use if their designers are their users.

  The prose was dense, but the point was clear—ITS was the strongest expression yet of the Hacker Ethic. Many thought that it should be a national standard for time-sharing systems everywhere. Let every computer system in the land spread the gospel, eliminating the odious concept of passwords, urging the unrestricted hands-on practice of system debugging, and demonstrating the synergistic power that comes from shared software, where programs belong not to the author but to all users of the machine.

  In 1968, major computer institutions held a meeting at the University of Utah to come up with a standard time-sharing system to be used on DEC’s latest machine, the PDP-10. The Ten would be very similar to the PDP-6, and one of the two operating systems under consideration was the hackers’ Incompatible Time-sharing System. The other was TENEX, a system written by Bolt Beranek and Newman that had not yet been implemented. Greenblatt and Knight represented MIT at the conference, and they presented an odd picture—two hackers trying to persuade the assembled bureaucracies of a dozen large institutions to commit millions of dollars of their equipment to a system that, for starters, had no built-in security.

  They failed.

  Knight would later say that it was political naiveté that lost it for the MIT hackers. He guessed that the fix was in even before the conference was called to order—a system based on the Hacker Ethic was too drastic a step for those institutions to take. But Greenblatt later insisted that “we could have carried the day if [we’d] really wanted to.” But “charging forward,” as he put it, was more important. It was simply not a priority for Greenblatt to spread the Hacker Ethic much beyond the boundaries of Cambridge. He considered it much more important to focus on the society at Tech Square, the hacker Utopia which would stun the world by applying the Hacker Ethic to create ever more perfect systems.

  Chapter 7. Life

  They would later call it a Golden Age of hacking, this marvelous existence on the ninth floor of Tech Square. Spending their time in the drab machine room and the cluttered offices nearby, gathered closely around terminals where rows and rows of green characters of code would scroll past them, marking up printouts with pencils retrieved from shirt pockets, and chatting in their peculiar jargon over this infinite loop or that losing subroutine, the cluster of technological monks who populated the lab were as close to paradise as they would ever be. A benevolently anarchistic lifestyle dedicated to productivity and PDP-6 passion. Art, science, and play had merged into the magical activity of programming, with every hacker an omnipotent master of the flow of information within the machine. The debugged life in all its glory.

  But as much as the hackers attempted to live the hacker dream without interference from the pathetically warped systems of the “real world,” it could not be done. Greenblatt and Knight’s failure to convince outsiders of the natural superiority of the Incompatible Time-sharing System was only one indication that the total immersion of a small group of people into hackerism might not bring about change on the massive scale that all the hackers assumed was inevitable. It was true that, in the decade since the TX-0 was first delivered to MIT, the general public and certainly the other students on campus had become more aware of computers in general. But they did not regard computers with the same respect and fascination as did the hackers. And they did not necessarily regard the hackers’ intentions as benign and idealistic.

  On the contrary, many young people in the late 1960s saw computers as something evil, part of a technological conspiracy where the rich and powerful used the computer’s might against the poor and powerless. This attitude was not limited to students protesting, among other things, the now exploding Vietnam War (a conflict fought in part by American computers). The machines which stood at the soul of hackerism were also loathed by millions of common, patriotic citizens who saw computers as a dehumanizing factor in society. Every time an inaccurate bill arrived at a home, and the recipient’s attempts to set it right wound up in a frustrating round of calls—usually leading to an explanation that “the computer did it,” and only herculean human effort could erase the digital blot—the popular contempt toward computers grew. Hackers, of course, attributed those slipups to the brain-damaged, bureaucratic, batch-processed mentality of IBM. Didn’t people understand that the Hacker Ethic would eliminate those abuses by encouraging people to fix bugs like thousand-dollar electric bills? But in the public mind there was no distinction between the programmers of Hulking Giants and the AI lab denizens of the sleek, interactive PDP-6. And in that public mind all computer programmers, hackers or not, were seen either as wild-haired mad scientists plotting the destruction of the world or as pasty-skinned, glassy-eyed automatons, repeating wooden phrases in dull monotones while planning the next foray into technological big-brotherism.

  Most hackers chose not to dwell on those impressions. But in 1968 and 1969, the hackers had to face their sad public images, like it or not.

  A protest march that climaxed at Tech Square dramatically indicated how distant the hackers were from their peers. Many of the hackers were sympathetic to the antiwar cause. Greenblatt, for instance, had gone to a march in New Haven, and had done some phone line hookups for antiwar radicals at the National Strike Information Center at Brandeis. And hacker Brian Harvey was very active in organizing demonstrations; he would come back and tell of the low esteem in which the AI lab was held by the protesters.

  There was even some talk at antiwar meetings that some of the computers at Tech Square were used to help run the war. Harvey would try to tell them it wasn’t so, but the radicals would not only disbelieve him but get angry that he’d try to feed them bullshit.

  The hackers shook their heads when they heard of that unfortunate misunderstanding. One more example of how people didn’t understand! But one charge leveled at the AI lab by the antiwar movement was entirely accurate: all the lab’s activities, even the most zany or anarchistic manifestations of the Hacker Ethic, had been funded by the Department of Defense. Everything, from the Incompatible Time-sharing System to Peter Samson’s subway hack, was paid for by the same Department of Defense that was killing Vietnamese and drafting American boys to die overseas.

  The general AI lab response to that charge was that the Defense Department’s Advanced Research Projects Agency (ARPA), which funded the lab, never asked anyone to come up with specific military applications for the computer research engaged in by hackers and planners. ARPA had been run by computer scientists; its goal had been the advancement of pure research. During the late 1960s a planner named Robert Taylor was in charge of ARPA funding, and he later admitted to diverting funds from military, “mission-oriented” projects to projects that would advance pure computer science. It was only the rarest hacker who called the ARPA funding “dirty money.”

  Almost everyone else, even people who opposed the war, recognized that ARPA money was the lifeblood of the hacking way of life. When someone pointed out the obvious—that the Defense Department might not have asked for specific military applications for the Artificial Intelligence and systems work being done, but still expected a bonanza of military applications to come from the work (who was to say that all that “interesting” work in vision and robotics would not result in more efficient bombing raids?)—the hackers would either deny the obvious (Greenblatt: “Though our money was coming from the Department of Defense, it was not military”) or talk like Marvin Minsky: “There’s nothing illegal about a Defense Department funding research. It’s certainly better than a Commerce Department or Education Department funding research . . . because that would lead to thought control. I would much rather have the military in charge of that . . . the military people make no bones about what they want, so we’re not under any subtle pressures. It’s clear what’s going on. The case of ARPA was unique, because they felt that what this country needed was people good in defense technology. In case we ever needed it, we’d have it.”

  Planners thought they were advancing true science. Hackers were blithely formulating their tidy, new-age philosophy based on free flow of information, decentralization, and computer democracy. But the antimilitary protesters thought it was a sham, since all that so-called idealism would ultimately benefit the War Machine that was the Defense Department. The antiwar people wanted to show their displeasure, and the word filtered up to the Artificial Intelligence lab one day that the protesters were planning a march ending with a rally right there on the ninth floor. There, protesters would gather to vividly demonstrate that all of them—hackers, planners, and users—were puppets of the Defense Department.

 
