The people building the mainframes might not have put it quite so apocalyptically, but they agreed that old notions of privacy had disappeared nearly as soon as the first UNIVAC hit the market. “In fact, there is at this time very little privacy in the private life of all of us,” observed pioneering computer designer Evelyn Berezin in a letter to Gallagher. The thousands of machines abuzz in government agencies and corporate computer banks already possessed an astounding amount of personal information, lightly policed and largely unprotected. Programmers might not yet know what to do with all this data, but accelerating computer power indicated that they soon would. As Paul Baran of RAND reminded one inquiring journalist, “Behind all this creating of records is the implicit assumption that they will someday be of use.”26
In the grim summer of Nixon’s resignation, privacy became the issue that nearly everyone in a fractious and wounded Washington could agree upon. “The data bank society is nearly upon us now,” warned conservative conscience Barry Goldwater at hearings convened by Ervin that June. “We must program the programmers while there is still some personal liberty left.” After a November election that saw a wave of young, reform-minded candidates (most of them Democrats) elected to Congress as “Watergate babies,” Congress hurriedly bundled together a flurry of legislative proposals to pass the Privacy Act of 1974. Shutting the door on a truly dreadful year in politics, Gerald Ford signed it into law on New Year’s Eve.27
The opening lines of the Privacy Act made it clear that computers had been its catalyst: “Increasing use of computers and sophisticated information technology, while essential to the efficient operations of the Government, has greatly magnified the harm to individual privacy that can occur.” And like most of the Congressional hearings on the subject over the years prior to its passage, the Act aimed its powers squarely at the federal government. Even though corporations were among the most enthusiastic practitioners of the dark arts of surveillance, personality testing, and sucking massive amounts of consumer information into giant data banks, the Congressional investigations focused on the villain they could see, and whose budget they controlled: the federal bureaucracy.28
In being so relentlessly focused on the government, America’s privacy warriors paid little attention to what the technology industry was doing, or ought to be doing. Few regulations fettered the way in which American companies gathered data on the people who used their products. The citizen computer user might be able to put some limits on what the government knew, but she had little recourse when it came to controlling what companies might discover. This regulatory latitude, of course, would ultimately make possible one of modern Silicon Valley’s greatest business triumphs: to gather, synthesize, and personalize vast amounts of information, and profit richly from it.
SMALL IS BEAUTIFUL
Even as politicians and activists pointed to runaway tech as a source of society's many ills, they embraced new applications of technology as the means to make it right. The computers-in-education philosophy espoused by people like Dean Brown was one manifestation of a broader push to think about computers as tools for social change. The 1970 annual meeting of the Association for Computing Machinery, the ACM—usually a days-long geek's paradise of technical papers—devoted the entire program to "how computers can help men solve problems of the cities, health, environment, government, education, the poor, the handicapped and society at large." Consumer crusader Ralph Nader was a keynote speaker. The Nixon Administration's short-lived Technology Opportunities Program reflected the prevailing mood, asking industry to submit its suggestions for how computer and communications technologies could solve social problems.29
This played out in popular culture as well. Yale law professor and privacy advocate Charles Reich's homage to countercultural values, The Greening of America, rested atop the best-seller lists for weeks after its September 1970 release. Just as in the calls to action Mario Savio shouted through his megaphone, Reich framed the modern world as a machine in need of a reboot: "Americans have lost control of the machinery of their society, and only new values and a new culture can restore control." British economist E. F. Schumacher placed a similar emphasis on technology in his 1973 bestseller Small Is Beautiful: the relentless push for economic growth must give way to a new philosophy of "enoughness," and large, inhuman systems needed to be replaced by "a technology with a human face."30
Underscoring the turn against bigness, the largest computer company of them all had found itself under siege from competitors and regulators. On the last working day of the Johnson Administration in 1969, the Department of Justice filed an antitrust lawsuit against IBM. It wasn’t the first time Big Blue had been in the government’s antimonopolistic crosshairs, having been the target of a Truman-era action that had ultimately forced the split between its computer hardware and services businesses. Now, the soaring market share and cash generated by IBM’s wildly successful System/360 line attracted the government’s attention. The lawsuit formed a backdrop as liberals in Congress began to question business monopoly. In 1972, Michigan Democrat Philip A. Hart, known as the “conscience of the Senate,” introduced a sweepingly antimonopolistic “Industrial Reorganization Act” into Congress. Hart described it as “the greatest effort which has been put forth to finding a solution for economic concentration.”31
The adverse effects of this concentration on consumers seemed obvious when it came to the computer industry. Automation of everything from paychecks to airline reservations may have increased business efficiency, but it created everyday hassles when the system messed up. Every misdirected utility bill or lost hotel booking became a proof point for the computer skeptics. Industry executives found themselves on the defensive. “Perhaps if we look inside a little bit,” one protested to Hart’s committee, “you’ll observe that it is not a computer that ever makes a goof; it’s the people who use it and the people who write programs for it.”32
Baby-boomer activists Lee Felsenstein and Liza Loop never would have imagined they’d have much in common with a suit-and-tie executive of a big computer company. But all were pushing the same message: the power of the computer came from its user.
CHAPTER 9
The Personal Machine
Down at Stanford, similar ideas had been brewing.
As the turbulent politics of the 1960s swirled around it, the once placid, seemingly apolitical Farm had turned into a hotbed of student activism. Like their counterparts at Berkeley, Stanford students mobilized for civil rights, packing Memorial Auditorium to hear Martin Luther King Jr. speak on the eve of 1964's Freedom Summer, rallying in support of striking California farmworkers in 1965, and hosting a "Black Power Day" featuring Stokely Carmichael in 1966. By 1967, this activist energy had shifted much of its focus to the war in Vietnam. That February, Stanford undergraduates shouted down a visiting Vice President Hubert Humphrey and held all-night peace vigils in the university's Memorial Church. Not too long after, students burned an effigy of Hoover Institution director Glenn Campbell on the steps of Hoover Tower.1
Fred Terman had spent two decades turning Leland and Jane Stanford’s sentimental project into one of the nation’s preeminent hubs of defense research. Now, his steeples of excellence became the targets of ferocious student anger about the Vietnam War. For nine days in April 1969, several hundred students occupied the Stanford Applied Electronics Lab, demanding that the university put an end to classified research. Soon after, university administrators cut ties with SRI and its controversial portfolio of classified projects. The decision disappointed the students, who had hoped that SRI would be shut down altogether.2
Had that happened, Stanford would have squelched an operation that was building an entirely new universe of connected, human-scale computing—the home of Shakey the Robot, of Dean Brown's education lab, and of Doug Engelbart's "research center for augmenting human intellect."
In Engelbart's emphasis on networked collaboration, this low-key member of the Greatest Generation was completely in sync with the radical political currents swirling around the Stanford campus and the bland suburban storefronts of the South Bay. Just down the road from SRI's Menlo Park facility was Kepler's Books, which owner Roy Kepler had turned into an antiwar and countercultural salon. Beat poets, Joan Baez, and the Grateful Dead all made appearances at Kepler's, and the store's book talks and rap sessions became can't-miss events for many in the local tech community. That included members of the Engelbart lab, who'd drop in on their way to catch the commuter train home. And Engelbart's vision of expanding the mind's capabilities through networked technology had much in common with that of Michael Murphy, a Stanford graduate who co-founded the Esalen Institute on the Big Sur coast in 1962. Esalen's goal, Murphy told a Life reporter, was "reaching a terra incognita of consciousness." While Esalen had meditation and encounter sessions, Engelbart used computers to, as his friend Paul Saffo later put it, "create a new home for the mind."3
Engelbart’s December 1968 demo had been a revelation and an inspiration to the clan of Bay Area programmers and visionaries who were thinking about computers as tools for work, education, and play. Dean Brown’s lab used Engelbart’s mouse to test how computers augmented student learning. The event also brought new converts to the movement, notably Stewart Brand, who had joined the demo team as a journeyman videocam operator, and left having been turned on to the power of networked computing. Brand and Albrecht’s collaboration, the Portola Institute, and the Whole Earth Catalog followed. The demo “quite literally branched the course of computing off the course it had been going for the previous ten years,” remembered Saffo, “and things have never quite been the same again.”4
THE IDEA FACTORY
Not too long after, three thousand miles away from the robot-patrolled halls of SRI, a group of corporate executives sat in a wood-paneled office, trying to figure out where the next generation of their company's products would come from. Xerox was a relatively young company, but its ascent had been rocket-like, and astoundingly lucrative. After developing some of the first office photocopy machines less than two decades earlier, it had cornered the market as thoroughly as IBM dominated mainframe computing.
As the cash rolled in, Xerox decided to follow the example of its more venerable predecessors like AT&T, and set up a stand-alone research facility. And where better to do it than Palo Alto? Top electronics firms had been setting up labs in Fred Terman’s orbit for two decades by then, and no other place could match the combination of Class A real estate, Class A engineering talent, and Class A weather. “Xerox Plans Laboratory for Research in California,” read the tiny item buried inside The New York Times business section in the spring of 1970. Its purpose, said Xerox research chief Jacob Goldman, was “to advance data processing technology.”5
The innocuous announcement marked the launch of a venture that eventually turned Doug Engelbart's vision into market reality. Ironically, although Xerox bankrolled the enterprise, the company was not the ultimate beneficiary of the breakthroughs it sponsored. It was not a computer company, and the copier business was simply too lucrative to justify creating entirely new product lines and sales channels in order to become one. Instead, its Palo Alto Research Center, or PARC, became the seedbed for new, mostly California-based companies whose profits ultimately would dwarf those of the photocopier and mainframe industries combined. Xerox became to personal computing what NASA had been to microchips: a deep-pocketed financier that pushed money toward research and development of blue-sky technology, and then mostly got out of the way.
The people who filled PARC's halls in the first years of the 1970s were tightly networked into the convivial Bay Area ecosystem of computer professionals. At a time and place dominated by the semiconductor industry, "with its famously macho disdain for women," as PARC scientist Lynn Conway put it, there was remarkable gender diversity in PARC's ranks. The new hires took advantage of Xerox's abundant resources and loose oversight to creatively interpret Goldman's definition of "data processing technology," pursuing projects inspired by Doug Engelbart's ideas about augmented intelligence and by hacker culture more generally. Engelbart's SRI operation had drifted after the great demo—investors couldn't figure out the devices' commercial potential—and several members of his team moved over to PARC. Still more came from the academic engineering diaspora of Stanford and Berkeley.
There was Alan Kay, who wanted to develop a computer small enough to fit in a book bag. Down the hall, Bob Metcalfe and David Boggs invented a way to connect multiple computers that they called the Ethernet. Across the way were Adele Goldberg and Dan Ingalls, also computer-education evangelists, who collaborated with Kay on a transformative and classroom-friendly new computer language called Smalltalk. Conway, who’d already made major breakthroughs in high-performance computer architecture in an earlier career at IBM, was recruited from Memorex. And coming in to head PARC’s Computer Science Laboratory was none other than Bob Taylor, the man who only a few years before had pulled together that marvelous academic network called the ARPANET.6
Xerox had assembled an all-star team. But its Palo Alto facility first entered the broader public consciousness as a looser, more rebellious outfit, courtesy of an article by Stewart Brand that appeared in Rolling Stone in December 1972. Titled “Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums,” Brand’s piece told of techies commandeering the facility’s computer networks to play midnight video games, naming their offices after J. R. R. Tolkien characters, growing their hair long, and having little regard for traditional authority. Alan Kay told Brand, “A true hacker is not a group person. He’s a person who loves to stay up all night, he and the machine in a love-hate relationship . . . They’re kids who tended to be brilliant but not very interested in conventional goals. And computing is just a fabulous place for that. . . . It’s a place where you can still be an artisan.” The executives back at Xerox headquarters were so horrified by Brand’s story that they prohibited the PARC crew from ever again talking to the press. In the meantime, “Spacewar” became a founding document of Silicon Valley legend.7
The beards, beanbag chairs, and midnight video games made it hard for some contemporary observers to understand that these men and women were in the process of developing the foundational technologies for desktop computing and networking. Only three months after “Spacewar” appeared in print, the PARC team produced a prototype desktop computer. Called the Alto, the machine featured a keyboard and screen. It had a mouse. It had a graphical interface instead of text. Documents appeared on the screen looking just like they would when printed on paper. The machine even had electronic mail. Unveiled less than five years after the mother of all demos, it took Engelbart’s tools of the far-out future and put them in a machine that could fit on an office desk. It was unlike nearly every other computer in existence, in that you did not need to be a software programmer to use it.8
BEING TOM SWIFT
Beyond the leafy surroundings of Xerox’s dream factory, Lee Felsenstein’s quest for people-powered computing continued. Like most everyone, he’d spent quality time in the PARC beanbag chairs. While he was wowed by the technology, the social-justice crusader in him disliked the idea of being beholden to a corporate overlord. The Alto had cost $12,000 to build. The retail price tag promised to be more than three times as much.
Felsenstein wanted to build networks that were grounded in community, and cheap enough to be accessible to nearly anyone. He'd been a co-conspirator with Pam Hardt and the Resource One crowd in building the pioneering electronic bulletin board Community Memory, whose terminals still dotted Bay Area record stores and bean-sprout cafes. But the units were merely screens that relied on data from a central computer, and the finicky network tended to break down easily. What the world needed now, the frustrated hacker concluded, was a "smart" terminal with its own memory system.9
The popular science magazines and their "construction projects" for home hobbyists remained a go-to resource for DIY engineers like Felsenstein, and the September 1973 edition of Radio-Electronics (subtitle: "For Men with Ideas in Electronics") featured a cover story about a device that might just solve his problems. The "TV typewriter" was the brainchild of Don Lancaster, an aerospace engineer who'd exiled himself to the Arizona desert to become a fire spotter and back-to-the-land outdoorsman. Lancaster's simple device was able to transmit words typed on a keyboard onto a television screen. It took the computer hobbyist community by storm. Here was a connection between keystrokes and on-screen characters that allowed homebrewers to build a featherweight version of PARC's Alto.
Lancaster’s idea inspired Felsenstein to post notices on Community Memory and troll for ideas at PCC’s weekly potlucks, looking for input on how to build a device that was like the TV typewriter, but that had amped-up intelligence. It might have been possible to do something like this using one of Intel’s microprocessors, but those cost thousands of dollars apiece. Felsenstein wanted something cheap, smart, and easy to build.
The result was “The Tom Swift Terminal,” named after an old series of adventure books that had been a staple of mid-twentieth-century American boyhoods. Felsenstein published the schematic in the PCC newsletter in late 1974. The Tom Swift user could not only scroll up or down on the screen, but could plug in different preprinted circuit boards to add new functions—printing, calculating, playing games—regulated through a “bus” that relayed communications from peripheral to terminal. In essence, the system took apart a computer architecture usually sealed within a mass-manufactured computer or chip, and turned it into stand-alone components that any electronics nerd could build themselves. “If work is to become play,” the spread proclaimed at the top, “then tools must become toys.”