The ultimate destination of this scenic driving tour was HP, where Bill Hewlett showed him the company’s assembly line and some of its latest space-age models. De Gaulle nodded pleasantly, thinking all the while about the ways France might build similarly gleaming California-style factories and laboratories. Undersecretary of State Douglas Dillon, de Gaulle’s official host for his California visit, declared that the visiting president had been “much impressed.” Five years later, increasingly alarmed at U.S. computer makers’ aggressive forays into French markets, de Gaulle announced “Le Plan Calcul.” The massive, multiyear effort toward building a homegrown French computer industry featured expansive education programs, corporate partnerships, and—yes—research parks.
De Gaulle was among the first, but hardly the last, to look to the Santa Clara Valley as an alluring model well worth imitating. Worshipful delegations of curious officials began to make pilgrimages in the French leader’s wake: parliamentarians from Japan, university administrators from Canada, economic development functionaries from Scotland. Before the 1960s were out, Fred Terman had started traveling the globe as an economic development consultant, explaining the secrets of his industrious little valley to eager foreign audiences.16
By that time, even more of the world’s eyes were watching, for the region had become known not just for HP and Lockheed and the perpetual motion machine of Fred Terman, but for a new breed of tech firms. These were operations for whom defense contracting wasn’t the primary market, and whose products were making all kinds of machines—from computers to cars to contraptions running assembly lines—faster and lighter and more powerful. They were the companies that put the silicon in Silicon Valley, and the space race helped launch them into the stratosphere.
FUNNY LITTLE COMPANIES
This was what happened to Fairchild Semiconductor. Three months into their start-up’s existence, without yet having made a single chip, the Traitorous Eight landed a contract to manufacture 100 silicon transistors for the onboard computer of “the manned missile,” a new long-range bomber. Wisely, Bob Noyce and Gordon Moore were adamant that Fairchild conduct its own research rather than depend on government contracts that would not let them own the resulting patents. “Government funding of R&D has a deadening effect upon the incentives of the people,” declared Noyce. “This is not the way to get creative, innovative work done.” But having the government as a customer? That was less of a problem. In 1958, 80 percent of Fairchild’s book of business came from government contracts. That was only a preview of an even bigger windfall.17
In those Fairchild labs in early 1959, Jean Hoerni discovered a way to place multiple transistors on a single silicon wafer by protecting them with a coating of chemical oxide. Hoerni’s “planar process” allowed his colleague Bob Noyce to experiment with linking the transistors together, creating an integrated circuit, or IC, more powerful than any device before it. Another advantage: the material. Back in Dallas, Jack Kilby of Texas Instruments came up with the same idea nearly simultaneously, fabricating his IC using germanium instead of silicon. Noyce was the first to be granted a patent, in 1961, spurring a fierce patent fight between the two companies. Both men were eventually credited with the invention, but Noyce’s silicon device proved easier to manufacture in volume, giving the silicon-powered California firms an edge over the germanium-powered producers elsewhere.18
These elegant and tiny devices were no longer mere transistors—they were “chips” that would launch an entire industry and, ultimately, power a personal computing revolution. But that was all in the future, and the IC was painfully expensive to make and market. The first chips produced by Fairchild cost about $1,000 apiece—far more than ordinary individual transistors. Why would a corporate customer need such an expensive device? The ICs were so technical, so bleeding-edge, that people outside the small electrical engineering fraternity didn’t really grasp what they could do: store enormous amounts of information in a sliver of silicon no bigger than the head of a pin.
NASA understood. The space agency began using ICs in the guidance system of the Apollo rockets; the Air Force soon used them to improve the guidance system of its Minuteman intercontinental ballistic missiles. Fairchild received hefty contracts for both. By 1963, demand created by the Apollo and Minuteman programs had driven the price of silicon chips down from $1,000 to $25—putting them at a price point where a much broader market might buy in. The feds remained a stalwart customer, however. As the Apollo tests continued, the demand to go tinier, lighter, faster intensified. “Make those babies smaller and smaller,” the military brass barked at the chipmakers. “Give us more bang for the buck.”19
The demand for building more cheaply came from the top. Eisenhower had always been anxious about the influence and expense of the defense complex, and he had ordered DOD to trim its budgets toward the end of his term. His Democratic successors accelerated the cost-cutting—less out of nervousness about the power of the generals, and more because they had so much else they wanted to do. Kennedy and Johnson had bold ambitions to wage war on poverty, deliver new social programs, go to the moon, and cut taxes at the same time. And then there was that increasingly expensive little war in Vietnam. Deficit spending could only go so far; more fiscal reshuffling needed to happen, particularly in the defense and aerospace budget, where uncapped, sole-source contracts had led to ballooning project costs that the military wasn’t well-equipped to control.
With a push from the White House and Congress, defense agencies opened contracts to competitive bidding and started to adopt fixed-price contracts as the standard. As Johnson put it in an address to Congress in the first grieving weeks after Kennedy’s November 1963 assassination, the government needed to get “a dollar’s value for a dollar spent.” To do that, Defense Secretary Robert McNamara added pointedly, the military-industrial complex needed to bring in new players. “Subcontracts should be placed competitively,” he said, “to ensure the full play of the free enterprise system.” The pugnacious McNamara, former president of Ford Motor Company, had come into office pledging to run government more like a business. He was determined to deliver.20
The moves triggered a shakeout in the Northern California electronics industry. Large East Coast firms that had set up transistor and tube operations in the Valley cut back their operations, or got out of them altogether. Some firms merged, others were acquired. Eitel-McCullough and Varian, pioneering local firms that had dominated the microwave business, were left reeling. Engineers started calling it “the McNamara depression.” In contrast, companies that were in the IC business flourished. A swelling number of young men spun out new semiconductor firms of their own. Many had worked at Fairchild, and their start-ups became known, naturally, as “the Fairchildren.”21
Contributing to the blossoming of an industry was the extraordinary power of its marquee product. Inventors had been searching for faster and cheaper power sources since the dawn of the age of steam. The silicon chip delivered. With every year, semiconductor firms managed to multiply the interconnected logic transistors on each chip, making the computers surrounding them faster, smaller, and more powerful. Gordon Moore predicted in 1965 that the number of components per chip would double every year. An astounding prediction, but within a decade, “Moore’s Law” proved true. Once set in motion, silicon technology defied all the rules of economics, becoming the propulsive force behind a second industrial revolution.22
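To put that prediction in rough quantitative terms (an illustrative sketch added here, not part of the original account): if $N_0$ is the number of components on a chip in a base year and $t$ the number of years elapsed, Moore’s one-year doubling schedule implies

% Moore's 1965 prediction, assuming a doubling period of one year
N(t) \approx N_0 \cdot 2^{t}, \qquad N(10) \approx 1024\,N_0,

so a single decade of doublings means roughly a thousand times as many components per chip.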
The case of the Valley chipmakers underscores that public investment mattered greatly, but that how that money was spent mattered even more. The decentralized, privatized, fast-moving public contracting environment encouraged entrepreneurship. The Valley already had a tightly fraternal culture, where people were used to sharing ideas rather than clinging to them. As the space age reached its crescendo, this characteristic was reinforced by the fact that no single company could have a monopoly on the windfall. As the semiconductor market grew bigger and richer, the geographic concentration of the industry intensified. From the founding of Fairchild until man walked on the moon, nearly 90 percent of all the chip-making firms in America were located in the Santa Clara Valley.23
CHAPTER 4
Networked
The sunshine and silicon of Northern California might have drawn the attention of curious foreign leaders and dazzled reporters. Yet Boston—the city its nineteenth-century boosters crowned “the hub of the universe”—remained the Valley’s more populous and more serious older brother throughout that decade of missiles and moon shots. MIT continued to reign supreme in the computer-research world; spinoff companies from the university labs of Cambridge turned Boston into a start-up hub well before California earned that reputation. The big business of mainframe computing—one so dominated by IBM that its other market competitors were referred to as “the Seven Dwarfs”—remained rooted in the eastern half of the continent. It was also back East that other seminal technological developments of the space-age Sixties emerged. First came the minicomputer, which shrank down digital machines to a more manageable and relatively affordable size. Then, aided in part by the spread of the minicomputer, came the earliest networks that turned computers into tools of communication as well as calculation.
Although minis and networks weren’t born in Northern California, their widespread adoption over the course of the decade vastly enlarged and diversified the world of computing. In doing so, they further altered the geography of the tech business. No longer was computer power restricted to very large and deep-pocketed institutions and a small priesthood of technical specialists. No longer was the game only one of selling electronic hardware (both whole computers and the components for them). It was about building and selling and distributing operating platforms and software services, as well as networks and devices to support communication.
The electronics business truly became an information business in the 1960s, creating new markets, engaging new users. And those first years—while decades ahead of the eventual explosion of online commercial platforms and software—set the stage for the Santa Clara Valley to one day become the most networked place of them all.
THINKING SMALL
But first: back to Boston, where, six months before Sputnik, two MIT researchers got an entrepreneurial itch to strike out on their own. Ken Olsen was one of those lucky young men who’d gotten in on the ground floor of some of the most exciting developments in digital computing, thanks to the public money coming into university labs during and after World War II. The glacial pace of academia, however, wasn’t a good fit for Olsen, an inveterate tinkerer who’d spent his boyhood summers working in a machine shop and fixing radios in his basement. Partnering up in early 1957 with a fellow researcher, Harlan Anderson, Olsen secured $70,000 from Harvard Business School professor Georges Doriot, who had established a pioneering investment fund to practice what he called “venture capital”: backing young and untested entrepreneurs. Olsen and Anderson departed MIT and moved into a shuttered textile mill in the old industrial town of Maynard. Their new company, Digital Equipment Corporation, was open for business.
Digital sold a new breed of computer, transistorized and programmable, a fraction of the size and price of mainframes. Its power source was Noyce and Kilby’s integrated circuit, made newly affordable by the scale and scope of the space program. Olsen called it the “programmed data processor,” or PDP. Retailing for under $20,000 at a time when mainframes could cost over $1 million, the mini officially ushered in the next generation of digital computing.1
The refrigerator-sized machine was still a far cry from the personal, desktop computers to come, but it brought its users in from the remote and chilly world of punch cards and batch processing, allowing them to program and perform computer operations in real time. By the decade’s end, the IC-powered model Digital introduced in 1965, the PDP-8, had become a familiar sight in computer labs across the country, and a gateway to the digital world for an entire generation of tinkerers and homebrewers and future Ken Olsens: college kids and grad students playing computer games after hours, young coders writing their first programs, schoolchildren who became test subjects for early educational software.
Just as Fairchild gave birth to the chip business in the Valley, Digital turned Boston into the capital of minicomputing, an industry that would employ hundreds of thousands and generate billions in revenue for over two decades. In 1968, a key engineer on the PDP-8 project departed Digital to found another minicomputer company, Data General. Four years later came Prime Computer, half an hour’s drive down the road in Natick, another aging industrial town. In crumbling brick factories that once had pumped out boots and blankets for Civil War soldiers, the mini makers of Massachusetts built machines that ushered in a second, transistorized generation of digital computing, hacking away at IBM’s dominance of the market more than most of its mainframe competitors ever could.
Into the stuffy confines of the Boston business world, the minicomputer companies imported some of the renegade, improvisational spirit of the postwar electronics lab—the same spirit that was the hallmark of California companies from HP to Fairchild and its children. Ken Olsen hadn’t worked anywhere but academia before he founded his company. He had limited patience for management experts and blue-suited sales types. A devout churchgoer with little interest in the trappings of great wealth, he liked to spend his off hours paddling quietly on New England ponds in his favorite canoe. Olsen considered himself “a scientist first and foremost.”
While this too often made Olsen tone-deaf to the nuances of the business and consumer markets, it made his company remarkably effective at building the products that scientists and engineers needed. By the early 1970s, Digital had become the third-largest computer maker in the world. Doriot’s $70,000 investment—for which he obtained a 70 percent stake in the company—was worth $350 million. It was one of the most lucrative deals in high-tech history.2
SHARING TIME
About the same time that Ken Olsen set up shop in his Maynard mill, John McCarthy began musing about a better way to distribute computer power.
The Boston-born and Los Angeles–raised son of a labor organizer, McCarthy had the soul of a radical and the mind of a scientist. He had graduated from high school two years ahead of schedule and earned degrees from Caltech and Princeton before joining the faculty of Dartmouth. Of the many scholars entranced by the computer’s potential to imitate and complement the human brain, McCarthy was the one who in 1955 put a name to the phenomenon and the field of research that rose around it: “artificial intelligence.”
By 1958, McCarthy had been scooped up by MIT, where he promptly established an Artificial Intelligence Laboratory with another rising faculty star, Marvin Minsky. The two men were close friends from graduate school and had been born within a month of each other thirty years before. Along with a shared generational sensibility, they were both convinced that computers could and should be far more than mere adding machines. But in order to accomplish that, electronic brains needed to change the way they interfaced with their human users. This meant that they had to stop making people wait their turn in line.3
For that was the reality of life in the computer world of the late 1950s. Mainframes were immensely powerful, but they could only process one batch of data at a time. Impatient researchers had to line up with punch cards in hand, then spend hours or even days waiting for an answer to their query. And, in those very buggy days of early programming, they could only run the program in the first place after they’d spent considerable time refining their instructions—another stage that took even more time and further batches of cards. There had to be a better way.
The fix, argued McCarthy, was to adapt a mainframe so that multiple users could work with the machine at once, creating a hub-and-spoke system with a mainframe at the center and user terminals at the periphery, networked by coaxial cable. Instead of waiting, users could receive results in seconds, and immediately try again if there were errors in the data. Instead of submitting punch cards, people would type instructions on terminals, in real time. “I think the proposal points to the way all computers will be operated in the future,” McCarthy wrote to the head of MIT’s computing lab in early 1959, “and we have a chance to pioneer a big step forward in the way computers are used.”4
John McCarthy was hardly the only person in Boston who was thinking about how to improve the human-computer interface. A ferment of conversation had been brewing around the issue ever since the late 1940s, when the father of cybernetics, Norbert Wiener, had led a legendary series of weekly seminars in Cambridge to debate questions of man and machine. The notion of getting computers to talk to one another wasn’t impossible to imagine: digital networking had been around nearly as long as the digital computer itself via another born-in-Boston system, the Semi-Automatic Ground Environment, or SAGE. Sponsored by the Air Force and designed at MIT’s Lincoln Laboratory, the project linked military computers to radar, creating a digital command-and-control system for air defense. First beta-tested on Cape Cod in 1953, this very first online network launched nationally in that fateful post-Sputnik summer of 1958.5
And just on the other side of town McCarthy could find a psychologist named Joseph C. R. Licklider—known to all as “Lick”—who had become consumed with questions of how the radically different timescales of computer thinking (lightning-fast) and human reaction (not so fast) might be overcome by better design. In 1960, he mused on this problem in a paper entitled “Man-Computer Symbiosis,” a short treatise that became one of the most influential and enduring documents in high-tech history. In seven pithy pages, Licklider laid out a new world order, one where men (the masculine being proxy for all genders) did the creative thinking: “set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations.” Computers would do the routinized tasks of data gathering and calculation that, according to Lick’s estimates, sucked up 85 percent of human researchers’ time. But such symbiosis would require new tools—including computer time-sharing.6