Where Wizards Stay Up Late: The Origins of the Internet


by Katie Hafner and Matthew Lyon


  The technical lessons of radio and satellite experiments were less significant than the broader networking ideas they inspired. It was obvious there would be more networks. Several foreign governments were building data systems, and a growing number of large corporations were starting to develop networking ideas of their own. Kahn began wondering about the possibility of linking the different networks together.

  The problem first occurred to him as he was working on the packet-radio project in 1972. “My first question was, ‘How am I going to link this packet-radio system to any computational resources of interest?’” Kahn said. “Well, my answer was, ‘Let’s link it to the ARPANET,’ except that these were two radically different networks in many ways.” The following year, another ARPA effort, called the Internetting Project, was born.

  By the time of the 1972 ICCC demonstration in Washington, the leaders of several national networking projects had formed an International Network Working Group (INWG), with Vint Cerf in charge. Packet-switching network projects in France and England were producing favorable results. Donald Davies’s work at the U.K.’s National Physical Laboratory was coming along splendidly. In France, a computer scientist named Louis Pouzin was building Cyclades, a French version of the ARPANET. Both Pouzin and Davies had attended the ICCC demonstration in Washington. “The spirit after ICCC,” said Alex McKenzie, BBN’s representative to the INWG, “was, ‘We’ve shown that packet-switching really works nationally. Let’s take the lead in creating an international network of networks.’”

  Larry Roberts was enthusiastic about INWG because he wanted to extend the reach of the ARPANET beyond the DARPA-funded world. The British and the French were equally eager to expand the reach of their national research networks. “Developing network-interconnection technology was a way to realize that,” said McKenzie. The INWG began pursuing what they called a “Concatenated Network,” or CATENET for short, a transparent interconnection of networks of disparate technologies and speeds.

  An Internet

  The collaboration that Bob Kahn would characterize years later as the most satisfying of his professional career took place over several months in 1973. Kahn and Vint Cerf had first met during the weeks of testing at UCLA in early 1970, when they had forced the newborn ARPANET into catatonia by overloading the IMPs with test traffic. They had remained close colleagues, and now both were thinking extensively about what it would take to create a seamless connection among different networks. “Around this time,” Cerf recalled, “Bob started saying, ‘Look, my problem is how I get a computer that’s on a satellite net and a computer on a radio net and a computer on the ARPANET to communicate uniformly with each other without realizing what’s going on in between?’” Cerf was intrigued by the problem.

  Sometime during the spring of 1973, Cerf was attending a conference at a San Francisco hotel, sitting in the lobby waiting for a session to start when he began doodling out some ideas. By now he and Kahn had been talking for several months about what it would take to build a network of networks, and they had both been exchanging ideas with other members of the International Network Working Group. It occurred to Cerf and Kahn that what they needed was a “gateway,” a routing computer standing between each of these various networks to hand off messages from one system to the other. But this was easier said than done. “We knew we couldn’t change any of the packet nets themselves,” Cerf said. “They did whatever they did because they were optimized for that environment.” As far as each net was concerned, the gateway had to look like an ordinary host.

  While waiting in the lobby, he drew this diagram:

  [Diagram: reproduction of early Internet design ideas]

  “Our thought was that, clearly, each gateway had to know how to talk to each network that it was connected to,” Cerf said. “Say you’re connecting the packet-radio net with the ARPANET. The gateway machine has software in it that makes it look like a host to the ARPANET IMPs. But it also looks like a host on the packet-radio network.”
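
  What Cerf is describing can be sketched in a few lines of modern Python. Nothing below is historical code; the frame formats and function names are invented purely to illustrate the trick of a gateway posing as an ordinary host on each network it joins.

    # A hypothetical sketch of the gateway idea: the gateway implements
    # each attached network's native packet format, so the ARPANET sees
    # an ordinary host, and so does the packet-radio net. Both formats
    # here are invented for illustration, not drawn from either network.

    def from_arpanet(frame: bytes) -> bytes:
        """Unwrap an invented ARPANET-style frame and return its payload."""
        header_len = frame[0]          # first byte: length of the header
        return frame[header_len:]      # strip the net-specific wrapping

    def to_packet_radio(payload: bytes) -> bytes:
        """Wrap a payload in an invented packet-radio-style frame."""
        radio_header = b"\x02\x7f"     # made-up two-byte radio header
        return radio_header + payload

    def gateway(frame: bytes) -> bytes:
        """Hand a frame from the ARPANET side to the radio side.

        The payload is never interpreted; only the per-network wrapping
        changes, which is why the nets themselves could stay unmodified.
        """
        return to_packet_radio(from_arpanet(frame))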

  With the notion of a gateway now defined, the next puzzle was packet transmission. As with the ARPANET, the actual path the packets traveled in an internet should be immaterial. What mattered most was that the packets arrive intact. But there was a vexing problem: all these networks—packet radio, SATNET, and the ARPANET—had different interfaces, different maximum packet sizes, and different transmission rates. How could all those differences be standardized in order to shuttle packets among networks? A second question concerned the reliability of the networks. The dynamics of radio and satellite transmission wouldn’t permit the kind of reliability that had been so laboriously built into the ARPANET. The Americans looked to Pouzin in France, who had deliberately chosen an approach for Cyclades that shifted the burden of reliability onto the hosts, requiring them, rather than the network nodes, to recover from transmission errors.
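
  The packet-size mismatch alone gives a concrete feel for the problem. A minimal sketch, with sizes invented for illustration, of the fragmentation a gateway or host would have to perform when a message crosses onto a network with a smaller maximum packet size:

    def fragment(message: bytes, max_size: int) -> list[bytes]:
        """Split a message into pieces no larger than the next network's
        maximum packet size; the receiving host must reassemble them."""
        return [message[i:i + max_size]
                for i in range(0, len(message), max_size)]

    # A 1,000-byte message entering a net with a (hypothetical) 128-byte
    # limit becomes eight packets.
    pieces = fragment(b"x" * 1000, 128)
    assert len(pieces) == 8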

  It was clear that the host-to-host Network Control Protocol, which was designed to match the specifications of the ARPANET, would have to be replaced by a more independent protocol. The challenge for the International Network Working Group was to devise protocols that could cope with autonomous networks operating under their own rules, while still establishing standards that would allow hosts on the different networks to talk to each other. For example, CATENET would remain a system of independently administered networks, each run by its own people with its own rules. But when the time came for one network to exchange data with, say, the ARPANET, the internetworking protocols would operate. The gateway computers handling the transmission needn’t care about the local complexity buried inside each network. Their only task would be to get packets through the network to the destination host on the other side, making a so-called end-to-end link.

  Once the conceptual framework was established, Cerf and Kahn spent the spring and summer of 1973 working out the details. Cerf presented the problem to his Stanford graduate students, and he and Kahn joined them in attacking it. They held a seminar that concentrated on the details of developing the host-to-host protocol into a standard allowing data traffic to flow across networks. The Stanford seminar helped frame key issues, and laid the foundation for solutions that would emerge several years later.

  Cerf frequently visited the DARPA offices in Arlington, Virginia, where he and Kahn discussed the problem for hours on end. During one marathon session, the two stayed up all night, alternately scribbling on Kahn’s chalkboard and pacing through the deserted suburban streets, before ending up at the local Marriott for breakfast. They began collaborating on a paper and conducted their next marathon session in Cerf’s neighborhood, working straight through the night at the Hyatt in Palo Alto.

  That September, Kahn and Cerf presented their paper along with their ideas about the new protocol to the International Network Working Group, meeting concurrently with a communications conference at the University of Sussex in Brighton. Cerf was late arriving in England because his first child had just been born. “I arrived in midsession and was greeted by applause because word of the birth had preceded me by e-mail,” Cerf recalled. During the Sussex meeting, Cerf outlined the ideas he and Kahn and the Stanford seminar had generated. The ideas were refined further in Sussex, in long discussions with researchers from Davies’s and Pouzin’s laboratories.

  When Kahn and Cerf returned from England, they refined their paper. Both men had a stubborn side. “We’d get into this argumentative state, then step back and say, ‘Let’s find out what it is we’re actually arguing about.’” Cerf liked to have everything organized before starting to write; Kahn preferred to sit down and write everything he could think of, in his own logical order; reorganization came later. The collaborative writing process was intense. Recalled Cerf: “It was one of us typing and the other one breathing down his neck, composing as we’d go along, almost like two hands on a pen.”

  By the end of 1973, Cerf and Kahn had completed their paper, “A Protocol for Packet Network Intercommunication.” They flipped a coin to determine whose name should appear first, and Cerf won the toss. The paper appeared in a widely read engineering journal the following spring.

  Like Roberts’s first paper outlining the proposed ARPANET seven years earlier, the Cerf-Kahn paper of May 1974 described something revolutionary. Under the framework described in the paper, messages would be encapsulated and decapsulated in “datagrams,” much as a letter is put into and taken out of an envelope, and sent as end-to-end packets. These messages would be called transmission-control protocol, or TCP, messages. The paper also introduced the notion of gateways, which would read only the envelope; the contents would be read only by the receiving hosts.
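
  The envelope metaphor translates naturally into a data structure. A hypothetical sketch, with field names invented here rather than taken from the 1974 paper:

    from dataclasses import dataclass

    @dataclass
    class Datagram:
        """The 'envelope': gateways read only the addressing fields,
        while the payload stays opaque until the receiving host opens it."""
        source: str        # sending host, e.g. "arpanet/ucla"
        destination: str   # receiving host, e.g. "radio-net/sri-van"
        payload: bytes     # the message contents, untouched in transit

    def route(dgram: Datagram) -> str:
        """A gateway's forwarding decision uses the envelope alone."""
        network, _, _ = dgram.destination.partition("/")
        return network     # which attached network to forward onto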

  The TCP protocol also tackled the network reliability issues. In the ARPANET, the destination IMP was responsible for reassembling all the packets of a message as they arrived. The IMPs worked hard to make sure all the packets of a message got through the network, using hop-by-hop acknowledgments and retransmission. The IMPs also made sure separate messages were kept in order. Because of all this work done by the IMPs, the old Network Control Protocol was built around the assumption that the underlying network was completely reliable.

  The new transmission-control protocol, with a bow to Cyclades, assumed that the CATENET was completely unreliable. Units of information could be lost; others might be duplicated. If a packet failed to arrive or was garbled during transmission, and the sending host received no acknowledgment, an identical twin was transmitted.

  The overall idea behind the new protocol was to shift the reliability from the network to the destination hosts. “We focused on end-to-end reliability,” Cerf recalled. “Don’t rely on anything inside those nets. The only thing that we ask the net to do is to take this chunk of bits and get it across the network. That’s all we ask. Just take this datagram and do your best to deliver it.”
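
  That end-to-end discipline reduces to a simple loop on the sending host. A sketch, assuming hypothetical stand-ins send() and ack_received() for the host’s actual transmission and acknowledgment machinery:

    import time

    def send_reliably(packet: bytes, send, ack_received,
                      timeout: float = 2.0) -> None:
        """Retransmit until the destination host acknowledges receipt.

        The network is asked only to do its best; if a packet or its
        acknowledgment is lost, an identical twin is sent.
        """
        while True:
            send(packet)                       # best-effort handoff
            deadline = time.time() + timeout
            while time.time() < deadline:
                if ack_received():             # destination confirmed
                    return
                time.sleep(0.05)               # poll for the acknowledgment
            # timeout expired with no acknowledgment: transmit again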

  The new scheme worked in much the same way that shipping containers are used to transfer goods. The boxes have a standard size and shape. They can be filled with anything from televisions to underwear to automobiles—content doesn’t matter. They move by ship, rail, or truck. A typical container of freight travels by all three modes at various stages to reach its destination. The only thing necessary to ensure cross-compatibility is the specialized equipment used to transfer the containers from one mode of transport to the next. The cargo itself doesn’t leave the container until it reaches its destination.

  The invention of TCP would be absolutely crucial to networking. Without TCP, communication across networks couldn’t happen. If TCP could be perfected, anyone could build a network of any size or form, and as long as that network had a gateway computer that could interpret and route packets, it could communicate with any other network. With TCP on the horizon, it was now obvious that networking had a future well beyond the experimental ARPANET. People were beginning to grasp the potential power and reach of what Cerf and Kahn, Louis Pouzin in France, and others were inventing. If they could work out all the details, TCP might be the mechanism that would open up worlds.

  As more resources became available over the ARPANET and as more people at the sites became familiar with them, Net usage crept upward. For news of the world, early Net regulars logged on to a machine at SRI, which was connected to the Associated Press news wire. During peak times, MIT students logged on at some other computer on the Net to get their work done. Acoustic and holographic images produced at UC Santa Barbara were digitized on machines at USC and brought back over the Net to an image processor at UCSB, where they could be manipulated further. The lab at UCSB was outfitted with custom-built image-processing equipment, and UCSB researchers translated high-level mathematics into graphical output for other sites. By August 1973, while TCP was still in the design phase, traffic had grown to a daily average of 3.2 million packets.

  From 1973 to 1975, the Net expanded at the rate of about one new node each month. Growth was proceeding in line with Larry Roberts’s original vision, in which the network was deliberately laden with large resource providers. In this respect, DARPA had succeeded wonderfully. But the effect was an imbalance of supply and demand; there were too many resource providers, and not enough customers. The introduction of terminal IMPs, first at Mitre, then at NASA’s Ames Research Center and the National Bureau of Standards with up to sixty-three terminals each, helped right the balance. Access at the host sites themselves was loosening up. The host machine at UCSB, for example, was linked to minicomputers in the political science, physics, and chemistry departments. Similar patterns were unfolding across the network map.

  Like most of the early ARPANET host sites, the Center for Advanced Computation at the University of Illinois was chosen primarily for the resources it would be able to offer other Net users. At the time Roberts was mapping out the network, Illinois was slated to become home to the powerful new ILLIAC IV, a massive, one-of-a-kind high-speed computer under construction at the Burroughs Corporation in Paoli, Pennsylvania. The machine was guaranteed to attract researchers from around the country.

  An unexpected twist of circumstances, however, led the University of Illinois to become the network’s first large-scale consumer instead of a resource supplier. Students on the Urbana campus were convinced the ILLIAC IV was going to be used to simulate bombing scenarios for the Vietnam War and to perform top-secret research on campus. As campus protests erupted over the impending installation, university officials grew concerned about their ability to protect the ILLIAC IV. When Burroughs finished construction of the machine, it was sent to a more secure facility run by NASA.

  But the Center for Advanced Computation already had its IMP and full access to the network. Researchers there took quickly to the newfound ability to exploit remote computing resources—so quickly, in fact, that the Center terminated the $40,000 monthly lease on its own high-powered Burroughs B6700. In its place, the university began contracting for computer services over the ARPANET. By doing this, the computation center cut its computer bill nearly in half. This was the economy of scale envisioned by Roberts, taken to a level beyond anyone’s expectations. Soon, the Center was obtaining more than 90 percent of its computer resources through the network.

  Large databases scattered across the Net were growing in popularity. The Computer Corporation of America had a machine called the Datacomputer that was essentially an information warehouse, with weather and seismic data fed into the machine around the clock. Hundreds of people logged in every week, making it the busiest site on the network for several years.

  Abetted by the new troves of data, the ARPANET was beginning to attract the attention of computer researchers from a variety of fields. Access to the Net was still limited to sites with DARPA contracts, but the diversity of users at those sites was nonetheless creating a community of users distinct from the engineers and computer scientists who built the ARPANET. Programmers helping to design medical studies could tie in to the National Library of Medicine’s rich MEDLINE database. The UCLA School of Public Health set up an experimental database of mental health program evaluations.

  To serve the growing user community, SRI researchers established a unique resource called the ARPANET News in March 1973. Distributed monthly in ink-on-paper form, the journal was also available over the Net. A mix of conference listings, site updates, and abstracts of technical papers, the newsletter read like small-town gossip riddled with computer jargon. One of the more important items in the ARPANET News was the “Featured Site” series, in which system managers from the growing list of host computers described what they had to offer. In May 1973 Case Western Reserve University, which was selling computer services to network users, described its PDP-10 in terms that sounded altogether like an ad from the Personals section: “Case is open to collaborative propositions involving barters of time with other sites for work related to interests here, and sales of time as a service.”

  Communicating by computer and using remote resources were still cumbersome processes. For the most part, the Net remained a user-hostile environment, requiring relatively sophisticated programming knowledge and an understanding of the diverse systems running on the hosts. Demand was growing among users for “higher-level” application programs aimed at helping them tap into the variety of resources now available. The file-transfer and Telnet programs existed, but the user community wanted more tools, such as common editors and accounting schemes.

  SRI’s Network Information Center estimated the number of users at about two thousand. But a newly formed users’ interest group, called USING, was convinced there was a gap between the design of the network resources and the needs of the people trying to use those resources. Envisioning itself as a lobby group, a consumers’ union even, USING began immediately to draw up plans and recommendations for improving the delivery of computer services over the ARPANET.

  But DARPA saw no need to share authority with a tiny self-appointed watchdog group made up of people the agency viewed as passengers on its experimental vehicle. The initiative died after about nine months with a terse memo from a DARPA program manager named Craig Fields, warning the group that it had overstepped its bounds. With neither funding nor official support for their effort forthcoming, members put USING into a state of suspended animation from which it never emerged.

  Other problems developed for DARPA as the profile of the network began to rise. Like the USING insurgency, most were relatively minor affairs. But together they illustrated the ongoing tensions related to DARPA’s stewardship of the network. One area of tension had to do with DARPA’s Pentagon masters. IPTO in particular managed to steer clear of the most blatantly military research. But while the Illinois students were wrong about the ILLIAC IV being used for simulated bombing missions against North Vietnam, there were plans to use it for nuclear attack scenarios against the Soviet Union. Similarly, researchers of all sorts used seismic information stored on the Computer Corporation of America (CCA) database server, information that was being collected to support Pentagon projects involving underground atomic testing.

 
