But the ALOHA system gave Metcalfe far more than a doctorate. Xerox PARC was in the process of developing one of the first personal computers, called the Alto. The company saw that customers would want to connect the machines, so Metcalfe, one of the resident networking experts, was assigned the task of tying the Altos together. Without a store-and-forward model, preventing data packets from colliding was impossible. But simply scaling down the ARPANET by building a subnet of IMP-like store-and-forward computers would have been prohibitively expensive for a system designed to work in an office building.
Metcalfe had an idea, borrowed directly from the ALOHANET—in fact, he would call it the Alto Aloha Network: Let the data packets collide, then retransmit at a random time. But Metcalfe’s idea differed in several respects from the Hawaiian system. For one thing, his network would be a thousand times faster than ALOHANET. It would also include collision detection. But perhaps most important, Metcalfe’s network would be hardwired, running not by radio waves but on cables connecting computers in different rooms, or among clusters of buildings.
One computer wishing to send a data packet to another machine—say, a desktop workstation sending to a printer—listens for traffic on the cable. If the computer detects conflicting transmissions, it waits, usually for a few thousandths of a second. When the cable is quiet, the computer begins transmitting its packet. If, during the transmission, it detects a collision, it stops and waits before trying again—usually a few hundred microseconds. In both instances, the computer chooses the delay randomly, minimizing the possibility of retrying at the same instant selected by whatever device sent the signal that caused the collision. As the network gets busier, computers back off and retry over longer random intervals. This keeps the process efficient and the channel intact.
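The retry rule is compact enough to sketch in code. Below is a minimal Python simulation of that listen-then-back-off logic; the `cable` object and its methods, the delay constants, and the attempt limit are illustrative assumptions, not the actual Xerox design parameters.

```python
import random

# Illustrative delays, following the text: a few thousandths of a
# second when the cable is busy, a few hundred microseconds after a
# collision. (Assumed values, not the actual Xerox parameters.)
BUSY_WAIT_MAX = 0.004    # seconds
COLLISION_SLOT = 0.0002  # seconds

def backoff(attempt):
    """Pick a random delay from a window that doubles with each
    collision, so a busier network spreads retries further apart."""
    window = COLLISION_SLOT * (2 ** min(attempt, 10))
    return random.uniform(0, window)

def send(cable, packet, max_attempts=16):
    """Hypothetical sender: cable.is_busy(), cable.transmit(), and
    cable.wait() stand in for the shared-medium hardware."""
    for attempt in range(max_attempts):
        # Listen before talking: defer while the cable carries traffic.
        while cable.is_busy():
            cable.wait(random.uniform(0, BUSY_WAIT_MAX))
        # Transmit, watching for a collision as the bits go out.
        if cable.transmit(packet):
            return True               # packet got through intact
        # Collision detected: stop, wait a random interval, retry.
        cable.wait(backoff(attempt + 1))
    return False                      # channel too busy; give up
```

The key design choice is that the delay window grows with each failed attempt, so the busier the cable, the more widely the contending machines scatter their retries.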
“Imagine you’re at a party and several people are standing around having a conversation,” said Butler Lampson, who helped Metcalfe develop the idea, describing the system. “One person stops talking and somebody else wants to talk. Well, there’s no guarantee that only one person wants to talk; perhaps several do. It’s not uncommon for two people to start talking at once. But what typically happens? Usually they both stop, there’s a bit of hesitation, and then one starts up again.”
Metcalfe and Lampson, along with Xerox researchers David Boggs and Chuck Thacker, built their first Alto Aloha system in Bob Taylor’s lab at Xerox PARC. To their great delight, it worked. In May 1973 Metcalfe suggested a name, recalling the luminiferous ether, the hypothetical medium nineteenth-century physicists invented to explain how light passes through empty space. He rechristened the system Ethernet.
CSNET
DCA, the ARPANET’s caretaker, wasn’t the only R&D agency in Washington to have grown bureaucratic. Nowhere in Washington could you walk into your boss’s office anymore with a bright idea for a project and walk out twenty minutes later with a million dollars in support. In the mid-1970s, the only organization that bore any resemblance to the ARPA of old was the National Science Foundation. The foundation was created in 1950 to promote progress in science by funding basic research and strengthening science education. By the late 1970s, NSF was on the rise in the computing field.
Not only was NSF now a likely source of sufficient funds, it was also the only organization whose officials could act on behalf of the entire scientific community. DARPA had provided the research base and new technology. Now NSF would carry it forward to a larger community.
Officials at NSF had been interested in creating a network for the academic computer science community for some time. In a 1974 report, an NSF advisory committee concluded that such a service “would create a frontier environment which would offer advanced communication, collaboration, and the sharing of resources among geographically separated or isolated researchers.” At that point, NSF was mostly concerned with spurring the development of what was still a fledgling discipline. Perhaps because computer science was still an emerging field on most campuses, nothing much came of the notion.
By the late 1970s, computer science departments had mushroomed. The advantages of the ARPANET were now clear. Rapid electronic communication with colleagues and easy resource-sharing meant tasks that usually took weeks could now be finished in hours. Electronic mail created a new world of fast samizdat, replacing the slow postal services and infrequent conferences. The network had become as essential to computer science research as telescopes were to astronomers.
But the ARPANET was threatening to split the community of computer researchers into haves and have-nots. In 1979 there were about 120 academic computer science departments around the country, but just fifteen of the sixty-one ARPANET sites were located at universities. Faculty candidates and graduate students alike were starting to accept or decline offers based on whether or not a school had access to the Net, putting research institutions without an ARPANET node at a disadvantage in the race to land top scholars and the research grants that followed them.
More important, an exodus of computing talent from academia to industry had caused a nationwide fear that the United States would not be able to train its next generation of computer scientists. The lure of private sector salaries was part of the problem. But scientists weren’t just being pulled into industry; they were also being pushed. Computer facilities at many universities were obsolete or underpowered, making it hard for people on campuses to stay abreast of the rapidly changing computer field.
Little could be done about the salary discrepancies between academia and industry. But the resource problem was essentially the same one that DARPA had faced a decade earlier. A network for computer scientists would reduce the need for duplicative efforts. And if this network was open to private research sites, there would be less pressure on researchers to leave the universities in order to keep up with their discipline.
Clear though the solution seemed, implementing it proved another matter. Linking the computer science departments to the ARPANET was out of the question. To be assigned a site, universities had to be involved in specific kinds of government-funded research, typically defense-related. Even then, it was costly to allocate new sites. ARPANET connections came in one size only: extra large. The system used costly leased telephone lines, and each node had to maintain two or more links to other sites. As a result, maintaining an ARPANET site cost more than $100,000 each year, regardless of the traffic it generated.
The computer scientists had to invent another way. In May 1979, Larry Landweber, head of the computer science department at the University of Wisconsin, invited representatives of six universities to Madison to discuss the possibility of building a new Computer Science Research Network, to be called CSNET. Although DARPA couldn’t provide financial support, the agency sent Bob Kahn to the meeting as an advisor. NSF, which had raised the academic network issue five years earlier, sent Kent Curtis, the head of its computer research division. After the meeting, Landweber spent the summer working with Peter Denning from Purdue, Dave Farber from the University of Delaware, and Tony Hearn, who had recently left the University of Utah for the RAND Corporation, to flesh out a detailed proposal for the new network.
Their proposal called for a network open to computer science researchers in academia, government, and industry. The underlying medium would be a commercial service provider like TELENET. Because CSNET would use slower links than the ARPANET’s and would not insist on redundant linkages, the system would be far less expensive. The network would be run by a consortium of eleven universities, at an estimated five-year cost of $3 million. Because of the DCA policy restricting ARPANET access to DOD contractors only, the proposal contained no gateway between the two networks. A draft of the proposal circulated by the group received enthusiastic praise. They sent the final version to NSF in November 1979.
But after nearly four months of peer review, NSF rejected the proposal, though it remained enthusiastic about the CSNET idea. To address the deficiencies its reviewers had found in the draft, NSF sponsored a workshop. Landweber and company returned to their drawing boards.
In the summer of 1980, Landweber’s committee came back with a way to tailor the architecture of CSNET to provide affordable access to even the smallest lab. They proposed a three-tiered structure involving ARPANET, a TELENET-based system, and an e-mail-only service called PhoneNet. Gateways would connect the tiers into a seamless whole.
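A toy model makes the tiering concrete. In the Python sketch below, the site names and the single-hop relay logic are invented for illustration; the real gateways also had to translate between protocols and mail formats.

```python
# Toy model of CSNET's three tiers. Sites on different tiers reach
# one another by relaying through a gateway. (Site names invented.)
TIERS = {
    "arpanet":  {"mit", "isi"},
    "telenet":  {"wisconsin", "purdue"},
    "phonenet": {"small-college"},
}

def tier_of(site):
    return next(t for t, members in TIERS.items() if site in members)

def mail_route(sender, recipient):
    src, dst = tier_of(sender), tier_of(recipient)
    if src == dst:
        return [sender, recipient]      # same tier: deliver directly
    # Different tiers: hand the message to the inter-tier gateway.
    return [sender, f"{src}/{dst} gateway", recipient]

print(mail_route("small-college", "mit"))
# ['small-college', 'phonenet/arpanet gateway', 'mit']
```

The point of the design is visible in the sketch: a lab that could afford only PhoneNet still reached every other site, just with an extra hop.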
Under the new proposal, NSF would support CSNET for a five-year startup period, after which it was to be fully funded by user fees. A university’s annual costs, a combination of dues and connection charges, ranged from a few thousand dollars for PhoneNet service (mostly for the long-distance phone connections) to $21,000 for a TELENET site.
As to how the network would be managed—a concern of the National Science Board, the NSF’s governing body—the plan took a novel approach. For the first two years, the NSF itself would play the role of manager for the university consortium. After that, responsibility would be handed off to the University Corporation for Atmospheric Research. UCAR was familiar with advanced computing work and had the expertise to handle a project involving so many academic institutions. More important, the Science Board knew UCAR and trusted its management skills. The Board agreed to provide nearly $5 million for the CSNET project.
By June 1983, more than seventy sites were on-line, obtaining full services and paying annual dues. At the end of the five-year period of NSF support in 1986, nearly all the country’s computer science departments, as well as a large number of private computer research sites, were connected. The network was financially stable and self-sufficient.
The experience that NSF gained in the process of starting up CSNET paved the way for more NSF ventures in computer networking.
In the mid-1980s, on the heels of CSNET’s success, more networks began to emerge. One, called BITNET (the Because It’s Time Network), was a cooperative network among IBM systems with no restrictions on membership. Another, called UUCP, was built at Bell Laboratories for file transfer and remote-command execution. USENET, which began in 1980 as a means of communication between two machines (one at the University of North Carolina and one at Duke University), blossomed into a distributed news network using UUCP. NASA had its own network called the Space Physics Analysis Network, or SPAN. Because this growing conglomeration of networks was able to communicate using the TCP/IP protocols, the collection of networks gradually came to be called the “Internet,” borrowing the first word of “Internet Protocol.”
By now, a distinction had emerged between “internet” with a small i, and “Internet” with a capital I. Officially, the distinction was simple: “internet” meant any network using TCP/IP while “Internet” meant the public, federally subsidized network that was made up of many linked networks all running the TCP/IP protocols. Roughly speaking, an “internet” is private and the “Internet” is public. The distinction didn’t really matter until the mid-1980s when router vendors began to sell equipment to construct private internets. But the distinction quickly blurred as the private internets built gateways to the public Internet.
At around the same time, private corporations and research institutions were building networks that used TCP/IP. The market opened up for routers. Gateways were the internetworking variation on IMPs, while routers were the mass-produced version of gateways, hooking local area networks to the ARPANET. Sometime in the early 1980s a marketing vice president at BBN was approached by Alex McKenzie and another BBN engineer who thought the company should get into the business of building routers. It made sense. BBN had built the IMPs and the TIPs, and even the first gateway for the Internet as part of the packet radio program. But the marketing man, after doing some quick calculations in his head, decided there wasn’t much promise in routers. He was wrong.
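The job those routers perform is simple to state: examine each packet’s destination address and forward it toward the best match in a table. Here is a minimal sketch of that forwarding decision; the table entries are made up, and it uses modern-style prefixes for clarity (routers of that era matched on class A/B/C network numbers instead).

```python
import ipaddress

# Made-up forwarding table: destination prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):     "arpanet-gateway",
    ipaddress.ip_network("192.168.1.0/24"): "local-ethernet",
    ipaddress.ip_network("0.0.0.0/0"):      "default-route",
}

def next_hop(destination: str) -> str:
    """Forward toward the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.3.2.1"))      # -> arpanet-gateway
print(next_hop("192.168.1.77"))  # -> local-ethernet
print(next_hop("8.8.8.8"))       # -> default-route
```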
Also in the middle of the 1980s, academic research networks sprang to life in Europe and beyond; in Canada there was CDNet. Gradually, however, each network built a gateway to the U.S. government-sponsored Internet, and borders began to dissolve. And gradually the Internet came to mean the loose matrix of interconnected TCP/IP networks worldwide.
By now, all research scientists with NSF support—not just computer scientists, but oceanographers, astronomers, chemists, and others—came to believe they were at a competitive disadvantage unless they had network access. And CSNET, which was intended for the computer science community alone, wasn’t the answer. But CSNET was the stepping stone to NSF’s major accomplishment, NSFNET.
The model of CSNET convinced NSF of the importance of networking to the scientific community. The professional advantages to be gained from the ability to communicate with one’s peers were incalculable. And since the agency had been working so closely with the computer scientists, it had a number of people internally who understood networking and were able to help manage programs. But NSF didn’t have the means to build a national network. Maintaining the ARPANET alone cost millions of dollars a year.
The creation in 1985 of five supercomputer centers scattered around the United States offered a solution. Physicists and others were agitating for a “backbone” to interconnect the supercomputer centers. The NSF agreed to build the backbone network, to be called NSFNET. At the same time, the agency made an offer: if the academic institutions in a geographic region put together a community network, NSF would give that community network access to the backbone. The idea was not only to offer access to the backbone but also to give the regional networks access to one another. With this arrangement, any computer could communicate with any other through a series of links.
In response, a dozen or so regional networks were formed around the country. Each had the exclusive franchise in that region to connect to the NSFNET backbone. In Upstate New York, NYSERNET (for New York State Educational Research Network) was formed. In San Diego there was the California Educational Research Network, or CERFnet (although Vint Cerf had no relationship to the network, the CERFnet founders invited him to its inauguration). The funding for the regional networks would come from the member organizations themselves. The NSF provided the backbone as essentially a “free good” to the academic community in the sense that the regional networks didn’t pay to use it. On the other hand, NSF grants to universities to connect their campuses to the regional network were always two-year, strictly nonrenewable grants. This meant that after two years, universities were paying the cost of the regional connection out of their own pockets. Typical charges were between $20,000 and $50,000 per year for a high-speed connection.
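The resulting topology was a simple two-level hierarchy, and the “series of links” is easy to trace in a toy graph. In the sketch below, the campus names are invented; NYSERNET, CERFnet, and the backbone are from the text.

```python
from collections import deque

# Toy graph of the NSFNET arrangement: campuses attach to a regional
# network, and each regional attaches to the backbone. Campus names
# are invented for illustration.
LINKS = {
    "cornell":  {"nysernet"},
    "ucsd":     {"cerfnet"},
    "nysernet": {"cornell", "backbone"},
    "cerfnet":  {"ucsd", "backbone"},
    "backbone": {"nysernet", "cerfnet"},
}

def path(src, dst):
    """Breadth-first search: any node reaches any other through a
    series of links, which was the point of the arrangement."""
    queue, seen = deque([[src]]), {src}
    while queue:
        route = queue.popleft()
        if route[-1] == dst:
            return route
        for hop in LINKS.get(route[-1], ()):
            if hop not in seen:
                seen.add(hop)
                queue.append(route + [hop])
    return None

print(path("cornell", "ucsd"))
# ['cornell', 'nysernet', 'backbone', 'cerfnet', 'ucsd']
```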
TCP/IP versus OSI
In 1982 Vint Cerf announced that he was going to leave ARPA to take a job at MCI. Earlier that year he had met an MCI executive whose job was to get MCI into the data business. “His idea was to build a digital post office,” Cerf recalled. “I was immediately grabbed by the idea.” The reaction to Cerf’s leaving was shock. One colleague cried. “Vint was as close to a general as we had,” said another.
Cerf was leaving at a critical time for the network. The ARPANET was about to make its official transition to TCP/IP, but no one knew for certain whether the U.S. Government was serious about embracing it. The Defense Department had endorsed TCP/IP, but the civilian branch of the government had not. And there was mounting concern that the National Bureau of Standards would decide to support an emergent rival standard for network interconnection called the OSI Reference Model.
Several years earlier, the International Organization for Standardization, ISO, had begun to develop its own internetworking “reference” model, called OSI, or open-systems interconnection. Since the 1940s, ISO had specified worldwide standards for things ranging from wine-tasting glasses to credit cards to photographic film to computers. They hoped their OSI model would become as ubiquitous to computers as double-A batteries were to portable radios.
A battle of sorts was forming along familiar lines, recalling the confrontation between AT&T and the inventors of packet-switching during the birth of the ARPANET. On the OSI side stood entrenched bureaucracy, with a strong we-know-best attitude, patronizing and occasionally contemptuous. “There was a certain attitude among certain parts of the OSI community whose message was, ‘Time to roll up your toy academic network,’” recalled one ardent TCP/IP devotee. “They thought TCP/IP and Internet were just that—an academic toy.” No one ever claimed that what had started with the Network Working Group and continued throughout the academic community for years had been anything but ad hoc. Someone had written the first RFC in a bathroom, for heaven’s sake. Not only had the RFC series never been officially commissioned by ARPA, but some of the RFCs were, quite literally, jokes.
But the Internet community—people like Cerf and Kahn and Postel, who had spent years working on TCP/IP—opposed the OSI model from the start. First there were the technical differences, chief among them that OSI had a more complicated and compartmentalized design. And it was a design, never tried. As far as the Internet crowd was concerned, they had actually implemented TCP/IP several times over, whereas the OSI model had never been put to the tests of daily use, and trial and error.
In fact, as far as the Internet community was concerned, the OSI model was nothing but a collection of abstractions. “Everything about OSI was described in a very abstract, academic way,” Cerf said. “The language they used was turgid beyond belief. You couldn’t read an OSI document if your life depended on it.”
TCP/IP, on the other hand, reflected experience. It was up and running on an actual network. “We could try things out,” Cerf said. “In fact we felt compelled to try things out, because in the end there was no point in specifying something if you weren’t going to build it. We had this constant pragmatic feedback about whether things worked or didn’t.”