
Where Wizards Stay Up Late


by Katie Hafner and Matthew Lyon


  Baran wasn’t the first at RAND to think about this problem. In fact, it was RAND’s stock in trade to study such things. RAND had been set up in 1946 to preserve the nation’s operations research capability developed during World War II. Most of its contracts came from the Air Force. The problem of the communications system’s survivability was something that RAND’s communications division was working on, but with limited success. Baran was one of the first to determine, at least on a theoretical level, that the problem was indeed solvable. And he was unquestionably the first to see that the way to solve it was by applying digital computer technology.

  Few of the electronics experts in other departments at RAND knew much about the emerging field of digital computer technology, and even fewer seemed interested. Baran recalled his sense of how different his own thinking was from theirs: “Many of the things I thought possible would tend to sound like utter nonsense, or impractical, depending on the generosity of spirit in those brought up in an earlier world.” And it wasn’t just his colleagues at RAND who cast a skeptical eye on Baran’s thinking. The traditional communications community at large quickly dismissed his ideas as not merely unorthodox but untenable.

  Instead of shying away, Baran just dove deeper into his work. RAND allowed investigators sufficient freedom to pursue their own ideas, and by late 1960 Baran’s interest in and knowledge of networks had grown into a small independent project. Convinced of the merit of his ideas, he embarked on writing a series of comprehensive technical papers to respond to objections previously raised and explain in increasing detail what he was proposing. The work, as he explained years later, was done not out of intellectual curiosity or any desire to publish. “It was done in response to the most dangerous situation that ever existed,” he said.

  At the Pentagon, Baran found planners who were thinking in unemotional terms about postattack scenarios and making quantitative estimates of the destruction that would result from a Soviet nuclear ballistic missile attack. “The possibility of a war exists, but there is much that can be done to minimize the consequences,” Baran wrote. “If war does not mean the end of the earth in a black-and-white manner, then it follows that we should do those things that make the shade of gray as light as possible: to plan now to minimize potential destruction and to do all those things necessary to permit the survivors of the holocaust to shuck their ashes and reconstruct the economy swiftly.”

  Baran’s first paper revealed glimpses of his nascent, revolutionary ideas about the theory and structure of communications networks. He had arrived tentatively at the notion that a data network could be made more robust and reliable by introducing higher levels of redundancy. Computers were key. Independently of Licklider and others in computing’s avant-garde, Baran saw well beyond mainstream computing, to the future of digital technologies and the symbiosis between humans and machines.

  Baran was working on the problem of how to build communications structures whose surviving components could continue to function as a cohesive entity after other pieces were destroyed. He had long talks with Warren McCulloch, an eminent psychiatrist at MIT’s Research Laboratory of Electronics. They discussed the brain, its neural net structures, and what happens when some portion is diseased, particularly how brain functions can sometimes recover by sidestepping a dysfunctional region. “Well, gee, you know,” Baran remembered thinking, “the brain seems to have some of the properties that one would need for real stability.” It struck him as significant that brain functions didn’t rely on a single, unique, dedicated set of cells. This is why damaged cells can be bypassed as neural nets re-create themselves over new pathways in the brain.

  The notion of dividing a single large vulnerable structure into many parts, as a defense mechanism, can be seen in many other applications. The concept is not entirely dissimilar to the idea of segmented or compartmentalized structures used in modern ship hulls or gasoline tanker trucks. If only one or two areas of the skin are ruptured, only a section of the overall structure loses its utility, not the whole thing. Some terrorist groups and espionage operations employ a similar kind of compartmentalized organization to thwart authorities, who might eliminate one cell without jeopardizing the whole group.

  Theoretically it might be possible to set up a network with numerous redundant connections, and you would “start getting structures sort of like neural nets,” Baran said. But there was a technical limitation, since all signals on the telephone network were analog signals. The telephone-switching plan prohibited more than five links to be connected in tandem, because signal quality deteriorated rapidly with the increased number of tandem links. At each switched link, the signal would be slightly distorted and the quality incrementally degraded. This is similar to what happens when one makes copies of copies of audio tapes. With each new generation the quality deteriorates, eventually becoming hopelessly distorted.

  Unlike analog systems, digital technologies essentially convert information of all kinds, including sound and image, to a set of 1s and 0s. Digitized information can be stored efficiently and replicated an unlimited number of times within the circuits of a digital device, reproducing the data with almost perfect accuracy. In a communications context, information that is digitally encoded can be passed from one switch to the next with much less degradation than in analog transmission.
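
  To make the compounding effect concrete, here is a small illustrative sketch in Python, not something from the book: each tandem analog link is assumed to pass along roughly 98 percent of the signal quality it receives, while a digital switch simply regenerates the 1s and 0s at every hop.

```python
# Illustrative sketch: why a chain of analog links degrades a signal
# while a chain of digital links does not. The 98%-fidelity-per-hop
# figure is an assumption chosen only to show the compounding effect.

def analog_quality(hops, fidelity_per_hop=0.98):
    """Signal quality after passing through a number of analog links in tandem."""
    return fidelity_per_hop ** hops

def digital_quality(hops):
    """A digital switch regenerates clean 1s and 0s at every hop."""
    return 1.0  # transmission errors aside, the bit stream is reproduced exactly

for hops in (1, 5, 20):
    print(f"{hops:2d} hops: analog ~{analog_quality(hops):.2f}, digital {digital_quality(hops):.2f}")
```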

  As Baran wrote in his initial paper: “The timing for such thinking is particularly appropriate now, for we are just beginning to lay out designs for the digital data transmission system of the future.” Technologists could realistically envision new systems where computers would speak to one another, allowing for a network with enough sequentially connected links to create adequate levels of redundancy. These linked structures resemble—in a very modest way—the astonishingly complicated billions of linkages among the neurons in the brain. And digital computers offered speed. Mechanical telephone switches of the time took twenty or thirty seconds just to establish a single long-distance connection over a typical phone line.

  In speaking to various military commanders, Baran found that adequate communication in wartime requires the transmission of much more data than the conceptual “minimum essential communications.” Exactly how much more was hard to know, so Baran changed the objective to a network able to support almost any imaginable traffic volume.

  Baran’s basic theoretical network configuration was as simple as it was dramatically different and new. Telephone networks have always been constructed using central switching points. The most vulnerable are those centralized networks with all paths leading into a single nerve center. The other common design is a decentralized network with several main nerve centers around which links are clustered, with a few long lines connecting between clusters; this is basically how the long-distance phone system still looks in schematic terms today.

  Baran’s idea constituted a third approach to network design. He called his a distributed network. Avoid having a central communications switch, he said, and build a network composed of many nodes, each redundantly connected to its neighbor. His original diagram showed a network of interconnected nodes resembling a distorted lattice, or fish net.

  The question remained: How much redundancy in the interconnections between neighboring nodes would be needed for survivability? “Redundancy level” was Baran’s term for the degree of connectivity between nodes in the network. A distributed network with the absolute minimum number of links necessary to connect each node was said to have a redundancy level of 1, and was considered extremely vulnerable. Baran ran numerous simulations to determine the probability of distributed network survival under a variety of attack scenarios. He concluded that a redundancy level as low as 3 or 4—each node connecting to three or four other nodes—would provide an exceptionally high level of ruggedness and reliability. “Just a redundancy level of maybe three or four would permit almost as robust a network as the theoretical limit,” he said. Even after a nuclear attack, it should be possible to find and use some pathway through the remaining network.
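
  The flavor of those simulations can be sketched roughly in Python. The sketch below assumes a square “fish net” grid in which redundancy level 2 is the bare grid and levels 3 and 4 add diagonal cross-links, then destroys nodes at random and measures how many of the survivors can still reach one another. The grid model, the 40 percent kill rate, and every name in the code are illustrative assumptions, not Baran’s actual method or figures.

```python
import random
from collections import deque

def build_lattice(n, redundancy=2):
    """Approximate a distributed 'fish net' of n*n nodes.

    redundancy=2 is the bare grid; 3 adds one diagonal per node and
    4 adds both diagonals. A stand-in for Baran's redundancy level,
    not his exact construction.
    """
    links = {(x, y): set() for x in range(n) for y in range(n)}

    def connect(a, b):
        if a in links and b in links:
            links[a].add(b)
            links[b].add(a)

    for (x, y) in list(links):
        connect((x, y), (x + 1, y))
        connect((x, y), (x, y + 1))
        if redundancy >= 3:
            connect((x, y), (x + 1, y + 1))
        if redundancy >= 4:
            connect((x + 1, y), (x, y + 1))
    return links

def largest_component(links, alive):
    """Size of the biggest cluster of surviving, mutually reachable nodes."""
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for neighbor in links[node]:
                if neighbor in alive and neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        best = max(best, size)
    return best

def survivability(n=20, redundancy=2, kill_fraction=0.4, trials=200):
    """Average share of surviving nodes that can still reach one another
    after a random kill_fraction of the nodes has been destroyed."""
    total = 0.0
    for _ in range(trials):
        links = build_lattice(n, redundancy)
        alive = {node for node in links if random.random() > kill_fraction}
        if alive:
            total += largest_component(links, alive) / len(alive)
    return total / trials

for level in (2, 3, 4):
    print(f"redundancy {level}: ~{survivability(redundancy=level):.0%} of survivors still connected")
```

  The qualitative pattern—a few extra links per node buying most of the theoretically possible robustness—is the point of the exercise, and it is the pattern Baran reported.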

  “That was a most fortunate finding because it meant that we would not need to buy a huge amount of redundancy to build survivable networks,” Baran said. Even low-cost, unreliable links would suffice as long as there were at least three times the minimum number of them.

  Baran’s second big idea was still more revolutionary: Fracture the messages too. By dividing each message into parts, you could flood the network with what he called “message blocks,” all racing over different paths to their destination. Upon their arrival, a receiving computer would reassemble the message bits into readable form.

  Conceptually, this was an approach borrowed more from the world of freight movers than communications experts. Think of each message as if it were a large house and ask yourself how you would move that house across the country from, say, Boston to Los Angeles. Theoretically, you could move the whole structure in one piece. House movers do it over shorter distances all the time—slowly and carefully. However, it’s more efficient to disassemble the structure if you can, load the pieces onto trucks, and drive those trucks over the nation’s interstate highway system—another kind of distributed network.

  Not every truck will take the same route; some drivers might go through Chicago and some through Nashville. If a driver learns that the road is bad around Kansas City, for example, he may take an alternate route. As long as each driver has clear instructions telling him where to deliver his load and he is told to take the fastest way he can find, chances are that all the pieces will arrive at their destination in L.A. and the house can be reassembled on a new site. In some cases the last truck to leave Boston might be the first to arrive in L.A., but if each piece of the house carries a label indicating its place in the overall structure, the order of arrival doesn’t matter. The rebuilders can find the right parts and put them together in the right places.

  In Baran’s model, these pieces were what he called “message blocks,” and they were to be of a certain size, just as (in the truck analogy) most tractor-trailer vehicles share the same configuration. The advantage of the packet messaging technique was realized primarily in a distributed network that offered many different routes.

  Baran’s innovation also provided a much needed solution to the “bursty” nature of data communications. At the time, all communications networks were circuit-switched, which meant that a communications line was reserved for one call at a time and held open for the duration of that session. A phone call between teenagers will tie up a line for as long as they commiserate over their boyfriends and tell stories about their rivals. Even during pauses in their conversation, the line remains dedicated to that conversation until it ends. And technically this makes a lot of sense, since people tend to keep up a fairly steady flow of talk during a phone call.

  But a stream of data is different. It usually pours out in short bursts followed by empty pauses that leave the line idle much of the time, wasting its “bandwidth,” or capacity. One well-known computer scientist liked to use the example of a bakery with one counter clerk, where customers usually arrive in random bursts. The clerk has to stay at the counter throughout the day, sometimes busy, sometimes idle. In the context of a data communications network, it’s a highly inefficient way to utilize a long-distance connection.

  It would be dramatically more cost-effective, then, to send data in “blocks” and allocate bandwidth in such a way that different messages could share the line. A message would be divided into specific blocks, which would then be sent out individually over the network through multiple locations, and reassembled at their destination. Because there were multiple paths over which the different blocks could be transmitted, they could arrive out of sequence, which meant that once a complete message had arrived, however helter-skelter, it needed to be reassembled in the proper order. Each block would therefore need to contain information identifying the part of the message to which it belonged.
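
  A minimal sketch of that idea in Python might look like the following. The field names and the block size are assumptions made for illustration, not Baran’s actual format; the point is that each block carries enough labeling to let the receiver put a shuffled set of blocks back in order.

```python
import random

BLOCK_SIZE = 1024  # bytes per message block; the size here is illustrative

def split_into_blocks(message_id, payload, block_size=BLOCK_SIZE):
    """Divide a message into fixed-size blocks, each labeled with its place
    in the overall message so the destination can reassemble it."""
    chunks = [payload[i:i + block_size] for i in range(0, len(payload), block_size)]
    return [
        {
            "message_id": message_id,  # which message this block belongs to
            "sequence": index,         # its place in the overall message
            "total": len(chunks),      # how many blocks to expect
            "data": chunk,
        }
        for index, chunk in enumerate(chunks)
    ]

def reassemble(blocks):
    """Rebuild the message once every block has arrived, however helter-skelter."""
    blocks = sorted(blocks, key=lambda block: block["sequence"])
    assert len(blocks) == blocks[0]["total"], "still waiting on missing blocks"
    return b"".join(block["data"] for block in blocks)

message = b"x" * 5000
blocks = split_into_blocks(message_id=1, payload=message)
random.shuffle(blocks)  # blocks arrive out of sequence, having taken different paths
assert reassemble(blocks) == message
```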

  What Baran envisioned was a network of unmanned switches, or nodes—stand-alone computers, essentially—that routed messages by employing what he called a “self-learning policy at each node, without need for a central, and possibly vulnerable, control point.” He came up with a scheme for sending information back and forth that he called “hot potato routing,” which was essentially a rapid store-and-forward system working almost instantaneously, in contrast with the old post-it-and-forward teletype procedure.

  In Baran’s model, each switching node contained a routing table that behaved as a sort of transport coordinator or dispatcher. The routing table at each node reflected how many hops, or links, were required to reach every other node in the network. The table indicated the best routes to take and was constantly updated with information about neighboring nodes, distances, and delays—much like dispatchers or truck drivers who use their CB radios to keep one another informed of accidents, construction work, detours, and speed traps. The continuous updating of the tables is also known as “adaptive” or “dynamic” routing.

  As the term “hot potato” suggests, no sooner did a message block enter a node than it was tossed out the door again as quickly as possible. If the best outgoing path was busy—or blown to bits—the message block was automatically sent out over the next best route. If that link was busy or gone, it followed the next best route, and so forth. And if the choices ran out, the data could even be sent back to the node from which it originated.
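
  A toy version of that forwarding decision, written in Python under stated assumptions (the routing table maps each destination to candidate next hops ranked by estimated hop count, and the node names and figures are invented for illustration), might look like this:

```python
def choose_next_hop(routing_table, destination, link_ok, came_from=None):
    """Pick the best available outgoing link for a message block.

    routing_table[destination] is a list of (next_hop, estimated_hops) pairs,
    kept current by exchanging distance and delay estimates with neighbors,
    i.e. the 'adaptive' or 'dynamic' routing described above.
    """
    candidates = sorted(routing_table[destination], key=lambda entry: entry[1])
    for next_hop, _hops in candidates:
        if link_ok(next_hop):
            return next_hop  # toss the hot potato out the best open door
    return came_from         # last resort: send it back where it came from

# Hypothetical routing table at one node for traffic bound for node "H".
table = {"H": [("D", 2), ("E", 3), ("B", 5)]}
down = {"D"}  # suppose the best outgoing link is busy, or blown to bits
print(choose_next_hop(table, "H", link_ok=lambda hop: hop not in down, came_from="B"))
# prints "E": the next best route is used automatically
```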

  Baran, the inventor of the scheme, also became its chief lobbyist. He hoped to persuade AT&T of its advantages. But it wasn’t easy. He found that convincing some of the communications people within RAND of the feasibility of his ideas was difficult enough. The concepts were entirely unheard of in traditional telecommunications circles. Eventually he won his colleagues’ support. But winning over AT&T management, who would be called upon if such a network were to be built, proved nearly impossible.

  Baran’s first task was to show that the nation’s long-distance communications system (consisting of almost nothing but AT&T lines) would fail in a Soviet first strike. Not only did AT&T officials refuse to believe this but they also refused to let RAND use their long-distance circuit maps. RAND resorted to using a purloined set of AT&T Long Lines maps to analyze the telephone system’s vulnerability.

  Vulnerability notwithstanding, the idea of slicing data into message blocks and sending each block out to find its own way through a matrix of phone lines struck AT&T staff members as totally preposterous. Their world was a place where communications were sent as a stream of signals down a pipe. Sending data in small parcels seemed just about as logical as sending oil down a pipeline one cupful at a time.

  AT&T’s officials concluded that Baran didn’t have the first notion of how the telephone system worked. “Their attitude was that they knew everything and nobody outside the Bell System knew anything,” Baran said. “And somebody from the outside couldn’t possibly understand or appreciate the complexity of the system. So here some idiot comes along and talks about something being very simple, who obviously does not understand how the system works.”

  AT&T’s answer was to educate. The company began a seminar series on telephony, held for a small group of outsiders, including Baran. The classes lasted for several weeks. “It took ninety-four separate speakers to describe the entire system, since no single individual seemed to know more than a part of the system,” Baran said. “Probably their greatest disappointment was that after all this, they said, ‘Now do you see why it can’t work?’ And I said, ‘No.’”

  With the exception of a few supporters at Bell Laboratories who understood digital technology, AT&T continued to resist the idea. The most outspoken skeptics were some of AT&T’s most senior technical people. “After I heard the melodic refrain of ‘bullshit’ often enough,” Baran recalled, “I was motivated to go away and write a series of detailed memoranda papers, to show, for example, that algorithms were possible that allowed a short message to contain all the information it needed to find its own way through the network.” With each objection answered, another was raised and another piece of a report had to be written. By the time Baran had answered all of the concerns raised by the defense, communications, and computer science communities, nearly four years had passed and his volumes numbered eleven.

  In spite of AT&T’s intransigence, Baran believed he was engaged in what he called an “honest disagreement” with phone company officials. “The folks at AT&T headquarters always chose to believe their actions were in the best interest of the ‘network,’ which was by their definition the same as what was best for the country,” he said.

  By 1965, five years after embarking on the project, Baran had the full support of RAND, and that August RAND sent a formal recommendation to the Air Force that a distributed switching network be built, first as a research and development program and later as a fully operational system: “The need for a survivable . . . flexible, user-to-user communications system is of overriding importance. We do not know of any comparable alternative system proposals to attain this capability, and we believe that the Air Force should move swiftly to implement the research and development program proposed herein.”

  The Air Force agreed to it. Now the only people holding out were the AT&T officials. The Air Force told AT&T it would pay the phone company to build and maintain the network. But the phone company was not to be swayed.

  The Air Force, determined not to let the plan die on the drawing boards, decided that it should proceed without AT&T’s cooperation. But the Pentagon decided to put the newly formed Defense Communications Agency (DCA), not the Air Force, in charge of building the network. Baran pictured nothing but trouble. The agency was run by a group of old-fashioned communications officers from each of the various services, with no experience in digital technology. To make matters worse, DCA’s enthusiasm for the project was on a par with the response from AT&T. “So I told my friends in the Pentagon to abort this entire program—because they wouldn’t get it right,” Baran recalled. “It would have been a damn waste of government money and set things back. The DCA would screw it up and then no one else would be allowed to try, given the failed attempt on the books.”

 
