From Gutenberg to Google


by Tom Wheeler


  A Hint of Things to Come

  The seeds of the information age were planted when the telegraph separated the speed of information transmission from the speed of transportation. The binary signaling that drives today’s networks and devices began with the dots and dashes of the telegraph. Perhaps even more critical than its technological effects, however, is how the telegraph created the sociological and economic reality that drives our time. By removing time as a factor in the distribution of information, the telegraph introduced the modern imperative that if it is possible to possess information, then it is necessary to possess it.

  Five years after Morse’s fateful message, Patent Commissioner Thomas Ewbank’s 1849 Report to Congress unknowingly wed the two technologies that would define the future. Writing about the wonderment of electronic distribution, Ewbank marveled, “Morse and his compeers have bridled the most subtle, fitful, and terrific of agents, taught it to wait … and when charged with a message, to assume the character of a courier whose speed rivals thought and approaches volition.”

  Then Ewbank saw into our generation: “If machinery don’t think: it does that which nothing but severe and prolonged thinking can do, and it does it incomparably better.” Channeling Babbage, the commissioner wrote: “In the composition of astronomical and nautical tables, accuracy is everything … [but] perfection in elaborate and difficult calculations is unattainable with certainty by human figuring … [thus] automata have been made to work out arithmetical problems with positive certainty and admirable expedition.”80

  It was the hint of things to come: the marriage of high-speed delivery of information with machines’ ability to “think.”

  Connections

  Alexander Graham Bell was working on improvements to telegraph technology when he discovered how to put sound on wires. The demand for telegraph service had precipitated a search for technologies that could compress multiple messages onto a single line and thus avoid the expense of stringing more wire. Bell had hypothesized that by sending telegraph signals at different musical pitches (frequencies) he could shoehorn several messages into a single wire simultaneously. To create his tones, Bell used the same technology found in a clarinet or other woodwind instrument: a vibrating reed. The dots and dashes created by the vibration of the reed would play out at the other end only when they resonated with a similarly tuned reed.
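  In modern terms, Bell's harmonic telegraph was frequency-division multiplexing: several on-off messages share one wire, each keyed at its own pitch, and each receiver responds only to its own frequency, just as a tuned reed does. Here is a minimal numerical sketch of that idea in Python; the pitches, slot length, and message bits are illustrative assumptions, not Bell's actual values.

```python
import numpy as np

RATE = 8000          # samples per second (illustrative)
BIT = 0.1            # seconds per on-off "dot" slot (illustrative)

def key_tone(bits, freq):
    """Turn an on-off message into a tone keyed at one pitch."""
    t = np.arange(int(BIT * RATE)) / RATE
    tone = np.sin(2 * np.pi * freq * t)
    return np.concatenate([tone * b for b in bits])

def tuned_reed(signal, freq):
    """Recover the on-off pattern by measuring energy at one pitch,
    as a reed tuned to that frequency would."""
    n = int(BIT * RATE)
    t = np.arange(n) / RATE
    ref = np.exp(2j * np.pi * freq * t)
    slots = signal.reshape(-1, n)
    energy = np.abs(slots @ ref) / n      # ~0.5 when the tone is present
    return (energy > 0.25).astype(int)

msg_a, msg_b = [1, 0, 1, 1], [0, 1, 1, 0]
wire = key_tone(msg_a, 440) + key_tone(msg_b, 660)  # one shared wire
print(tuned_reed(wire, 440))  # [1 0 1 1]
print(tuned_reed(wire, 660))  # [0 1 1 0]
```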

  On June 2, 1875, one of Bell’s reeds became stuck. When his assistant, Thomas Watson, hit the reed in an attempt to free it, Bell heard the resulting noise on the other end of the line.81 Sound had been transmitted electronically! The same electromagnetism that made the telegraph sounder key rise and fall, Bell discovered, could carry sound when the sound-creating device on one end of the line had the same harmonic characteristics as the device on the other end.

  That evening Watson constructed two devices in which an armature was mounted so that one end touched a stretched membrane and the other end an electromagnet. At the originating end a cone directed sound against the membrane so as to activate the electromagnet and produce a current. The other end, connected by wire, reversed the process as the armature, stimulated by the electromagnet, vibrated against the membrane. Watson recorded that he “could unmistakably hear the tones of [Bell’s] voice and almost catch a word now and then.”82

  The electromagnetically controlled movement of an armature was driven by exactly the same physics that govern the telegraph. What Bell and Watson needed, however, was something more than the simple stopping and starting of a current flow. They needed to shape the current to resemble the acoustic signals of speech.83 It took many months of being sequestered in the attic of Bell’s Boston boardinghouse before the pair discovered the means of creating such a variable current.

  To turn the sound into an electric current, Bell captured the sound waves on a membrane that vibrated against an armature connected to a battery. Then, to capture the variations a voice makes in conversation, impedance was introduced between the armature and completion of the circuit. The first successful impedance was provided by acid water. One end of a pinlike armature touched the membrane while the other end fluctuated up and down in the acid water and its electric current. As the needle vibrated, it moved closer to or farther away from the circuit contact immersed in the liquid. Though the acidic solution conducted electricity, it did so imperfectly. Thus the closer to the contact the needle came, the stronger the current it transmitted. Unlike the pulsing binary off and on of the telegraph, Bell’s circuit produced a waveform similar to that of sound.84
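  In circuit terms, Bell's liquid transmitter is Ohm's law with a resistance that follows the sound wave: as the needle rides the membrane deeper into or out of the acid, the resistance of the path changes, so the current traces a speech-shaped curve rather than the telegraph's on-off pulses. A minimal numerical sketch follows; the battery voltage, resistances, and test tone are illustrative assumptions, not measurements of Bell's apparatus.

```python
import numpy as np

V = 6.0                      # battery voltage, volts (illustrative)
R0 = 100.0                   # fixed circuit resistance, ohms (illustrative)

t = np.linspace(0, 0.01, 500)          # 10 ms of "speech"
sound = np.sin(2 * np.pi * 300 * t)    # membrane displacement: a 300 Hz tone

# Needle depth in the acid varies the contact resistance around 50 ohms:
# closer to the immersed contact means less resistance, more current.
r = 50.0 * (1.0 - 0.8 * sound)

current = V / (R0 + r)       # Ohm's law: a continuously varying current
print(current.min(), current.max())    # a waveform, not on-off pulses
```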

  On March 10, 1876, Bell spilled a bit of the acid and shouted for help. “Mr. Watson—Come here—I want to see you.” As Bell recorded in his notebook that day: “To my delight he came and declared that he had heard and understood what I said.”

  “Mr. Watson—Come here” joined “What hath God wrought” in immortality. Later that evening, Alexander Graham Bell wrote to his father, “I feel that I have at last found the solution of a great problem and the day is coming when telegraph wires will be laid on to houses just like water and gas is, and friends will converse with each other without leaving home.”85

  In 1876, Bell filed for a patent on his discovery. The patent was described as “Improvements in Telegraphy.”86

  The telephone could be considered a technological step backward in the progression from the telegraph to the internet. Whereas the telegraph was a binary on-off signal, just like today’s digital impulses, the telephone was an analog waveform—a speech-shaped electrical current. The telephone’s contribution to the march to the digital future lay not in its technology but rather in the way in which it became ubiquitous. The technological step backward would become a monumental step forward when early digital information was disguised as a telephone call to piggyback on Bell’s widespread network.
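  That disguise was the modem. Early modems such as the Bell 103 used frequency-shift keying, turning each bit into a burst of one of two audible tones so that, to the analog network, data sounded like any other call. Below is a minimal sketch of the idea; the tone pair follows the Bell 103 originate-side convention, while the bit rate and the detection method are simplified for illustration.

```python
import numpy as np

RATE = 8000                  # samples per second
BAUD = 10                    # bits per second (slowed down for clarity)
MARK, SPACE = 1270, 1070     # Bell 103 originate-side tones, Hz

def modulate(bits):
    """Bits -> audio: each bit becomes a burst of one of two tones."""
    n = RATE // BAUD
    t = np.arange(n) / RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (MARK if b else SPACE) * t) for b in bits])

def demodulate(audio):
    """Audio -> bits: compare energy at the two tone frequencies."""
    n = RATE // BAUD
    t = np.arange(n) / RATE
    refs = {f: np.exp(2j * np.pi * f * t) for f in (MARK, SPACE)}
    bits = []
    for slot in audio.reshape(-1, n):
        bits.append(int(abs(slot @ refs[MARK]) > abs(slot @ refs[SPACE])))
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
print(demodulate(modulate(data)))  # [1, 0, 1, 1, 0, 0, 1]
```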

  Part III

  The Road to Revolution

  The nineteenth century belonged to those who harnessed steam and sparks. By 1910, there were 351,767 miles of railroad track (compared with 204,000 miles of surfaced roads) linking the nation’s villages and towns and drawing resources into urban production centers. As the tracks brought raw materials to central points to be processed, the people followed. The population of America’s major cities increased seventeenfold between the early days of the railroad, in 1840, and 1910, by which time almost one-third of the nation’s population resided in urban areas of greater than 100,000 people.1

  Telegraph lines similarly spread like a skein across the landscape. By 1900, one company, Western Union, operated more than a million miles of wires carrying messages by spark.2 Foretelling the future, Alexander Graham Bell’s accident was also spreading. Shortly after the turn of the century, in 1907, there were 7.6 million telephones in the United States, as the communications device that talked appeared in offices and homes.3

  As these networks expanded into the twentieth century, they were incubating forces that would reshape that century and set the stage for the future. When these forces combined, they initiated the third great period of network revolution.

  The networks of the nineteenth century had transformed the nature of physical connections by overcoming the constraints of distance and time. The networks of the twentieth century would add computational mathematics to interconnect virtually all information in incrementally costless delivery via a network of networks. Arriving at this point required a multistep process involving the computing devices themselves, the networks that connected them, and, ultimately, the delivery of connected computing power anywhere without wires.

  The core breakthroughs that had thrice redefined physical networks became the foundation of the new virtual networks. The concepts in Charles Babbage’s analytical engine evolved to become the computer. Binary electric signals that sent telegraph messages through regenerating relays evolved to become the logic circuits of those computers. And Johannes Gutenberg’s disassembly of information into small pieces for subsequent reassembly became the format with which computers solved problems and networks exchanged information.

  Five

  Computing Engines

  The cold December wind cut through central Iowa, slicing everything it touched, as the young professor took leave from the warmth of his family’s evening dinner to return to his office on campus.

  John Vincent Atanasoff, a thirty-four-year-old associate professor of physics at Iowa State College (now Iowa State University) in Ames, was obsessed with the idea that complex algebraic equations could be solved by a machine. “I went out to the office intending to spend the evening trying to resolve some of these questions,” he would later recall, but “I was in such a mental state that no resolution was possible.”1

  It was 1937, and the state of the computational art had been unchanged for centuries; a machine could calculate (that is, add numbers), but human intervention was required to solve variable equations with a large number of calculations. The term “computer,” in fact, referred to human beings sitting at endless rows of desks working with pencil and paper, slide rule, or mechanical calculator to produce one piece of a complex algorithm, which would then be combined with the work product of other “computers” in a laborious and slow march to an answer.

  That cold December evening, Professor Atanasoff could find no inspiration at his desk. Thus, “I did something that I had done on such occasions…. I went out to my automobile, got in, and started driving.”2

  Leaving campus, he turned left onto Lincoln Highway and drove through Ames. Progressing eastward through Nevada (pronounced “Ne-vay-da” by the locals), the professor fell into a trance, with part of his brain minding the road ahead and the rest subconsciously churning away on his intractable problem.

  Suddenly, he was almost 200 miles from Ames. The Mississippi River was approaching. It had been a surprisingly long drive that passed quickly owing as much to the professor’s heavy foot as to the distraction of his musing. The opposite side of the Big Muddy beckoned with an opportunity not available in liquor-controlled Iowa: a roadside tavern and a glass of whiskey. “I drove into Illinois and turned off the highway into a little road, and went to a roadhouse, which had bright lights…. I sat down and ordered a drink…. As the delivery of the drink was made, I realized that I was no longer so nervous and my thoughts turned again to computing machines.”3

  At a corner table, away from the bar, sipping roadhouse bourbon and soda, John Atanasoff’s thoughts began to sharpen. When he left the tavern, he had assembled in his mind the basic framework of a modern computer:

  Electronics would replace the gears and levers of calculators to create a logic circuit.

  Because electricity had two states—on and off—the machine would dispense with the decimal system’s ten digits in favor of a binary digital system.

  Vacuum tubes would provide the digital on-off signals; different on-off patterns among a collection of tubes would represent different numbers.

  Memory storage would utilize capacitors capable of storing electricity that were “jogged” (Atanasoff’s term) occasionally with additional electricity to keep them from losing what they were storing, an idea sketched below.
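  The last two points anticipate binary arithmetic and the refresh cycle of modern dynamic memory. Here is a toy sketch of the “jogging” idea in Python; the leakage rate, refresh interval, and read threshold are arbitrary illustrations, not Atanasoff's figures.

```python
import random

LEAK = 0.8          # each tick a capacitor keeps 80% of its charge (illustrative)
THRESHOLD = 0.5     # above this charge, the bit reads as 1

def tick(charges, refresh):
    """One time step: charge leaks away; optionally 'jog' the cells."""
    charges = [c * LEAK for c in charges]
    if refresh:
        # Read each bit and rewrite it at full charge, as Atanasoff's
        # machine periodically re-energized its capacitors.
        charges = [1.0 if c > THRESHOLD else 0.0 for c in charges]
    return charges

word = [random.choice([0.0, 1.0]) for _ in range(8)]  # one 8-bit binary word
stored = list(word)
for step in range(1, 21):
    stored = tick(stored, refresh=(step % 2 == 0))    # jog every other tick
    # Without the jog, a stored 1 would fade below the threshold
    # after four ticks (0.8 ** 4 is about 0.41).

print([int(c > THRESHOLD) for c in word])    # original bits
print([int(c > THRESHOLD) for c in stored])  # still intact, thanks to refresh
```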

  It was among the cornfields of Ames, Iowa, not at some highbrow eastern research university, that the world’s first electronic digital computer came into being. During the winter of 1938–39, assisted by graduate student Clifford Berry, John Atanasoff assembled his ideas into a machine the size of a desk (76 inches long, 36 deep, and 40 high). Sitting in a basement corner of the Iowa State physics building, the Atanasoff-Berry Computer could solve twenty-nine linear equations with twenty-nine unknowns.4
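  Solving simultaneous linear equations, essentially by Gaussian elimination, was the machine's whole purpose. The sketch below works the same problem class on a three-equation system; numpy stands in purely as a modern illustration, since the ABC itself worked in binary on rotating drums of capacitors.

```python
import numpy as np

# A small instance of the problem class the Atanasoff-Berry Computer
# tackled: n linear equations in n unknowns (it scaled to n = 29).
#   2x +  y -  z =   8
#  -3x -  y + 2z = -11
#  -2x +  y + 2z =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)    # Gaussian elimination under the hood
print(x)                     # [ 2.  3. -1.]
```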

  The project’s cost was $650, $450 of which was Berry’s graduate stipend.

  It was a discovery as seminal as Gutenberg’s first tray of lead letters. And like Gutenberg, who saw his discovery suborned by financial chicanery, Atanasoff soon saw his breakthrough pilfered.5

  Calculating Machines

  Until the first third of the twentieth century, the mechanization of mathematics had remained essentially unchanged for 5,000 years. The one exception was Charles Babbage, whom we last met in chapter 3 wistfully musing, “I wish to God these calculations had been executed by steam.”6

  Babbage’s first attempt to mechanize math, the difference engine of 1822, was a mechanical iteration of the same concept the Babylonians had put to work with the abacus several millennia earlier. With its columns of tokens attached to rods representing units, tens, hundreds, thousands, and so on, the Babylonian abacus established the construct for mathematical manipulation: the adding or subtracting from each column to produce a conclusion.

  The first effort to mechanize the functions of the abacus was in 1642.7 Blaise Pascal, a nineteen-year-old French “wonder-jeune,” created a shoebox-sized device, appropriately named the Pascaline, which contained interacting cogged-gear wheels. Windows on the top of the box displayed the numerical settings of the gears beneath. When the next number was dialed in, the gears acted just like the tokens on the abacus, moving the necessary number of notches and displaying the corresponding sum.8 The nifty innovation of the Pascaline was a toggle lever between the gears that handled the “carry” function. When a gear finished its revolution the toggle would fall into place to advance the gear to the left by one position.9
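  The toggle is easy to model: each wheel holds one decimal digit, and when a wheel rolls past 9 it trips the wheel to its left by one notch. Here is a toy sketch of that ripple-carry mechanism in Python; the register width and sample numbers are arbitrary illustrations.

```python
def pascaline_add(wheels, digit, position=0):
    """Add one digit at a wheel position, rippling carries leftward.
    wheels[0] is the units wheel, wheels[1] the tens, and so on."""
    carry, wheels = digit, list(wheels)
    pos = position
    while carry and pos < len(wheels):
        total = wheels[pos] + carry
        wheels[pos] = total % 10      # the wheel's new resting notch
        carry = total // 10           # the toggle trips the next wheel
        pos += 1
    return wheels

register = [9, 9, 0, 0]              # reads right to left as 0099
print(pascaline_add(register, 5))    # [4, 0, 1, 0] -> reads as 0104
```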

  Babbage’s difference engine followed a similar construction of rods, gears, and levers, all powered by weights raised by a steam engine. Recall that Babbage was working on astronomical tables at the time of his exclamation about a steam-powered calculator. The calculation of these tables was a mind-numbing iterative process involving the repetitive application of common inputs (a multiplication table is an example of common input calculation). It was this constant application of a common input that started Babbage wondering whether the process might be susceptible to mechanization.10

  Babbage built only a small proof-of-concept portion of the difference engine.11 As he painstakingly laid out his concept in hundreds of pages of illustrations, however, by 1837 his thinking had broadened to a more advanced engine capable of using the results of a previous calculation to begin a new calculation.12 Here was a conceptual breakthrough—a machine in which previous calculations would automatically feed the next. He called this concept his “analytical engine.”

  In his move from the difference engine to the analytical engine, Charles Babbage became the first person to close the gap between calculating and computing. A calculating device (whether an abacus, Pascaline, or difference engine) performs a single function on a single variable—adding or subtracting (or multiplying and dividing by repeated iteration). A computing device is capable of calculating multiple variables and then acting based upon its own computations. While the analytical engine still in many ways behaved like a giant Pascaline, Babbage had conceived of new functions that today we recognize as the essential components of a computer. He described them in nineteenth-century terms (a sketch after the list maps each onto modern code):

  “Programs”: instructions input via punch cards, a technique that would be used to input computer data well into the second half of the twentieth century.

  “Store”: the component to receive and hold instructions. Today we call it random access memory (RAM).

  “Mill”: the quaint industrial-age term applied to the component in which the numbers were manipulated by geared wheels interacting with each other. Today we would call it a central processing unit (CPU).

  “Barrel”: the control unit that instructed the mill when and how to operate. Babbage envisioned protrusions that pushed various rods to create the processing sequence, similar to the studded cylinder in a music box.

  “Memory”: to solve the carry problem, the adding function was separated from the carry both in function and in timing; the machine would hold a pending carriage while the addition was under way.
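  Read with modern eyes, Babbage's list describes a recognizably modern machine: memory, a processor, and a control sequence, with one operation's result feeding the next. Below is a toy sketch of that mapping in Python; the variable names and the tiny instruction set on the “barrel” are invented for illustration, not drawn from Babbage's plans.

```python
# A toy "analytical engine": store = memory, mill = arithmetic,
# barrel = a fixed sequence of operations (the program).
store = {"V0": 7, "V1": 5, "V2": 0, "V3": 0}   # numbered variables in the store

def mill(op, a, b):
    """The mill manipulates two numbers fetched from the store."""
    return {"ADD": a + b, "SUB": a - b, "MUL": a * b}[op]

# The barrel: each "stud" tells the mill what to do and where the result
# goes, so a previous calculation automatically feeds the next.
barrel = [
    ("ADD", "V0", "V1", "V2"),   # V2 = V0 + V1
    ("MUL", "V2", "V0", "V3"),   # V3 = V2 * V0  (uses the previous result)
]

for op, src1, src2, dst in barrel:
    store[dst] = mill(op, store[src1], store[src2])

print(store)   # {'V0': 7, 'V1': 5, 'V2': 12, 'V3': 84}
```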

  As with his previous effort, Babbage never physically constructed the analytical engine. Its intellectual breakthrough, however, survives in hundreds of pages of diagrams and descriptions.

  At a time when the mechanization of mathematics was still based on the concepts of the abacus, Charles Babbage conceptualized a computing device.

  The magnitude of Charles Babbage’s breakthroughs defied the capabilities of the English language. Searching for a way to describe the concept of a machine that would produce a result that would guide its next action, Babbage fell back on the technological marvel of his age. The analytical engine was “a locomotive that lays down its own railway.”13

  Despite its status as one of the great intellectual achievements of the nineteenth century, Charles Babbage’s vision of a computing device died with him in 1871.14 Babbage’s breathtaking concepts were relegated to the attic of quirky ideas.15

  The Industrial Revolution was one of the culprits in the demise of Babbage’s ideas. As industrial activity expanded, so did the need for the calculation of everything from boxcars to their contents, not to mention the large cash flows across multiple sources and uses. Science might need multivariable computation, but commerce required good old number crunching. It was in this period that the first “computers”—the human kind—were assembled to perform accounting functions, and increasingly complex calculators, derivatives of the Pascaline, began appearing on their desks.

  Around the world, inventors took advantage of the industrial age’s advances in machine tooling to make the gears of a calculator perform with precision similar to the gears in a Swiss watch. One such American inventor, a former bank clerk named William S. Burroughs, developed a manual calculator (all calculators were advanced by a manual function such as pulling a lever) in which the numbers were entered by pushing keys and the results were printed out on a roll of paper. The Burroughs Adding Machine Company became the dominant player in the calculator market.16
