Asimov's New Guide to Science


by Isaac Asimov


  The engine’s physical movements were to be performed by rods, cylinders, gear racks, and geared wheels cut in accordance with the ten-digit decimal system. Bells would tell attendants to feed in certain cards, and louder bells would warn them if they had inserted a wrong card.

  Unfortunately, Babbage, a hot-tempered and eccentric person, periodically tore his machines apart to rebuild them in more complex fashion as new ideas came to him, and he inevitably ran out of money.

  Even more important was the fact that the mechanical wheels and levers and gears on which he had to depend were simply not up to the demands he put upon them. The Babbage machines required a technology more subtle and responsive than that which sufficed for a Pascal machine, and such a technology had not yet arrived.

  For these reasons, Babbage’s work petered out and was forgotten for a century. When calculating machines of the Babbage type were eventually constructed successfully, it was because his principles were independently rediscovered.

  ELECTRONIC COMPUTERS

  A more successful application of punch cards to the task of calculation arose out of the demands of the United States census. The American Constitution directs that a census be taken every ten years, and such a statistical survey of the nation’s population and economy proved invaluable. In fact, every ten years, not only did the population and wealth of the nation increase, but the statistical detail demanded increased as well. The result was that, increasingly, it took an enormous amount of time to work out all the statistics. By the 1880s, it began to seem that the 1880 census might not be truly complete till the 1890 census was nearly due.

  It was then that Herman Hollerith, a statistician with the Census Bureau, worked out a way of recording statistics by a system of the mechanical formation of holes in appropriate positions in cards. The cards themselves were nonconductors, but electrical currents could pass along contacts made through the holes; and in this way, counting and other operations could be carried through automatically by electrical currents—an important, and even crucial, advance on Babbage’s purely mechanical devices. Electricity, you see, was up to the job.
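
  The counting idea can be pictured, in modern terms, with a short sketch. It is only an illustration, not Hollerith’s actual card layout or circuitry: the positions, their meanings, and the little tally loop below are all invented, standing in for the pins that closed a circuit wherever a hole had been punched.

    # A toy sketch of punch-card tabulation (invented layout, not Hollerith's):
    # each card is the set of positions punched for one person's record, and a
    # "contact" through a hole advances the counter assigned to that position.

    from collections import Counter

    # Hypothetical meanings: position 3 might stand for "male", 4 for "female",
    # 17 for a given occupation, and so on.
    cards = [
        {3, 17, 42},
        {4, 17, 40},
        {3, 19, 42},
    ]

    tallies = Counter()
    for card in cards:
        for position in card:
            # In the machine, a pin passing through the hole closed a circuit
            # and stepped an electromechanical counter; here we just add one.
            tallies[position] += 1

    print(tallies)   # e.g. Counter({3: 2, 17: 2, 42: 2, 4: 1, 40: 1, 19: 1})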

  Hollerith’s electromechanical tabulating machine was successfully used in the U.S. Censuses of 1890 and 1900. The 1890 census of 65 million people took two and a half years to tabulate even with the Hollerith device. By 1900, he had improved his machines, however, so that cards could be automatically fed through brushes for reading, and the new and larger 1900 census had its count completed in a little over one and a half years.

  Hollerith founded a firm that later became International Business Machines (IBM). The new company, and Remington Rand, which grew out of the work of Hollerith’s successor at the Census Bureau, James Powers, steadily improved the system of electromechanical computation over the next thirty years.

  They had to.

  The world economy, with advancing industrialization, was steadily becoming more complex; and, increasingly, the only way to run the world successfully was to know more and more about the details of the statistics involved, of numbers, of information. The world was becoming an information society, and it would collapse under its own weight if humanity did not learn to collect, understand, and respond to the information quickly enough.

  It was this sort of unforgiving pressure, of having to handle increasing quantities of information, that drove society forward toward the invention of successively more subtle, variegated, and capacious computing devices throughout the twentieth century.

  Electromechanical machines became faster and were used through the Second World War, but their speed and reliability were limited as long as they depended on moving parts like switching relays and on electromagnets that controlled counting wheels.

  In 1925, the American electrical engineer Vannevar Bush and his colleagues constructed a machine capable of solving differential equations. It could do what Babbage had hoped to do with his machine, and was the first successful instrument that we would today call a computer. It was electromechanical.

  Also electromechanical, but even more impressive, was a machine designed in 1937 by Howard Aiken of Harvard, working with IBM. The machine, the IBM Automatic Sequence Controlled Calculator, known at Harvard as Mark I, was completed in 1944 and was intended for scientific applications. It could perform mathematical operations involving up to twenty-three decimal places.

  In other words, two eleven-digit numbers could be multiplied, correctly, in three seconds. It was electromechanical; and since it dealt primarily with the manipulation of numbers, it was the first modern digital computer. (Bush’s device solved problems by converting numbers into lengths, as a slide rule does; and because it used analogous quantities, not the numbers themselves, it was an analog computer.)

  For complete success, however, the switches in such computers had to be electronic. Mechanical interruption and reinstatement of electric currents, while far superior to wheels and gears, was still clumsy and slow, to say nothing of unreliable. In electronic devices, such as radio tubes, the electron flow could be manipulated far more delicately, accurately, and speedily, and this was the next step.

  The first large electronic computer, containing 19,000 vacuum tubes, was built at the University of Pennsylvania by John Presper Eckert and John William Mauchly during the Second World War. It was called ENIAC, for Electronic Numerical Integrator and Computer. ENIAC ceased operation in 1955 and was dismantled in 1957, a hopelessly outmoded dotard at twelve years of age, but it left behind an amazingly numerous and sophisticated progeny. Whereas ENIAC weighed 30 tons and took up 1,500 square feet of floor space, the equivalent computer thirty years later—using switching units far smaller, faster, and more reliable than the old vacuum tubes—could be built into an object the size of a refrigerator.

  So fast was progress that by 1948, small electronic computers were being produced in quantity; within five years, 2,000 were in use; by 1961, the number was 10,000. By 1970, the number had passed the 100,000 mark, and that was scarcely a beginning.

  The reason for the rapid advance was that although electronics was the answer, the vacuum tube was not. It was large, fragile, and required a great deal of energy. In 1948, the transistor (see chapter 9) was invented; and thanks to such solid-state devices, electronic control could be carried through sturdily, compactly, and with trivial expenditure of energy.

  Computers shrank and grew cheap even as they increased their capacity and versatility enormously. In the generation after the invention of the transistor, new ways were found, in rapid succession, to squeeze ever more information capacity and memory into smaller and smaller bits of solid-state devices. In the 1970s, the microchip came into its own—a tiny bit of silicon on which numbers of circuits were etched under a microscope.

  The result was that computers became affordable to private individuals of no great wealth. It may be that the 1980s will see the proliferation of home computers as the 1950s saw the proliferation of home television sets.

  The computers that came into use after the Second World War already seemed to be “thinking machines” to the general public, so that both scientists and laypeople began to think of the possibilities, and consequences, of artificial intelligence, a term first used in 1956 by an M.I.T. computer engineer, John McCarthy.

  How much more so when, in just forty years, computers have become giants without which our way of life would collapse. Space exploration would be impossible without computers. The space shuttle could not fly without them. Our war machine would collapse into Second World War weaponry without them. No industry of any size, scarcely any office, could continue as presently constituted without them. The government (including particularly the Internal Revenue Service) would become even more helpless than it ordinarily is without them.

  And consequently new uses are being worked out for them. Aside from solving problems, doing graphics, storing and retrieving data, and so on, they can be bent to trivial tasks. Some can be programed to play chess with near-master ability, while some can be used for games of all kinds that by the 1980s had caught the imagination of the younger public to the tune of billions of dollars. Computer engineers are laboring to improve the ability of computers to translate from one language to another, and to give them the ability to read, to hear, and to speak.

  ROBOTS

  The question arises, inevitably, is there anything computers can, in the end, not do? Are they not, inevitably, going to do anything we can imagine? For instance, can a computer of the proper sort somehow be inserted into a structure resembling the human body, so that we can finally have true automata—not the toys of the seventeenth century, but artificial human beings with a substantial fraction of the abilities of human beings?

  Such matters were considered quite seriously by science-fiction writers even before the first modern computers were built. In 1920, a Czech playwright, Karel Capek, wrote R. U. R., a play in which automata are mass-produced by an Englishman named Rossum. The automata are meant to do the world’s work and to make a better life for human beings; but in the end they rebel, wipe out humanity, and start a new race of intelligent life themselves.

  Rossum comes from a Czech word, rozum, meaning “reason”; and R. U. R. stands for “Rossum’s Universal Robots,” where robot is a Czech word for “worker,” with the implication of involuntary servitude, so that it might be translated as “serf” or “slave.” The popularity of the play threw the old term automaton out of use. Robot has replaced it in every language, so that now a robot is commonly thought of as any artificial device (often pictured in at least vaguely human form) that will perform functions ordinarily thought to be appropriate for human beings.

  On the whole, though, science fiction writers did not treat robots realistically but used them as cautionary objects, as villains or heroes designed to point up the human condition.

  In 1939, however, Isaac Asimov,* only nineteen at the time, tiring of robots that were either unrealistically wicked or unrealistically noble, began to devote some of the science-fiction stories he was publishing to robots that were viewed merely as machines and built, as all machines are, with some rational attempt at adequate safeguards. Throughout the 1940s, he published stories of this sort; and in 1950, nine of them were collected into a book entitled I, Robot.

  Asimov’s safeguards were formalized as the “Three Laws of Robotics.” The phrase was first used in a story published in March 1942, and that was the very first known use of the word robotics, the now-accepted term for the science and technology of the design, construction, maintenance and use of robots.

  The three rules are:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
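
  Purely as an illustration of how the three Laws nest, each yielding to the ones above it, they can be read as a strict priority ordering over a robot’s possible actions. Everything in the sketch below (the Action fields, the example actions, the selection function) is invented for the purpose; it describes Asimov’s fictional hierarchy, not any real robot-control software.

    # Illustrative only: the Three Laws as a strict priority ordering over a
    # robot's candidate actions. All fields and examples here are invented.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # would this act, or this failure to act, let a human come to harm?
        obeys_orders: bool     # does it carry out the humans' current orders?
        endangers_self: bool   # does it put the robot itself at risk?

    def choose_action(candidates):
        # First Law: discard anything that harms a human being (or allows harm).
        safe = [a for a in candidates if not a.harms_human]
        # Second Law: prefer obedience, unless no obedient action survived the
        # First Law filter (orders that conflict with it are simply not followed).
        obedient = [a for a in safe if a.obeys_orders] or safe
        # Third Law: prefer self-preservation, but only among actions the first
        # two Laws already permit.
        preserving = [a for a in obedient if not a.endangers_self] or obedient
        return preserving[0] if preserving else None

    actions = [
        Action("stand idle while a human is endangered", harms_human=True,
               obeys_orders=False, endangers_self=False),
        Action("shield the human, damaging an arm", harms_human=False,
               obeys_orders=True, endangers_self=True),
    ]
    print(choose_action(actions).name)   # shield the human, damaging an arm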

  What Asimov did was, of course, purely speculative and could, at best, only serve as a source of inspiration. The real work was being done by scientists in the field.

  Partly, this was being done through the pressures of the Second World War. The application of electronics made it possible to endow weapons with a sensitivity and swiftness of response even beyond the capabilities of a living organism. Furthermore, radio extended their sphere of action over a considerable distance. The German buzz bomb of the war was essentially a flying servomechanism, and it introduced the possibility not only of guided missiles but also of self-operated or remotely operated vehicles of all sorts, from subway trains to space ships. Because the military establishments had the keenest interest in these devices, and the most abundant supply of funds, servomechanisms have reached perhaps their highest development in aiming-and-firing mechanisms for guns and rockets. These systems can detect a swiftly moving target hundreds of miles away, instantly calculate its course (taking into account the target’s speed of motion, the wind, the temperatures of the various layers of air, and numerous other conditions), and hit the target with pinpoint accuracy, all without any human guidance.

  Automation found an ardent theoretician and advocate in the mathematician Norbert Wiener, who worked on such targeting problems. In the 1940s, he and his group at the Massachusetts Institute of Technology worked out some of the fundamental mathematical relationships governing the handling of feedback. He named this branch of study cybernetics, from the Greek word for “helmsman,” which seems appropriate, since the first use of servomechanisms was in connection with a helmsman. (Cybernetics also harks back to Watt’s centrifugal governor, for governor comes from the Latin word for “helmsman.”)

  Wiener’s book Cybernetics, published in 1948, was the first important book devoted entirely to the theory of computer control, and cybernetic principles made it possible to build, if not a robot, then at least devices that mimic the behavior of simple animals.

  The British neurologist William Grey Walter, for instance, built a device in the 1950s that explores and reacts to its surroundings. His turtlelike object, which he calls a testudo (Latin for “tortoise”), has a photoelectric cell for an eye, a sensing device to detect touch, and two motors—one to move forward or backward, and the other to turn around. In the dark, it crawls about, circling in a wide arc. When it touches an obstacle, it backs off a bit, turns slightly and moves forward again; it will do this until it gets around the obstacle. When its photoelectric eye sees a light, the turning motor shuts off and the testudo advances straight toward the light. But its phototropism is under control; as it gets close to the light, the increase in brightness causes it to back away, so that it avoids the mistake of the moth. When its batteries run down, however, the now “hungry” testudo can crawl close enough to the light to make contact with a recharger placed near the light bulb. Once recharged, the testudo is again sensitive enough to back away from the bright area around the light.
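
  The testudo’s repertoire amounts to a simple sensing-and-acting loop, and it can be caricatured in a few lines of code. The sketch below is only that, a caricature: the sensor stubs, thresholds, and battery figures are invented, and Grey Walter’s machine was built from analog circuitry, not a program.

    # A toy re-creation of the testudo's behavior cycle described above.
    # Sensors and thresholds are invented; the real device was analog hardware.

    import random

    class Testudo:
        def __init__(self):
            self.battery = 1.0

        # Invented sensor stubs standing in for the touch switch and photocell.
        def touching_obstacle(self): return random.random() < 0.1
        def light_level(self):       return random.random()   # 0 = dark, 1 = at the bulb

        # Invented actuator stubs for the drive motor and the turning motor.
        def back_up(self):       print("back up")
        def turn_slightly(self): print("turn slightly")
        def move_forward(self):
            print("move forward")
            self.battery -= 0.01

        def step(self):
            LOW_BATTERY, SEES_LIGHT, TOO_BRIGHT = 0.2, 0.3, 0.8
            light = self.light_level()
            if self.touching_obstacle():
                # Back off a bit, turn slightly, and try again until it is around.
                self.back_up()
                self.turn_slightly()
            elif light > TOO_BRIGHT and self.battery > LOW_BATTERY:
                # Phototropism under control: back away and avoid the moth's mistake.
                self.back_up()
            elif light > SEES_LIGHT:
                # Turning motor off: head straight for the light. When "hungry,"
                # it keeps going right up to the recharger beside the bulb.
                self.move_forward()
                if self.battery < LOW_BATTERY and light > TOO_BRIGHT:
                    self.battery = 1.0   # contact with the recharger
            else:
                # In the dark: crawl about, circling in a wide arc.
                self.turn_slightly()
                self.move_forward()

    robot = Testudo()
    for _ in range(10):
        robot.step()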

  And yet neither can we entirely underplay the influence of inspiration. In the early 1950s, a Columbia undergraduate, Joseph F. Engelberger, read Asimov’s I, Robot and was, as a result, infected with a life-long enthusiasm for work with robots.

  In 1956, Engelberger met George C. Devol, Jr., who, two years before, had obtained the first patent for an industrial robot. He called its control and computer memory system universal automation—or unimation, for short.

  Together, Engelberger and Devol founded Unimation, Inc., and Devol then developed thirty to forty related patents.

  None of these were practical at first, because the robots could not do their work unless they were computerized, and computers were still too bulky and expensive to make robots competitive at any task. It was only with the development of the microchip that the robot designs of Unimation became attractive in the marketplace. Unimation quickly became the most important and most profitable robotics firm in the world.

  With that began the era of the industrial robot. The industrial robot does not have the appearance of the classical robot; there is nothing obviously humanoid about it. It is essentially a computerized arm, which can perform simple operations with great precision and which possesses, because of its computerization, a certain flexibility.

  Industrial robots have found their greatest use so far on assembly lines (particularly those in Japan along which automobiles are assembled). For the first time, we have machines that are complex enough and “talented” enough to do jobs that until now required human judgment—but so little human judgment that the human brain, caught in the necessity of doing a repetitious and stultifying job, does not reach anything near its potential and is probably damaged as a result.

  It is clearly useful to have machines do jobs that make insufficient demands on the human brain (though demands too great for anything short of robots) and thus leave human beings the possibility of devoting themselves to more creative labors that will stretch and expand their minds.

  Already, however, the use of industrial robots is showing uncomfortable side effects in the short term. Human workers are being replaced. We are probably headed for a painful transition period during which society will be faced with the problem of taking care of the new unemployed; of re-educating or retraining them to do other work; or, where that is impossible, of finding some useful occupation they can do; or, where all else fails, of simply supporting them.

  Presumably, as time passes, a new generation educated to be part of a computerized, robotized society will come into being, and matters will improve.

  And yet technology will continue to advance. There is a strong push in favor of developing robots with greater abilities, with more flexibility, with the ability to “see,” “speak,” “hear.” What’s more, home robots are being developed—robots of more humanoid appearance that can be useful about the house and perform some of the functions classically assigned to human servants. (Joseph Engelberger has a prototype of such a device which he hopes before long to introduce into his home: something that will be capable of accepting coats, passing out drinks, and performing other simple tasks. He calls it Isaac.)

  Can we help but wonder whether computers and robots may not eventually replace any human ability? Whether they may not replace human beings by rendering them obsolete? Whether artificial intelligence, of our own creation, is not fated to be our replacement as dominant entities on the planet?

  One might be fatalistic about this. If it is inevitable, then it is inevitable. Besides, the human record is not a good one, and we are in the process, perhaps, of destroying ourselves (along with much of life) in any case. Perhaps it is not computer replacement we should fear, but the possibility that it will not come along quickly enough.

  We might even feel triumphant about it. What achievement could be grander than the creation of an object that surpasses the creator? How could we consummate the victory of intelligence more gloriously than by passing on our heritage, in triumph, to a greater intelligence—of our own making?

 
