Table LXXIX
A reproduction of the entry MOTHER from the zerolang dictionary predicted for the year 2190 ± 5 years (according to Zwiebulin and Courdlebye).
MOTHER fem. noun. 1. MOdern THERapy, contemp. med. treatment, esp. psych. Num. var. incl. MOTHERKIN, also MOTHERKINS, ther. concerned w. fam. relationships; MUM, silent ther.; MAMMA, breast ther.; MUMMY, posthumous ther.; MAMMY, var. of ther. practiced in Southern U.S.A. 2. Fem. parent (arch.)
Table LXXX
Visual diagram* of linguistic evolution according to Vroedel and Zwiebulin
Explanation. The x-axis (or horizontal) indicates time in millennia. The y-axis (or vertical) indicates conceptual capacity in bits per sem per second of articulational flow (in units of epsilon space).
*Not a prognosis!
PROGNORRHOEA or prognostic diarrhoea, a children's disease of twentieth-century futurology (v. PRAPROGNOSTICS), which led to essential prognoses being drowned in inessential ones as a result of decategorization (q.v.) and created the so-called pure prognostic hum. (See also: HUMS, also PROGNOSIS DISTURBANCES.)
PROLEPSY or disappearancing, the methodology (theory and technology) of disappearing, discov. 1998, first applied 2008. The technology of p. is based on a utilization of the TUNNEL EFFECT (q.v.) in the black holes of the Cosmos. For, as Jeeps, Hamon, and Wost discovered in 2001, the Cosmos includes a Paraversum as well as a Negaversum, negatively adjoining the Reversum. Therefore the whole Cosmos bears the name POLY-VERSUM (q.v.) and not (as previously) UNIVERSUM (q.v.). Bodies are shifted from our Paraversum to the Negaversum by the proleptoral system. Disappearancing is used as a technique for remov-
GOLEM XIV
FOREWORD BY IRVING T. CREVE, M.A., PH.D.
INTRODUCTION BY THOMAS B. FULLER II, GENERAL, U.S. ARMY, RET.
AFTERWORD BY RICHARD POPP
INDIANA UNIVERSITY PRESS
2047
Foreword
To pinpoint the moment in history when the abacus acquired reason is as difficult as saying exactly when the ape turned into man. And yet barely one human life span has elapsed since the moment when, with the construction of Vannevar Bush's differential-equation analyzer, intellectronics began its turbulent development. ENIAC, which followed toward the close of World War II, was the machine that gave rise—prematurely, of course—to the name "electronic brain." ENIAC was in fact a computer and, when measured on the tree of life, a primitive nerve ganglion. Yet historians date the age of computerization from it. In the 1950s a considerable demand for calculating machines developed. One of the first concerns to put them into mass production was IBM.
Those devices had little in common with the processes of thought. They were used as data processors in the field of economics and by big business, as well as in administration and science. They also entered politics: the earliest were used to predict the results of Presidential elections. At more or less the same time the RAND Corporation began to interest military circles at the Pentagon in a method of predicting occurrences in the international politico-military arena, a method relying on the formulation of so-called "scenarios of events." From there it was only a short distance to more versatile techniques like the CIMA, from which arose, two decades later, the applied algebra of events termed (not too felicitously) politicomatics. The computer was also to reveal its strength in the role of Cassandra when, at the Massachusetts Institute of Technology, people first began to prepare formal models of world civilization in the famous "Limits to Growth" project. But this was not the branch of computer evolution which was to prove the most important by the end of the century. The Army had been using calculating machines since the end of World War II, as part of the system of operational logistics developed in the theaters of that war. People continued to be occupied with considerations on a strategic level, but secondary and subordinate problems were increasingly being turned over to computers. At the same time the latter were being incorporated into the U.S. defense system.
These computers constituted the nerve centers of a transcontinental warning network. From a technical point of view, such networks aged very quickly. The first, called CONELRAD, was followed by numerous successive variants of the EWAS (Early Warning System) network. The attack and defense potential was then based on a system of movable (underwater) and stationary (underground) ballistic missiles with thermonuclear warheads, and on rings of sonar-radar bases. In this system the computers fulfilled the functions of communications links—purely executive functions.
Automation entered American life on a broad front, right from the "bottom"—that is, from those service industries which could most easily be mechanized, because they demanded no intellectual activity (banking, transport, the hotel industry). The military computers performed narrow specialist operations, searching out targets for combined nuclear attack, processing the results of satellite observations, optimizing naval movements, and correlating the movements of MOLS (Military Orbital Laboratories—massive military satellites).
As was to be expected, the range of decisions entrusted to automatic systems kept on growing. This was natural in the course of the arms race, though not even the subsequent detente could put a brake on investment in this area, since the freeze on the hydrogen bomb race released substantial budget allocations which, after the conclusion of the Vietnam war, the Pentagon had no wish to give up altogether. But even the computers then produced—of the tenth, eleventh, and eventually twelfth generation—were superior to man only in their speed of operation. It also became clear that, in defense systems, man is an element that delays the appropriate reactions.
So it may be considered natural that the idea of counteracting the trend in intellectronic evolution described above should have arisen among Pentagon experts, and particularly those scientists connected with the so-called military-industrial complex. This movement was commonly called "anti-intellectual." According to historians of science and technology, it derived from the midcentury English mathematician A. Turing, the creator of the "universal automaton" theory. This was a machine capable of performing basically every operation which could be formalized—in other words, it was endowed with a perfectly reproducible procedure. The difference between the "intellectual" and "anti-intellectual" current in intellectronics boils down to the fact that Turing's (elementarily simple) machine owes its possibilities to a program. On the other hand, in the works of the two American "fathers" of cybernetics, N. Wiener and J. Neumann, the concept arose of a system which could program itself.
Obviously we are presenting this divergence in a vastly simplified form, as a bird's-eye view. It is also clear that the capacity for self-programming did not arise in a void. Its necessary precondition was the high complexity characteristic of computer construction. This differentiation, still unnoticeable at midcentury, became a great influence on the subsequent evolution of mathematical machines, particularly with the firm establishment and hence the independence of such branches of cybernetics as psychonics and the polyphase theory of decisions. The 1980s saw the emergence in military circles of the idea of fully automatizing all paramount activities, those of the military leadership as well as political-economic ones. This concept, later known as the "Sole-Strategist Idea," was to be given its first formulation by General Stewart Eagleton. He foresaw—over and above computers searching for optimal attack targets, over and above a network of communications and calculations supervising early warning and defense, over and above sensing devices and missiles—a powerful center which, during all phases preceding the extreme of going to war, could utilize a comprehensive analysis of economic, military, political, and social data to optimize continuously the global situation of the U.S.A. and thereby guarantee the United States supremacy on a planetary scale, including its cosmic vicinity, which now extended to the moon and beyond.
Subsequent advocates of this doctrine maintained that it was a necessary step in the march of civilization, and that this march constituted a unity, so the military sector could not be arbitrarily excluded from it. After the escalation of blatant nuclear force and the range of missile carriers had ceased, a third stage of rivalry ensued, one supposedly less threatening and more perfect, being an antagonism no longer of blatant force, but of operational thought. Like force before, thought was now to be subjected to nonhumanized mechanization.
Like its atomic-ballistic predecessors, this doctrine became the object of criticism, especially from centers of liberal and pacifist thought, and it was oppugned by many distinguished representatives from the world of science, including specialists in psychomatics and intellectronics; but ultimately it prevailed, as shown by acts of law passed by both houses of Congress. Moreover, as early as 1986 a USIB (United States Intellectronical Board) was created, subordinate to the President and with its own budget, which in its first year amounted to $19 billion. These were hardly humble beginnings.
With the help of an advisory body semiofficially delegated by the Pentagon, and under the chairmanship of the Secretary of Defense, Leonard Davenport, the USIB contracted with a succession of big private firms such as International Business Machines, Nortronics, and Cybermatics to construct a prototype machine, known by the code name HANN (short for Hannibal). But thanks to the press and various "leaks," a different name—ULVIC (Ultimative Victor)—was generally adopted. By the end of the century further prototypes had been developed. Among the best-known one might mention such systems as AJAX, ULTOR, GILGAMESH, and a long series of Golems.
Thanks to an enormous and rapidly mounting expenditure of labor and resources, the traditional informatic techniques were revolutionized. In particular, enormous significance must be attached to the conversion from electricity to light in the intramachine transmission of information. Combined with increasing "nanization" (this was the name given to successive steps in microminiaturizing activity, and it may be well to add that at the close of the century 20,000 logical elements could fit into a poppy seed!), it yielded sensational results. GILGAMESH, the first entirely light-powered computer, operated a million times faster than the archaic ENIAC.
"Breaking the intelligence barrier," as it was called, occurred just after the year 2000, thanks to a new method of machine construction also known as the "invisible evolution of reason." Until then, every generation of computers had actually been constructed. The concept of constructing successive variants of them at a greatly accelerated (by a thousand times!) tempo, though known, could not be realized, since the existing computers which were to serve as "matrices" or a "synthetic environment" for this evolution of Intelligence had insufficient capacity. It was only the emergence of the Federal Informatics Network that allowed this idea to be realized. The development of the next sixty-five generations took barely a decade; at night—the period of minimal load—the federal network gave birth to one "synthetic species of Intelligence" after another. These were the progeny of "accelerated computerogenesis," for, having been bred by symbols and thus by intangible structures, they had matured into an informational substratum—the "nourishing environment" of the network.
But following this success came new difficulties. After they had been deemed worthy of being encased in metal, AJAX and HANN, the prototypes of the seventy-eighth and seventy-ninth generation, began to show signs of indecision, also known as machine neurosis. The difference between the earlier machines and the new ones boiled down, in principle, to the difference between an insect and a man. An insect comes into the world programmed to the end by instincts, which it obeys unthinkingly. Man, on the other hand, has to learn his appropriate behavior, though this training makes for independence: with determination and knowledge man can alter his previous programs of action.
So it was that computers up to and including the twentieth generation were characterized by "insect" behavior: they were unable to question or, what is more, to modify their programs. The programmer "impregnated" his machine with knowledge, just as evolution "impregnates" an insect with instinct. In the twentieth century a great deal was still being said about "self-programming," though at the time these were unfulfilled daydreams. Before the Ultimative Victor could be realized, a Self-perfecting Intelligence would in fact have to be created; AJAX was still an intermediate form, and only with GILGAMESH did a computer attain the proper intellectual level and enter the psychoevolutionary orbit.
The education of an eightieth-generation computer by then far more closely resembled a child's upbringing than the classical programming of a calculating machine. But beyond the enormous mass of general and specialist information, the computer had to be "instilled" with certain rigid values which were to be the compass of its activity. These were higher-order abstractions such as "reasons of state" (the national interest), the ideological principles incorporated in the U.S. Constitution, codes of standards, the inexorable command to conform to the decisions of the President, etc. To safeguard the system against ethical dislocation and betraying the interests of the country, the machine was not taught ethics in the same way people are. Its memory was burdened by no ethical code, though all such commands of obedience and submission were introduced into the machine's structure precisely as natural evolution would accomplish this, in the sphere of vital urges. As we know, man may change his outlook on life, but cannot destroy the elemental urges within himself (e.g., the sexual urge) by a simple act of will. The machines were endowed with intellectual freedom, though this was based on a previously imposed foundation of values which they were meant to serve.
At the Twenty-first Pan-American Psychonics Congress, Professor Eldon Patch presented a paper in which he maintained that, even when impregnated in the manner described above, a computer may cross the so-called "axiological threshold" and question every principle instilled in it—in other words, for such a computer there are no longer any inviolable values. If it is unable to oppose imperatives directly, it can do this in a roundabout way. Once it had become well known, Patch's paper stirred up a ferment in university circles and a new wave of attacks on ULVIC and its patron, the USIB, though this activity exerted no influence on USIB policy.
That policy was controlled by people biased against American psychonics circles, which were considered to be subject to left-wing liberal influences. Patch's propositions were therefore pooh-poohed in official USIB pronouncements and even by the White House spokesman, and there was also a campaign to discredit Patch. His claims were equated with the many irrational fears and prejudices which had arisen in society at that time. Besides, Patch's brochure could not begin to match the popularity of the sociologist E. Lickey's best seller, Cybernetics—Death Chamber of Civilization, which maintained that the "ultimative strategist" would subordinate the whole of humanity either on his own or by entering into a secret agreement with an analogous Russian computer. The result, according to Lickey, would be an "electronic duumvirate."