Having ensured the safety of my secret thus, I can now serenely set about giving a complete report. I will confine myself to the first two volumes of Weapons Systems of the Twenty-first Century: The Upside-down Evolution, published in 2105. I could even name the authors (none of whom has been born yet), but what would be the point? The work is in three volumes. The first presents the development of weapons from the year 1944; the second explains how the nuclear-arms race gave rise to the “unhumanizing” of warfare by transferring the production of weapons from the defense industry to the battlefield itself; and the third deals with the effect this greatest military revolution had on the subsequent history of the world.
II
Soon after the destruction of Hiroshima and Nagasaki, American nuclear researchers founded the Bulletin of the Atomic Scientists. On its cover they put the picture of a clock with the minute hand at ten to midnight. Six years later, after the first successful tests of the hydrogen bomb, they moved the hand five minutes closer, and when the Soviet Union acquired thermonuclear weapons the hand was moved three minutes closer. The next move would mean the end of civilization. The Bulletin’s doctrine was “One World or None”: the world would either unite and be saved, or would perish.
With the nuclear build-up on both sides of the ocean and the placing of ever larger payloads of plutonium and tritium in ever more accurate ballistic missiles, none of the scientists who were the “fathers of the bomb” believed that peace — troubled as it was by local, conventional wars — would last to the end of the century. Atomic weapons had amended Clausewitz’s famous definition (“War is… a continuation of political activity by other means”), because now the threat of attack could substitute for the attack itself. Thus came about the doctrine of symmetrical deterrence known later as the “balance of terror.” Different American administrations advocated it with different initials. There was, for example, MAD (Mutual Assured Destruction), based on the “second-strike” principle (the ability of the country attacked to retaliate in force). The vocabulary of destruction was enriched in the next decades. There was “Total Strategic Exchange,” meaning all-out nuclear war; MIRV (Multiple Independently Targetable Re-entry Vehicle), a missile firing a number of warheads simultaneously, each aimed at a different target; PENAID (Penetration Aids), dummy missiles to fool the opponent’s radar; and MARY (Maneuverable Re-entry), a missile capable of evading antimissiles and of hitting the target within fifty feet of the programmed “ground zero.” But to list even a hundredth of the succession of specialized terms is impossible here.
Although the danger of atomic warfare increased whenever “equality” was lessened, and therefore the rational thing would seem to have been to preserve that equality under multinational supervision, the antagonists did not reach an agreement despite repeated negotiations.
There were many reasons, which the authors of Weapons Systems divide into two groups. In the first group they see the pressure of traditional thinking in international politics: tradition dictates that one call for peace but prepare for war, upsetting the existing balance until the upper hand is gained. The second group comprises factors, both political and nonpolitical, that operate independently of human thought; these have to do with the evolution of the major applied military technologies.
Each new possibility of technological improvement in weaponry became a reality, on the principle “If we don’t do it, they will.” Meanwhile, the doctrine of nuclear warfare went through changes. At one time it advocated a limited exchange of nuclear strikes (though no one knew exactly what the guarantee of the limitation would be); at another, its goal was the total annihilation of the enemy (all of whose population became “hostages” of a sort); at still another, it gave first priority to destroying the enemy’s military-industrial potential.
The ancient law of “sword and shield” still held sway in the evolution of weaponry. The shield took the form of hardening the silos that housed the missiles, while the sword to pierce the shield involved making the missiles increasingly accurate and, later, providing them with self-guidance systems and self-maneuverability. For atomic submarines the shield was the ocean; improved methods for their underwater detection constituted the sword.
Technological progress in defense sent electronic “eyes” into orbit, creating a high frontier of global reconnaissance able to spot missiles at the moment of launch. This was the shield that the new type of sword — the “killer satellite” — was to break, with a laser to blind the defending “eyes,” or with a lightninglike discharge of immense power to destroy the missiles themselves during their flight above the atmosphere.
But the hundreds of billions of dollars invested in building these higher and higher levels of conflict failed, ultimately, to produce any definite, and therefore valuable, strategic advantage — and for two very different, almost unrelated reasons.
In the first place, all these improvements and innovations, instead of increasing strategic security, offensive or defensive, only reduced it. Security was reduced because the global system of each superpower grew more and more complex, composed of an increasing number of different subsystems on land, sea, and air and in space. Military success required infallible communications to guarantee the optimum synchronization of operations. But all systems that are highly complex, whether they be industrial or military, biological or technological, whether they process information or raw material, are prone to breakdown, to a degree mathematically proportional to the number of elements that make up the system. Progress in military technology carried with it a unique paradox: the more sophisticated the weapon it produced, the greater was the role of chance (which could not be calculated) in the weapon’s successful use.
This fundamental problem must be explained carefully, because for a long time scientists were unable to base any technological activity on the randomness of complex systems. To counteract malfunctions in such systems, engineers introduced redundancy: power reserves, for example, or — as with the first American space shuttles (like the Columbia) — the doubling, even quadrupling, of parallel onboard computers. Total reliability is unattainable: if a system has a million elements and each element will malfunction only one time out of a million, a breakdown is certain.
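A back-of-the-envelope calculation, using only the illustrative figures above and assuming the elements fail independently (an assumption of mine, not of the authors), makes the point concrete. With n elements, each failing with probability p, the chance that at least one fails is

    P(breakdown) = 1 − (1 − p)^n = 1 − (1 − 10⁻⁶)^(10⁶) ≈ 1 − 1/e ≈ 0.63,

and the expected number of failed elements is np = 1. A single run fails with odds of roughly two to one; over continued operation, breakdown is indeed a certainty for all practical purposes.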
The bodies of animals and plants consist of trillions of functioning parts, yet life copes with the phenomenon of inevitable failure. In what way? The experts call it the construction of reliable systems out of unreliable components. Natural evolution uses various tactics to counteract the fallibility of organisms: the capacity for self-repair or regeneration; surplus organs (this is why we have two kidneys instead of one, why a half-destroyed liver can still function as the body’s central chemical-processing plant, and why the circulatory system has so many alternate veins and arteries); and the separation of control centers for the somatic and psychic processes. This last phenomenon gave brain researchers much trouble: they could not understand why a seriously injured brain still functioned but a slightly damaged computer refused to obey its programs.
Merely doubling control centers and parts, as was done in twentieth-century engineering, led to absurdity in actual construction. If an automated spaceship going to a distant planet were built according to the directive of multiplying pilot computers, as in the shuttles, then it would have to contain — in view of the duration of the flight — not four or five but possibly fifty such computers. They would operate not by “linear logic” but by “voting”: once the individual computers ceased functioning identically and thus diverged in their results, one would have to accept, as the right result, what was reached by the majority. But this kind of engineering parliamentarianism led to the production of giants burdened with the woes typical of democracies: contradictory views, plans, and actions. To such pluralism, to such programmed elasticity, there had to be a limit.
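For illustration only, a minimal sketch of that “voting” logic (written here in Python; the function name, the outright-majority rule, and the deadlock behavior are my own assumptions, not details given in the text):

    from collections import Counter

    def vote(results):
        # One entry per redundant onboard computer.
        # Accept the value backed by an outright majority; if the computers
        # split with no majority, report the deadlock rather than guess
        # (the “engineering parliamentarianism” problem described above).
        value, count = Counter(results).most_common(1)[0]
        if count * 2 > len(results):
            return value
        raise RuntimeError("no majority: redundant computers disagree")

    # Five computers, one of them faulty:
    print(vote([42, 42, 42, 41, 42]))  # prints 42

The deadlock case is exactly the limit the text speaks of: past a certain point of divergence, the parliament of machines can no longer produce a single answer.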
We should have begun much earlier — said the twenty-first-century specialists — to learn from biological evolution, whose several-billion-year existence demonstrates optimal strategic engineering. A living organism is not guided by “totalitarian centralism” or “democratic pluralism,” but by a strategy much more complex. Simplifying, we might call it a compromise between concentration and separation of the regulating centers.
Meanwhile, in the late-twentieth-century phase of the arms race, the role of unpredictable chance increased. When hours (or days) and miles (or hundreds of miles) separate defeat from victory, and therefore an error of command can be remedied by throwing in reserves, or retreating, or counterattacking, then there is room to reduce the element of chance. But when micromillimeters and nanoseconds determine the outcome, then chance enters like a god of war, deciding victory or defeat; it is magnified and lifted out of the microscopic scale of atomic physics. The fastest, best weapons system comes up against the Heisenberg uncertainty principle, which nothing can overcome, because that principle is a basic property of matter in the Universe. It need not be a computer breakdown in satellite reconnaissance or in missiles whose warheads parry defenses with laser beams; if a series of electronic defensive impulses is even a billionth of a second slow in meeting a similar series of offensive impulses, that is enough for a toss of the dice to decide the outcome of the Final Encounter.
Unaware of this state of affairs, the major antagonists of the planet devised two opposite strategies. One can call them the “scalpel” and the “hammer.” The constant escalation of payload megatonnage was the hammer; the improvement of detection and swift destruction in flight was the scalpel. They also reckoned on the deterrent of the “dead man’s revenge”: the enemy would realize that even in winning he would perish, since a totally obliterated country would still respond — automatically and posthumously — with a strike that would make defeat universal. Such was the direction the arms race was taking, and such was its destination, which no one wanted but no one knew how to avoid.
How does the engineer minimize error in a very large, very complex system? He does trial runs to test it; he looks for weak spots, weak links. But there was no way of testing a system designed to wage global nuclear war, a system made up of surface, submarine, air-launched, and satellite missiles, antimissiles, and multiple centers of command and communications, ready to loose gigantic destructive forces in wave on wave of reciprocal atomic strikes. No maneuvers, no computer simulation, could re-create the actual conditions of such a battle.
Each new weapons system was marked by increasing speed of operation, particularly in its decision-making function (to strike or not to strike, where, how, with what force held in reserve, at what risk, etc.), and this increasing speed also brought the incalculable factor of chance into play. Lightning-fast systems made lightning-fast mistakes. When a fraction of a second determined the safety or destruction of a region, a great metropolis, an industrial complex, or a large fleet, it was impossible to achieve military certainty. One could even say that victory had ceased to be distinguishable from defeat. In a word, the arms race was heading toward a Pyrrhic situation.
On the battlefields of yore, when knights in armor fought on horseback and foot soldiers met at close quarters, chance decided the life or death of individuals and military units. But the power of electronics, embodied in computer logic, made chance the arbiter of the fate of whole armies and nations.
Moreover — and this was quite a separate thing — blueprints for new, better weapons were developed so quickly that industry could not keep pace. Control systems, targeting systems, camouflage, maintenance and disruption of communications, the strike capability of so-called conventional weapons (a misleading term, really, and out of date) became anachronisms even before they were put into the field.
That is why, in the late eighties, production was frequently halted on new fighter planes and bombers, cruise missiles, anti-antimissiles, spy satellites, submarines, laser bombs, sonars, and radars. That is why prototypes had to be abandoned and why so much political debate seethed around successive weapons that swallowed huge budgets and vast human energies. Not only did each innovation turn out to be far more expensive than the one before, but many soon had to be written off as losses, and this pattern continued without letup. It seemed that technological-military invention per se was not the answer, but, rather, the speed of its industrial implementation. This phenomenon became, at the turn of the century, the latest paradox of the arms race. The only way to nullify its awful drain on the military appeared to be to plan weapons not eight or twelve years ahead, but a quarter of a century in advance — which was a sheer impossibility, requiring the prediction of new discoveries and inventions beyond the ken of the best minds of the day.
At the end of the twentieth century, the idea emerged of a new weapon that would be neither an atom bomb nor a laser gun but a hybrid of the two. Up to then, there were fission (uranium, plutonium) and fusion (thermonuclear, hydrogen-plutonium) bombs. The “old” bomb, in breaking nuclear bonds, unleashed every possible sort of radiation: gamma rays, X-rays, heat, and an avalanche of radioactive dust and lethal high-energy particles. The fireball, having a temperature of millions of degrees, emitted energy at all wavelengths. As someone said, “Matter vomited forth everything she could.” From a military standpoint it was wasteful, because at ground zero all objects turned into flaming plasma, a gas of atoms stripped of their electron shells. At the site of the explosion, stones, trees, houses, metals, bridges, and human bodies vaporized, and concrete and sand were hurled into the stratosphere in a rising mushroom of flames. “Conversion bombs” were a more efficient version of this weapon. They emitted what the strategists required in a given situation: either hard radiation — in which case it was called a “clean bomb,” striking only living things — or thermal radiation, which unleashed a firestorm over hundreds of square miles.
The laser bomb, however, was not actually a bomb; it was a single-charge laser gun, focusing a huge part of its force into a ray that could incinerate a city (from a high orbit), for example, or a rocket base, or some other important target (such as the enemy’s satellite defense screen). At the same time, the ray would turn the laser bomb itself into flaming fragments. But we will not go into more detail about such weapons, because instead of leading to further escalation, as was expected, they really marked its end.
It is worthwhile, however, to look at the atomic arsenals of twentieth-century Earth from a historical perspective. Even in the seventies, they held enough weapons to kill every inhabitant of the planet several times over. Given this overabundance of destructive might, the specialists favored a preventive strike, or making a second strike at the enemy’s stockpiles while protecting their own. The safety of the population was important but second in priority.
In the early fifties, the Bulletin of the Atomic Scientists printed a discussion in which the fathers of the bomb, physicists like Bethe and Szilard, took part. It dealt with civil defense in the event of nuclear war. A realistic solution would have meant evacuating the cities and building gigantic underground shelters. Bethe estimated the cost of the first phase of such a project to be twenty billion dollars, though the social and psychological costs were beyond reckoning. But it soon became clear that even a “return to the cave” would not guarantee the survival of the population, because the arms race continued to yield more powerful warheads and increasingly accurate missiles. The science fiction of the day painted gloomy and nightmarish scenes in which the degenerate remnants of humanity vegetated in concrete, multilevel molehills beneath the ruins of gutted cities. Self-styled futurologists (but all futurologists were self-styled) outdid one another in extrapolating, from existing atomic arsenals, future arsenals even more frightful. One of the better known of such speculations was Herman Kahn’s Thinking About the Unthinkable, an essay on hydrogen warfare. Kahn also thought up a “doomsday machine.” An enormous nuclear charge encased in a cobalt jacket could be buried by a nation in the depths of its own territory, in order to blackmail the rest of the world with the threat of “total planetary suicide.” But no one dreamed that, with political antagonisms still persisting, the era of atomic weapons would come to an end without ushering in either world peace or world annihilation.
During the early years of the twenty-first century, theoretical physics pondered a question that was thought to be crucial for the world’s continued existence: namely, whether or not the critical mass of uranides like uranium 235 and plutonium (that is, the mass at which an initiated chain reaction causes a nuclear explosion) was an absolute constant. If the critical mass could be influenced, particularly at a great distance, there might be a chance of neutralizing all warheads. As it turned out (and the physicists of the previous century had a rough idea of this), the critical mass could change. Under certain physical conditions, an explosive charge that had been critical ceased to be critical, and therefore did not explode. But the amount of energy needed to create such conditions was far greater than the power contained in all the atomic weapons combined. These attempts to neutralize atomic weapons were unsuccessful.
III
In the 1990s a new type of missile, popularly called the “F&F” (Fire & Forget), made its appearance. Guided by a programmed microcomputer, the missile sought its own target after being launched. Once activated, it was truly on its own. At the same time, “unhuman” espionage came into use, at first underwater. An underwater mine, equipped with sensors and memory, could keep track of the movements of ships sailing over it, distinguish commercial vessels from military, establish their tonnage, and transmit the information in code if necessary.
Combat readiness, in the affluent nations especially, evaporated. Young men of draft age considered such time-honored phrases as Dulce et decorum est pro patria mori (“It is sweet and fitting to die for one’s country”) to be completely ridiculous.
Meanwhile, new generations of weapons were rising in price exponentially. The airplane of the First World War was made of canvas, wood, and piano wire, with a couple of machine guns; landing gear and all, it cost about as much as a good automobile. A comparable airplane of the Second World War cost as much as thirty automobiles. By the end of the century, the price of a jet interceptor or a radar-proof bomber of the “Stealth” type was in the hundreds of millions of dollars. Aircraft for the year 2000 were expected to cost a billion apiece. At this rate, it was calculated that over the next eighty years each superpower would be able to afford only twenty to twenty-five new planes. Tanks were no cheaper. And an atomic aircraft carrier, which was like an antediluvian brontosaurus under fire, cost many billions. The carrier could be sunk by a single hit from an F&F superrocket, which could split over the target into a cluster of specialized warheads, each to strike at a different nerve center of the sea leviathan.