American Experiment

by James MacGregor Burns


  “Frankly, I can see only the revolutionary way of non-cooperation in the sense of Gandhi’s. Every intellectual who is called before one of the committees ought to refuse to testify, i.e., he must be prepared for jail and economic ruin, in short, for the sacrifice of his personal welfare in the interest of the cultural welfare of his country.” Predictably, sundry newspapers attacked him for this, the New York Times tut-tutting him for using one evil to attack another. Einstein was disappointed by the pallid support from academics. If people were willing to testify, he said, “the intellectuals of this country deserve nothing better than the slavery which is intended for them.” Einstein spoke feelingly; for years he and his secretary were under investigation by the FBI, suspected of “pro-Communist” activities.

  Throughout his life Einstein’s highest political commitment was the search for international understanding and world peace. As chauvinism rose to new heights during World War I, he had signed a “Manifesto to Europeans” urging all those whom Goethe had called “good Europeans” to join forces for lasting unity. Forty years later, he served as a leader in the world government movement and as chairman of the Emergency Committee of Atomic Scientists to appeal for control of nuclear arms. In April 1955, during the last week of his life, he agreed with Bertrand Russell on a manifesto that proposed a global conference of scientists to assess the peril of war. This later led to the Pugwash conferences attended by scientists from Britain, the United States, the Soviet Union, and other countries.

  The scientist Einstein’s supreme commitment was to a unity and coherence that transcended even global politics—the grand unification of physical theory. His work on the general theory of relativity had struck another eminent German scientist, Max Born, as “the greatest feat of human thinking about nature, the most amazing combination of philosophical penetration, physical intuition, and mathematical skill”—even more, a “great work of art.” Both before and after he emigrated to the United States, Einstein labored over his unified field theory, which he viewed as the capstone of the inspired work of Maxwell, Faraday, and others. His general relativity theory, he explained to the press, reduced to one formula all laws governing space, time, and gravitation. The purpose of his continuing work was “to further this simplification, and particularly to reduce to one formula the explanation of the field of gravity and of the field of electro-magnetism.”
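  The “one formula” he described to the press is presumably the field equation of general relativity, which in modern notation reads

\[
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
\]

where the left-hand side describes the curvature of spacetime and the right-hand side the distribution of matter and energy within it. The unified field theory he pursued for the rest of his life sought a comparably compact law that would absorb electromagnetism as well.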

  He died without achieving this supreme goal. The immense mathematical and conceptual steps still eluded him. But he died expecting that the goal would be achieved. Everything in him, as a supreme embodiment of the Western Enlightenment, cried out that there must be some ultimate harmony in the physical as well as in the moral and intellectual universe. He was convinced, he liked to say, that God would not leave things to blind chance—that the “Old One” did not “throw dice” with the world. When Einstein died, many physicists doubted the possibility of a unified theory. Thirty years later, however, his faith was being vindicated. Scientists were more hopeful than ever that, following a somewhat different path to the same goal, they were on the verge of discovering the “Grand Unified Theory.” Einstein would have liked the acronym they applied to it—GUT.

  Three decades after Einstein’s death it seemed a hundredfold more likely that science would be unified than would the cultural and political thought of the world, or even of the West. In 1959 the English scientist and novelist C. P. Snow asserted in a Cambridge University lecture that Western society had become seriously fragmented, lacking even the pretense of a common culture. In particular, scientists and “literary intellectuals” had lost even the capacity to communicate with one another on the plane of serious common concerns. Soon Snow was set upon by those who insisted that there was really only one culture—the bourgeois capitalist one—or three, or twenty, or fifty, and by those who held that “culture” itself had become an ambiguous, problematic, and therefore, in Snow’s sense, meaningless term. But the press and public responded to his ideas with an intensity and excitement that indicated, as Snow later noted, a deep underlying concern with intellectual and cultural fragmentation.

  A few years after Snow’s lecture Hans Morgenthau, in his Science: Servant or Master?, described an even deeper and more pervasive problem than Snow’s. He found a conflict within the mind of man rather than between science and the mind of man—a “contrast between the achievements and promises of science and their acceptance by humanity, on the one hand,” and a “malaise” that had become a “universal phenomenon encompassing humanity.” This malaise stemmed from the irrelevance of institutional scholarship to the concerns of society and of people—especially of the young, who during these years, the early 1970s, were rebelling against that scholarship and its theft of their individual autonomy. In effect Morgenthau took Tocqueville’s old insight into the intellectual gulf in America—the gap between cloudy, high-blown rhetoric and day-to-day “pragmatic” expediency—and broadened it into even more dire and far-reaching disjunctions: actions directed by utilitarian objectives rather than by effective theory, ideas measured by technological relevance and applicability rather than by a transcending grand theory, and theory governed more by “smart truth”—what is knowable—than by what is worth knowing, as measured by transcending moral principles derived from authentic human needs.

  As a political theorist, Morgenthau emphasized the applicability of these gloomy findings to the physical danger resulting from scientific progress and the inability of politics to master that danger. “When science fails to protect man from the forces of nature,” he wrote, “its failure can be justified by the ineluctable limits of human powers confronting those forces. But war and misery are not the result of the blind forces of nature. They are the result of purposeful human action which, in its march toward the mastery of nature, has turned against man, his freedom, and his life. This shocking paradox of man’s ability to master nature and his helplessness to control the results of that mastery, his supremacy over what is inanimate and alien and his impotence in the face of man, is at the root of the contemporary revolt against science, society, and politics-as-usual.”

  Morgenthau was writing in the wake of almost three decades of memorable collisions between science and politics. Most of the scientists who had developed the A-bomb had wanted to avert or postpone its use, or at least have it dropped into an uninhabited area, but to no avail. They had chafed under military restrictions at Los Alamos and other scientific installations. They had heard with disbelief that army occupation forces in Japan had destroyed with welding torches and explosives five cyclotrons that the Japanese, lacking uranium, had developed chiefly for biological and medical research. Hundreds of scientists had mobilized to fight off the May-Johnson bill, which they feared would put atomic energy excessively under the control of the government in general and the military in particular, with dire implications for freedom of scientific research and experimentation. Even though a less threatening act passed, by the early 1950s many scientists outside the old leadership corps represented by Vannevar Bush and James B. Conant saw the national program for the physical sciences, in the words of science historian Daniel J. Kevles, as “dominated outside of atomic energy by the military, its dispensations concentrated geographically in the major universities, its primary energies devoted to the chief challenges of national defense and fundamental physics.” This focus might be necessary for scientific progress—was it enough?

  Many scientists had plunged into politics after the war, intent not only on controlling the atom but on banishing force as a means of settling disputes among nations. Although it might have appeared at first, wrote historian Alice Kimball Smith, that “the evangelical zeal with which scientists embarked on public education and lobbying was entirely out of character with the rationalist temper of their calling, yet their moral earnestness often had deep roots in backgrounds of Judaism or evangelical Protestantism.” Scientists translated their moral concern into the practical tasks of speaking, testifying, raising money, and organizing the Federation of American Scientists. Even so, they were hardly prepared for the vagaries of transactional interest-group politics. Like other pressure groups, scientists wanted government support without government regulation; government wanted scientific progress at bargain rates.

  In the end it was evident that “something between seduction and rape repeatedly occurred,” science editor Daniel Greenberg concluded, but it was not always clear which side was the aggressor and which the victim. Depending on the times, scientists were now villains, now heroes. Sometimes they could be both. J. Robert Oppenheimer, the director of the research project at Los Alamos, became an American hero as the “father of the A-bomb.” He became controversial after fighting for international control of atomic energy, wobbling on the May-Johnson bill, opposing and then accepting development of the hydrogen bomb, and disregarding official security regulations. Edward Teller, “father of the H-bomb,” suspected him of being a communist. The Atomic Energy Commission in 1954, at the height of the McCarthy furor, stripped him of his security clearance. Less than a decade later, on December 2, 1963, with Teller and others in the audience, President Johnson presented Oppenheimer with the AEC’s prestigious Fermi Award that Kennedy had decided to grant him. Oppenheimer in his subdued response reminded his listeners that Jefferson had spoken of the “brotherly” spirit uniting the votaries of science. “We have not, I know,” Oppenheimer went on, “always given evidence of that brotherly spirit of science.” This was a gentle reminder of what Greenberg described as the political fragmentation of American science.

  Those who knew of Einstein’s life during the opening years of the century marveled at the contrast between his daily occupations at the Swiss Patent Office and his scientific work outside it. As a “technical expert, third class, provisional, lowest salary,” he spent the day examining models of household appliances and farm implements, cameras and typewriters, and various engineering devices that came in from the inventive Swiss. During his glorious “eight hours of idleness plus a whole Sunday,” as he described it to a friend, he worked on abstract and advanced theoretical concepts that would revolutionize scientific thought. It is very possible, however, that the kinds of gadgets and gimmicks he examined during the day had even more influence on Western life during the century ahead than his most creative scientific theorizing.

  This influence was evident not only in the immense number of mechanical devices that flooded first the Western world and then most of the globe during the twentieth century but also in the technological systems that braided societies—transportation, communications, information, medical, military. What the public tended to see and use as an array of convenient and time-saving devices were but the day-to-day manifestations of semi-autonomous entities that cut across whole cultures. Thus, something that started out as the horseless carriage, Michael Maccoby noted, became highway networks, plus a petroleum industry, plus rubber plantations, plus auto workers’ unions. In earlier times factories financed by capitalists had made goods; in the most recent era laboratories with public and private money manufactured whole technological systems, such as Morton Thiokol’s production of the space shuttle’s booster rockets. At the start of the 1980s Bell Laboratories in its eighteen locations had an annual budget of over $1 billion and 20,000 employees, of whom 2,500 held doctorates.

  During the 1980s the earlier furors over mechanization, scientific management, and automation were giving way to debate over the consequences of the information revolution. In fact, the “revolution” was a culmination of a long process stretching back to the early industrial revolution, even to Gutenberg, and embracing hundreds of developments from the invention of simple pencil erasers to the telegraph, the typewriter, the punch-card tabulating machine, word processors, computers, and the other bewildering electronic devices adorning banks, offices, factories, and, increasingly, homes. The new machines were so impressive in their capacities and versatile in their uses that once again the warning bells sounded: technology was threatening to replace human mind and feeling, information processing was no substitute for ideas, the finest computer could not capture the poetry and joy and nuances of life, the rising generation of students might even be “seriously hampered in its capacity to think through the social and ethical questions that confront us” if educators were swept into the computer cult. Even defenders of the computer granted, in J. David Bolter’s words, that it would foster “a general redefinition of certain basic relationships: the relationship of science to technology, of knowledge to technical power, and, in the broadest sense, of mankind to the world of nature.”

  Others raised the same question about technology that Morgenthau had about science: servant or master? Granted that machines could not master anything except at the command of the human beings running them, whose interests did the new technology serve? Presumably the owners and managers of the technology. Possibly the great mass of technical people operating the machines, who tended, David F. Noble wrote, to “internalize and even consciously adopt the outlook of their patrons, an outlook translated into professional habit through such mechanisms as education, funding, reward-structures, and peer pressure.” Labor? Some argued that the skilled monitoring and constant adjustments required by the new cybernetic systems were making factory jobs more interesting and remunerative for skilled labor; others noted that, for the mass of workers, making or running information systems would continue to be boring and alienating. Women? Their relationship to technology had differed sharply from men’s all along, in Ruth S. Cowan’s view—as evidenced even in the writing of the history of technology.

  “Women menstruate, parturate, and lactate; men do not,” Cowan wrote. Technologies relating to these processes had long been developed: pessaries, sanitary napkins, tampons, intrauterine devices, childbirth anesthesia, artificial nipples, bottle sterilizers, pasteurized and condensed milks. Where, Cowan asked, were the histories of these female technologies? They had yet to be written. Even more, she wrote, women had been culturally discouraged from playing a major role in general technological change. They had experienced technology and science as consumers, not producers. Hence it was not surprising that an upsurge of “antitechnology” attitudes among women during the 1970s was correlated closely with the upsurge in women’s political consciousness, or that women might carry an unspoken hostility to science and technology into the political arena.

  Technology as a whole had long been deeply enmeshed with politics and government, though often in complex, mysterious, and unseen ways. The design of bridges—as against the awarding of contracts for the building of bridges—would have seemed a virtually nonpolitical act, at least until the days of Robert Moses, the builder of New York highways, parks, and bridges. Moses, his biographer Robert Caro reported, built the bridges over Long Island parkways with very low clearances in order to discourage buses that would allow hordes of New York’s poor—especially blacks—to invade his beloved Jones Beach. The clearances admitted middle-class people with private cars. This politically shrewd, if morally reprehensible, decision was taken without undue publicity and fuss. Many of Moses’s “monumental structures of concrete and steel embody a systematic social inequality,” according to Langdon Winner, “a way of engineering relationships among people that, after a time, become just another part of the landscape.”

  By the late 1980s the information revolution was still shrouding the play of technological politics. With their emphasis on interaction, the processes of linkage, feedback, equilibrium, cybernetics, and information systems put a premium on harmony and stability, whereas politics thrived on conflict. The massive information and other technological systems fit their operators into the internal consensus and equilibrium of the systems, thus presenting unified ranks to the often hostile outside world. But internal conflicts broke out—between employers and employees, among doctors and administrators and nurses, railroads and bus companies and air carriers, army, marine, and naval officers. And the systems were not wholly benign; some posed threats to the whole society, as in the case of arms building, environmental pollution, traffic in illegal drugs, nuclear power plants.

  These conflicts and threats were catapulted into the political process, as technologists transformed themselves into pressure groups competing with rival interests. Compromises were arranged, regulations imposed or withdrawn or changed, judgments made and unmade by government. Scientists manifested a growing concern as to whether the American political process could resolve not only routine, day-to-day interest-group conflicts but the far more complex task of helping science and technology fortify, rather than threaten, core American values. If, as an MIT professor of electrical engineering feared, the American culture had “a weak value system,” would it therefore be “disastrously vulnerable to technology”? If, as other scientists contended, they needed a wide measure of freedom in the early development of technologies, at what point should government step in to constrain their freedom to extend technology into such dangerous areas as genetic experimentation? If, as an MIT political scientist observed, whatever claims might be made for liberty, justice, or equality could be quickly neutralized by the answer, “ ‘Fine, but that’s no way to run a railroad’ (or steel mill, or airline, or communications system, and so on),” did such counterclaims of practical necessity eclipse the need for “moral and political reasoning”?
