Making of the Atomic Bomb


by Richard Rhodes


  What the atom of each element is, whether it is a movement, or a thing, or a vortex, or a point having inertia, whether there is any limit to its divisibility, and, if so, how that limit is imposed, whether the long list of elements is final, or whether any of them have any common origin, all these questions remain surrounded by a darkness as profound as ever.92

  Physics worked that way, sorting among alternatives: all science works that way. The chemist Michael Polanyi, Leo Szilard’s friend, looked into the workings of science in his later years at the University of Manchester and at Oxford. He discovered a traditional organization far different from what most nonscientists suppose. A “republic of science,” he called it, a community of independent men and women freely cooperating, “a highly simplified example of a free society.” Not all philosophers of science, which is what Polanyi became, have agreed.93, 94 Even Polanyi sometimes called science an “orthodoxy.” But his republican model of science is powerful in the same way successful scientific models are powerful: it explains relationships that have not been clear.

  Polanyi asked straightforward questions. How were scientists chosen? What oath of allegiance did they swear? Who guided their research—chose the problems to be studied, approved the experiments, judged the value of the results? In the last analysis, who decided what was scientifically “true”? Armed with these questions, Polanyi then stepped back and looked at science from outside.

  Behind the great structure that in only three centuries had begun to reshape the entire human world lay a basic commitment to a naturalistic view of life. Other views of life dominated at other times and places—the magical, the mythological. Children learned the naturalistic outlook when they learned to speak, when they learned to read, when they went to school. “Millions are spent annually on the cultivation and dissemination of science by the public authorities,” Polanyi wrote once when he felt impatient with those who refused to understand his point, “who will not give a penny for the advancement of astrology or sorcery. In other words, our civilization is deeply committed to certain beliefs about the nature of things; beliefs which are different, for example, from those to which the early Egyptian or the Aztec civilizations were committed.”95

  Most young people learned no more than the orthodoxy of science. They acquired “the established doctrine, the dead letter.” Some, at university, went on to study the beginnings of method.96 They practiced experimental proof in routine research. They discovered science’s “uncertainties and its eternally provisional nature.” That began to bring it to life.97

  Which was not yet to become a scientist. To become a scientist, Polanyi thought, required “a full initiation.” Such an initiation came from “close personal association with the intimate views and practice of a distinguished master.” The practice of science was not itself a science; it was an art, to be passed from master to apprentice as the art of painting is passed or as the skills and traditions of the law or of medicine are passed.98, 99 You could not learn the law from books and classes alone. You could not learn medicine. No more could you learn science, because nothing in science ever quite fits; no experiment is ever final proof; everything is simplified and approximate.

  The American theoretical physicist Richard Feynman once spoke about his science with similar candor to a lecture hall crowded with undergraduates at the California Institute of Technology. “What do we mean by ‘understanding’ something?” Feynman asked innocently.100 His amused sense of human limitation informs his answer:

  We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we know every rule, however . . . what we really can explain in terms of those rules is very limited, because almost all situations are so enormously complicated that we cannot follow the plays of the game using the rules, much less tell what is going to happen next. We must, therefore, limit ourselves to the more basic question of the rules of the game. If we know the rules, we consider that we “understand” the world.

  Learning the feel of proof; learning judgment; learning which hunches to play; learning which stunning calculations to rework, which experimental results not to trust: these skills admitted you to the spectators’ benches at the chess game of the gods, and acquiring them required sitting first at the feet of a master.

  Polanyi found one other necessary requirement for full initiation into science: belief. If science has become the orthodoxy of the West, individuals are nevertheless still free to take it or leave it, in whole or in part; believers in astrology, Marxism and virgin birth abound. But “no one can become a scientist unless he presumes that the scientific doctrine and method are fundamentally sound and that their ultimate premises can be unquestioningly accepted.”101

  Becoming a scientist is necessarily an act of profound commitment to the scientific system and the scientific world view. “Any account of science which does not explicitly describe it as something we believe in is essentially incomplete and a false pretense. It amounts to a claim that science is essentially different from and superior to all human beliefs that are not scientific statements—and this is untrue.” Belief is the oath of allegiance that scientists swear.102

  That was how scientists were chosen and admitted to the order. They constituted a republic of educated believers taught through a chain of masters and apprentices to judge carefully the slippery edges of their work.

  Who then guided that work? The question was really two questions: who decided which problems to study, which experiments to perform? And who judged the value of the results?

  Polanyi proposed an analogy. Imagine, he said, a group of workers faced with the problem of assembling a very large, very complex jigsaw puzzle.103 How could they organize themselves to do the job most efficiently?

  Each worker could take some of the pieces from the pile and try to fit them together. That would be an efficient method if assembling a puzzle was like shelling peas. But it wasn’t. The pieces weren’t isolated. They fitted together into a whole. And the chance of any one worker’s collection of pieces fitting together was small. Even if the group made enough copies of the pieces to give every worker the entire puzzle to attack, no one would accomplish as much alone as the group might if it could contrive a way to work together.

  The best way to do the job, Polanyi argued, was to allow each worker to keep track of what every other worker was doing. “Let them work on putting the puzzle together in the sight of the others, so that every time a piece of it is fitted in by one [worker], all the others will immediately watch out for the next step that becomes possible in consequence.” That way, even though each worker acts on his own initiative, he acts to further the entire group’s achievement.104 The group works independently together; the puzzle is assembled in the most efficient way.

  Polanyi thought science reached into the unknown along a series of what he called “growing points,” each point the place where the most productive discoveries were being made.105 Alerted by their network of scientific publications and professional friendships—by the complete openness of their communication, an absolute and vital freedom of speech—scientists rushed to work at just those points where their particular talents would bring them the maximum emotional and intellectual return on their investment of effort and thought.

  It was clear, then, who among scientists judged the value of scientific results: every member of the group, as in a Quaker meeting. “The authority of scientific opinion remains essentially mutual; it is established between scientists, not above them.” There were leading scientists, scientists who worked with unusual fertility at the growing points of their fields; but science had no ultimate leaders.106 Consensus ruled.

  Not that every scientist was competent to judge every contribution. The network solved that problem too. Suppose Scientist M announces a new result. He knows his highly specialized subject better than anyone in the world; who is competent to judge him? But next to Scientist M are Scientists L and N. Their subjects overlap M’s, so they understand his work well enough to assess its quality and reliability and to understand where it fits into science. Next to L and N are other scientists, K and O and J and P, who know L and N well enough to decide whether to trust their judgment about M. On out to Scientists A and Z, whose subjects are almost completely removed from M’s.

  “This network is the seat of scientific opinion,” Polanyi emphasized; “of an opinion which is not held by any single human brain, but which, split into thousands of different fragments, is held by a multitude of individuals, each of whom endorses the other’s opinion at second hand, by relying on the consensual chains which link him to all the others through a sequence of overlapping neighborhoods.”107 Science, Polanyi was hinting, worked like a giant brain of individual intelligences linked together. That was the source of its cumulative and seemingly inexorable power. But the price of that power, as both Polanyi and Feynman are careful to emphasize, is voluntary limitation. Science succeeds in the difficult task of sustaining a political network among men and women of differing backgrounds and differing values, and in the even more difficult task of discovering the rules of the chess game of the gods, by severely limiting its range of competence. “Physics,” as Eugene Wigner once reminded a group of his fellows, “does not even try to give us complete information about the events around us—it gives information about the correlations between those events.”108

  Which still left the question of what standards scientists consulted when they passed judgment on the contributions of their peers. Good science, original work, always went beyond the body of received opinion, always represented a dissent from orthodoxy. How, then, could the orthodox fairly assess it?

  Polanyi suspected that science’s system of masters and apprentices protected it from rigidity. The apprentice learned high standards of judgment from his master. At the same time he learned to trust his own judgment: he learned the possibility and the necessity of dissent. Books and lectures might teach rules; masters taught controlled rebellion, if only by the example of their own original—and in that sense rebellious—work.

  Apprentices learned three broad criteria of scientific judgment.109 The first criterion was plausibility. That would eliminate crackpots and frauds. It might also (and sometimes did) eliminate ideas so original that the orthodox could not recognize them, but to work at all, science had to take that risk. The second criterion was scientific value, a composite consisting of equal parts accuracy, importance to the entire system of whatever branch of science the idea belonged to, and intrinsic interest. The third criterion was originality. Patent examiners assess an invention for originality according to the degree of surprise the invention produces in specialists familiar with the art. Scientists judged new theories and new discoveries similarly. Plausibility and scientific value measured an idea’s quality by the standards of orthodoxy; originality measured the quality of its dissent.

  Polanyi’s model of an open republic of science where each scientist judges the work of his peers against mutually agreed upon and mutually supported standards explains why the atom found such precarious lodging in nineteenth-century physics. It was plausible; it had considerable scientific value, especially in systematic importance; but no one had yet made any surprising discoveries about it. None, at least, sufficient to convince the network of only about one thousand men and women throughout the world in 1895 who called themselves physicists and the larger, associated network of chemists.110

  The atom’s time was at hand. The great surprises in basic science in the nineteenth century came in chemistry. The great surprises in basic science in the first half of the twentieth century would come in physics.

  * * *

  In 1895, when young Ernest Rutherford roared up out of the Antipodes to study physics at the Cavendish with a view to making his name, the New Zealand he left behind was still a rough frontier. British nonconformist craftsmen and farmers and a few adventurous gentry had settled the rugged volcanic archipelago in the 1840s, pushing aside the Polynesian Maori who had found it first five centuries before; the Maori gave up serious resistance after decades of bloody skirmish only in 1871, the year Rutherford was born. He attended recently established schools, drove the cows home for milking, rode horseback into the bush to shoot wild pigeons from the berry-laden branches of virgin miro trees, helped at his father’s flax mill at Brightwater where wild flax cut from aboriginal swamps was retted, scutched and hackled for linen thread and tow. He lost two younger brothers to drowning; the family searched the Pacific shore near the farm for months.

  It was a hard and healthy childhood. Rutherford capped it by winning scholarships, first to modest Nelson College in nearby Nelson, South Island, then to the University of New Zealand, where he earned an M.A. with double firsts in mathematics and physical science at twenty-two. He was sturdy, enthusiastic and smart, qualities he would need to carry him from rural New Zealand to the leadership of British science. Another, more subtle quality, a braiding of country-boy acuity with a profound frontier innocence, was crucial to his unmatched lifetime record of physical discovery. As his protégé James Chadwick said, Rutherford’s ultimate distinction was “his genius to be astonished.” He preserved that quality against every assault of success and despite a well-hidden but sometimes sickening insecurity, the stiff scar of his colonial birth.111, 112

  His genius found its first occasion at the University of New Zealand, where Rutherford in 1893 stayed on to earn a B.Sc. Heinrich Hertz’s 1887 discovery of “electric waves”—radio, we call the phenomenon now—had impressed Rutherford wonderfully, as it did young people everywhere in the world. To study the waves he set up a Hertzian oscillator—electrically charged metal knobs spaced to make sparks jump between metal plates—in a dank basement cloakroom. He was looking for a problem for his first independent work of research.

  He located it in a general agreement among scientists, pointedly including Hertz himself, that high-frequency alternating current, the sort of current a Hertzian oscillator produced when the spark radiation surged rapidly back and forth between the metal plates, would not magnetize iron. Rutherford suspected otherwise and ingeniously proved he was right. The work earned him an 1851 Exhibition scholarship to Cambridge. He was spading up potatoes in the family garden when the cable came. His mother called the news down the row; he laughed and jettisoned his spade, shouting triumph for son and mother both: “That’s the last potato I’ll dig!” (Thirty-six years later, when he was created Baron Rutherford of Nelson, he sent his mother a cable in her turn: “Now Lord Rutherford, more your honour than mine.”113, 114)

  “Magnetization of iron by high-frequency discharges” was skilled observation and brave dissent.115 With deeper originality, Rutherford noticed a subtle converse reaction while magnetizing iron needles with high-frequency current: needles already saturated with magnetism became partly demagnetized when a high-frequency current passed by. His genius to be astonished was at work. He quickly realized that he could use radio waves, picked up by a suitable antenna and fed into a coil of wire, to induce a high-frequency current into a packet of magnetized needles. Then the needles would be partly demagnetized and if he set a compass beside them it would swing to show the change.

  By the time he arrived on borrowed funds at Cambridge in September 1895 to take up work at the Cavendish under its renowned director, J. J. Thomson, Rutherford had elaborated his observation into a device for detecting radio waves at a distance—in effect, the first crude radio receiver. Guglielmo Marconi was still laboring to perfect his version of a receiver at his father’s estate in Italy; for a few months the young New Zealander held the world record in detecting radio transmissions at a distance.116

  Rutherford’s experiments delighted the distinguished British scientists who learned of them from J. J. Thomson. They quickly adopted Rutherford, even seating him one evening at the Fellows’ high table at King’s in the place of honor next to the provost, which made him feel, he said, “like an ass in a lion’s skin” and which shaded certain snobs on the Cavendish staff green with envy.117 Thomson generously arranged for a nervous but exultant Rutherford to read his third scientific paper, “A magnetic detector of electrical waves and some of its applications,” at the June 18, 1896, meeting of the Royal Society of London, the foremost scientific organization in the world.118 Marconi only caught up with him in September.119

  Rutherford was poor. He was engaged to Mary Newton, the daughter of his University of New Zealand landlady, but the couple had postponed marriage until his fortunes improved. Working to improve them, he wrote his fiancée in the midst of his midwinter research: “The reason I am so keen on the subject [of radio detection] is because of its practical importance. . . . If my next week’s experiments come out as well as I anticipate, I see a chance of making cash rapidly in the future.”120

  There is mystery here, mystery that carries forward all the way to “moonshine.” Rutherford was known in later years as a hard man with a research budget, unwilling to accept grants from industry or private donors, unwilling even to ask, convinced that string and sealing wax would carry the day. He was actively hostile to the commercialization of scientific research, telling his Russian protégé Peter Kapitza, for example, when Kapitza was offered consulting work in industry, “You cannot serve God and Mammon at the same time.”121 The mystery bears on what C. P. Snow, who knew him, calls the “one curious exception” to Rutherford’s “infallible” intuition, adding that “no scientist has made fewer mistakes.” The exception was Rutherford’s refusal to admit the possibility of usable energy from the atom, the very refusal that irritated Leo Szilard in 1933.122 “I believe that he was fearful that his beloved nuclear domain was about to be invaded by infidels who wished to blow it to pieces by exploiting it commercially,” another protégé, Mark Oliphant, speculates.123 Yet Rutherford himself was eager to exploit radio commercially in January 1896. Whence the dramatic and lifelong change?
