by Lewis Thomas
The social scientists have a long way to go to catch up, but they may be up to the most important scientific business of all, if and when they finally get down to the right questions. Our behavior toward each other is the strangest, most unpredictable, and almost entirely unaccountable of all the phenomena with which we are obliged to live. In all of nature there is nothing so threatening to humanity as humanity itself. We need, for this most worrying of puzzles, the brightest and youngest of our most agile minds, capable of dreaming up ideas not dreamed before, ready to carry the imagination to great depths and, I should hope, handy with big computers but skeptical about long questionnaires and big numbers.
Fundamental science did not become a national endeavor in this country until the time of World War II, when it was pointed out by some influential and sagacious advisers to the government that whatever we needed for the technology of warfare could be achieved only after the laying of a solid foundation of basic research. During the Eisenhower administration a formal mechanism was created in the White House for the explicit purpose of furnishing scientific advice to the President, the President’s Science Advisory Committee (PSAC), chaired by a new administration officer, the Science Adviser. The National Institutes of Health, which had existed before the war as a relatively small set of laboratories for research on cancer and infectious disease, expanded rapidly in the postwar period to encompass all disciplines of biomedical science. The National Science Foundation was organized specifically for the sponsorship of basic science. Each of the federal departments and agencies developed its own research capacity, relevant to its mission; the programs of largest scale were those in defense, agriculture, space, and atomic energy.
Most of the country’s basic research has been carried out by the universities, which have as a result become increasingly dependent on the federal government for their sustenance, even their existence, to a degree now causing alarm signals from the whole academic community. The ever-rising costs of doing modern science, especially the prices of today’s sophisticated and complex instruments, combined with the federal efforts to reduce all expenditures, are placing the universities in deep trouble. Meanwhile, the philanthropic foundations, which were the principal source of funds for university research before the war, are no longer capable of more than a minor contribution to science.
Besides the government’s own national laboratories and the academic institutions there is a third resource for the country’s scientific enterprise—industry. Up to very recently, industrial research has been conducted in relative isolation, unconnected with the other two. There are signs that this is beginning to change, and the change should be a source of encouragement for the future. Some of the corporations responsible for high technology, especially those involved in energy, have formed solid linkages with a few research universities—MIT and Cal Tech, for example—and are investing substantial sums in long-range research in physics and chemistry. Several pharmaceutical companies have been investing in fundamental biomedical research in association with medical schools and private research institutions.
There needs to be much more of this kind of partnership. The nation’s future may well depend on whether we can set up within the private sector a new system for collaborative research. Although there are some promising partnership ventures now in operation, they are few in number; within industry the tendency remains to concentrate on applied research and development, excluding any consideration of basic science. The academic community tends, for its part, to stay out of fields closely related to the development of new products. Each side maintains adversarial and largely bogus images of the other, money-makers on one side and impractical academics on the other. Meanwhile, our competitors in Europe and Japan have long since found effective ways to link industrial research to government and academic science, and they may be outclassing this country before long. In some fields, most conspicuously the devising and production of new scientific instruments, they have already moved to the front.
There are obvious difficulties in the behavior of the traditional worlds of research in the United States. Corporate research is obliged by its nature to concentrate on profitable products and to maintain a high degree of secrecy during the process; academic science, by its nature, must be carried out in the open and depends for its progress on the free exchange of new information almost at the moment of finding. But these are not impossible barriers to collaboration. Industry already has a life-or-death stake in what will emerge from basic research in the years ahead; there can be no more prudent investment for the corporate world, and the immediate benefit for any corporation in simply having the “first look” at a piece of basic science would be benefit enough in the long run. The university science community, for all the talk of ivory towers, hankers day and night for its work to turn out useful; a close working connection with industrial researchers might well lead to an earlier perception of potential applicability than is now the case.
The age of science did not really begin three hundred years ago. That was simply the time when it was realized that human curiosity about the world represented a deep wish, perhaps embedded somewhere in the chromosomes of human beings, to learn more about nature by experiment and the confirmation of experiment. The doing of science on a scale appropriate to the problems at hand was launched only in the twentieth century and has been moving into high gear only within the last fifty years. We have not lacked explanations at any time in our recorded history, but now we must live and think with the new habit of requiring reproducible observations and solid facts for the explanations. It is not as easy a time for us as it used to be: we are raised through childhood in skepticism and disbelief; we feel the need of proofs all around, even for matters as deep as the working of our own consciousness, where there is as yet no clear prospect of proof about anything. Uncertainty, disillusion, and despair are prices to be paid for living in an age of science. Illumination is the product sought, but it comes in small bits, only from time to time, not ever in broad, bright flashes of public comprehension, and there can be no promise that we will ever emerge from the great depths of the mystery of being.
Nevertheless, we have started to do science on a world scale, and to rely on it, and hope for it. Not just the scientists, everyone, and not for the hope of illumination, but for the sure predictable prospect of new technologies, which have always come along, like spray in the wake of science. We need better ways of predicting how a piece of new technology is likely to turn out, better measures available on an international level to shut off the ones that carry hazard to the life of the planet (including, but perhaps not always so much first of all, as is usually the only consideration, our own species’ life). We will have to go more warily with technology in the future, for the demands will be increasing and the stakes will be very high. Instead of coping, or trying to cope, with the wants of four billion people, we will very soon be facing the needs, probably desperate, of double that number and, soon thereafter, double again. The real challenge to human ingenuity, and to science, lies in the century to come.
I cannot guess at the things we will need to know from science to get through the time ahead, but I am willing to make one prediction about the method: we will not be able to call the shots in advance. We cannot say to ourselves, we need this or that sort of technology, therefore we should be doing this or that sort of science. It does not work that way. We will have to rely, as we have in the past, on science in general, and on basic, undifferentiated science at that, for the new insights that will open up the new opportunities for technological development. Science is useful, indispensable sometimes, but whenever it moves forward it does so by producing a surprise; you cannot specify the surprise you’d like. Technology should be watched closely, monitored, criticized, even voted in or out by the electorate, but science itself must be given its head if we want it to work.
ALCHEMY
Alchemy began long ago as an expression of the deepest and oldest of human wishes: to discover that the world makes sense. The working assumption—that everything on earth must be made up from a single, primal sort of matter—led to centuries of hard work aimed at isolating the original stuff and rearranging it to the alchemists’ liking. If it could be found, nothing would lie beyond human grasp. The transmutation of base metals to gold was only a modest part of the prospect. If you knew about the fundamental substance, you could do much more than make simple money: you could boil up a cure-all for every disease affecting humankind, you could rid the world of evil, and, while doing this, you could make a universal solvent capable of dissolving anything you might want to dissolve. These were heady ideas, and generations of alchemists worked all their lives trying to reduce matter to its ultimate origin.
To be an alchemist was to be a serious professional, requiring long periods of apprenticeship and a great deal of late-night study. From the earliest years of the profession, there was a lot to read. The documents can be traced back to Arabic, Latin, and Greek scholars of the ancient world, and beyond them to Indian Vedic texts as far back as the tenth century B.C. All the old papers contain a formidable array of information, mostly expressed in incantations, which were required learning for every young alchemist and, by design, incomprehensible to everyone else. The word “gibberish” is thought by some to refer back to Jabir ibn Hayyan, an eighth-century alchemist, who lived in fear of being executed for black magic and worded his doctrines so obscurely that almost no one knew what he was talking about.
Indeed, black magic was what most people thought the alchemists were up to in their laboratories, filled with the fumes of arsenic, mercury, and sulphur and the bubbling infusions of all sorts of obscure plants. We tend to look back at them from today’s pinnacle of science as figures of fun, eccentric solitary men wearing comical conical hats, engaged in meaningless explorations down one blind alley after another. It was not necessarily so: the work they were doing was hard and frustrating, but it was the start-up of experimental chemistry and physics. The central idea they were obsessed with—that there is a fundamental, elementary particle out of which everything in the universe is made—continues to obsess today’s physicists.
They never succeeded in making gold from base metals, nor did they find a universal elixir in their plant extracts; they certainly didn’t rid the world of evil. What they did accomplish, however, was no small thing: they got the work going. They fiddled around in their laboratories, talked at one another incessantly, set up one crazy experiment after another, wrote endless reams of notes, which were then translated from Arabic to Greek to Latin and back again, and the work got under way. More workers became interested and then involved in the work, and, as has been happening ever since in science, one thing led to another. As time went on and the work progressed, error after error, new and accurate things began to turn up. Hard facts were learned about the behavior of metals and their alloys, the properties of acids, bases, and salts were recognized, the mathematics of thermodynamics were worked out, and, with just a few jumps through the centuries, the helical molecule of DNA was revealed in all its mystery.
The current anxieties over what science may be doing to human society, including the worries about technology, are no new thing. The third-century Roman emperor Diocletian decreed that all manuscripts dealing with alchemy were to be destroyed, on grounds that such enterprises were against nature. The work went on in secrecy, and, although some of the material was lost, a great deal was translated into other languages, passed around, and preserved.
The association of alchemy with black magic has persisted in the public mind throughout the long history of the endeavor, partly because the objective—the transmutation of one sort of substance to another—seemed magical by definition. Partly also because of the hybrid term: al was simply the Arabic article, but chemy came from a word meaning “the black land,” Khemia, the Greek name for Egypt. Another, similar-sounding word, khumeia, meant an infusion or elixir, and this was incorporated as part of the meaning. The Egyptian origin is very old, extending back to Thoth, the god of magic (who later reappeared as Hermes Trismegistus, master of the hermetic seal required by alchemists for the vacuums they believed were needed in their work). The notion of alchemy may be as old as language, and the idea that language and magic are somehow related is also old. “Grammar,” after all, was a word used in the Middle Ages to denote high learning, but it also implied a practicing familiarity with alchemy. Gramarye, an older term for grammar, signified occult learning and necromancy. “Glamour,” of all words, was the Scottish word for grammar, and it meant, precisely, a spell, casting enchantment.
Medicine, from its dark origins in old shamanism millennia ago, became closely linked in the Middle Ages with alchemy. The preoccupation of alchemists with metals and their properties led to experiments—mostly feckless ones, looking back—with the therapeutic use of all sorts of metals. Paracelsus, a prominent physician of the sixteenth century, achieved fame from his enthusiastic use of mercury and arsenic, based on what now seems a wholly mystical commitment to alchemical philosophy as the key to understanding the universe and the human body simultaneously. Under his influence, three centuries of patients with all varieties of illness were treated with strong potions of metals, chiefly mercury, and vigorous purgation became standard medical practice.
Physics and chemistry have grown to scientific maturity, medicine is on its way to growing up, and it is hard to find traces anywhere of the earlier fumblings toward a genuine scientific method. Alchemy exists only as a museum piece, an intellectual fossil, so antique that we no longer need be embarrassed by the memory, but the memory is there. Science began by fumbling. It works because the people involved in it work, and work together. They become excited and exasperated, they exchange their bits of information at a full shout, and, the most wonderful thing of all, they keep at one another.
Something rather like this may be going on now, without our realizing it, in the latest and grandest of all fields of science. People in my field, and some of my colleagues in the real “hard” sciences such as physics and chemistry, have a tendency to take lightly and often disparagingly the efforts of workers in the so-called social sciences. We like to refer to their data as soft. We do not acknowledge as we should the differences between the various disciplines within behavioral research—we speak of analytical psychiatry, sociology, linguistics, economics, and computer intelligence as though these inquiries were all of a piece, with all parties wearing the same old comical conical hats. It is of course not so. The principal feature that the social sciences share these days is the attraction they exert on considerable numbers of students, who see the prospect of exploring human behavior as irresistible and hope fervently that a powerful scientific method for doing the exploring can be worked out. All of the matters on the social-science agenda seem more urgent to these young people than they did at any other time in human memory. It may turn out, years hence, that a solid discipline of human science will have come into existence, hard as quantum physics, filled with deep insights, plagued as physics still is by ambiguities but with new rules and new ways of getting things done. Like, for instance, getting rid of thermonuclear weapons, patriotic rhetoric, and nationalism all at once. If anything like this does turn up we will be looking back at today’s social scientists, and their close colleagues the humanists, as having launched the new science in a way not all that different from the accomplishment of the old alchemists, by simply working on the problem—this time, the fundamental, primal universality of the human mind.
CLEVER ANIMALS
Scientists who work on animal behavior are occupationally obliged to live chancier lives than most of their colleagues, always at risk of being fooled by the animals they are studying or, worse, fooling themselves. Whether their experiments involve domesticated laboratory animals or wild creatures in the field, there is no end to the surprises that an animal can think up in the presence of an investigator. Sometimes it seems as if animals are genetically programmed to puzzle human beings, especially psychologists.
The risks are especially high when the scientist is engaged in training the animal to do something or other and must bank his professional reputation on the integrity of his experimental subject. The most famous case in point is that of Clever Hans, the turn-of-the-century German horse now immortalized in the lexicon of behavioral science by the technical term the “Clever Hans Error.” The horse, owned and trained by Herr von Osten, could not only solve complex arithmetical problems but even read the instructions on a blackboard and tap out infallibly, with one hoof, the right answer. What is more, he could perform the same computations when total strangers posed questions to him, with his trainer nowhere nearby. For several years Clever Hans was studied intensively by groups of puzzled scientists and taken seriously as a horse with something very like a human brain, quite possibly even better than human. But finally, in 1907, it was discovered by the psychologist Oskar Pfungst that Hans was not really doing arithmetic at all; he was simply observing the behavior of the human experimenter. Subtle, unconscious gestures—nods of the head, the holding of breath, the cessation of nodding when the correct count was reached—were accurately read by the horse as cues to stop tapping.
Whenever I read about that phenomenon, usually recounted as the exposure of a sort of unconscious fraud on the part of either the experimenter or the horse or both, I wish Clever Hans would be given more credit than he generally gets. To be sure, the horse couldn’t really do arithmetic, but the record shows that he was considerably better at observing human beings and interpreting their behavior than humans are at comprehending horses or, for that matter, other humans.