
Dancing With Myself


by Charles Sheffield


  These examples may seem rather trivial, since we know from our own experience that the air in one end of the room doesn’t suddenly vanish, and we don’t feel continuous popping in our ears from fluctuating air pressure. However, the same technique, expressed in suitable mathematical form, is much more than it may seem. In the hands of the Scotsman, James Clerk Maxwell, the Austrian, Ludwig Boltzmann, and the American, J. Willard Gibbs, this statistical approach had by the end of the nineteenth century become a powerful tool that allowed global properties of continuous systems (such as temperature) to be understood from the statistical properties governing the movement of their individual pieces—the atoms and molecules.

  The science that governs the motion of individual particles is mechanics; the science that describes the global pressure and temperatures of continuous systems is thermodynamics. Statistical mechanics—the statistical analysis of very large assemblies of particles, each governed by the laws of mechanics—provides the bridge between individual particle behavior and whole system behavior. The central mathematical technique is one of counting, enumerating the number of possible arrangements of very large numbers of particles.
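
  To get a feel for the kind of counting involved, here is a small illustrative calculation of my own (a sketch only, in Python; it assumes a toy room whose N molecules are each equally likely to be found in either half). It asks how likely it is that one half of the room ends up holding 45 percent or fewer of the molecules:

    # Toy model: each of N molecules lands in the left or right half of the
    # room with equal probability. Count the arrangements in which the left
    # half holds 45% or fewer of them. (The largest N takes a few seconds.)
    from fractions import Fraction
    from math import comb

    def prob_lopsided(n_molecules, fraction=0.45):
        """Exact binomial probability that the left half holds <= fraction of N."""
        cutoff = int(fraction * n_molecules)
        favourable = sum(comb(n_molecules, k) for k in range(cutoff + 1))
        return float(Fraction(favourable, 2 ** n_molecules))

    for n in (100, 1_000, 10_000):
        print(f"N = {n:>6}: P(left half holds 45% or less) = {prob_lopsided(n):.3e}")

  Even for a mere ten thousand molecules that probability is already around 10⁻²³; for the 10²⁷ or so molecules in a real room it is, for every practical purpose, zero. That is why the air never does vanish from one end of the room.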

  What do we lose in adopting such an approach? In the words of Maxwell, “I wish to point out that, in adopting this statistical method of considering the average number of groups of molecules selected according to their velocities, we have abandoned the strict kinetic method of tracing the exact circumstances of each individual molecule in all its encounters. It is therefore possible that we may arrive at results which, though they fairly represent the facts as long as we are supposed to deal with a gas in mass, would cease to be applicable if our faculties and instruments were so sharpened that we could detect and lay hold of each molecule and trace it, through all its course.”

  Atoms and molecules are tiny objects, only billionths of an inch in diameter and visible only with the aid of the most powerful electron microscopes. Can the effects of their encounters as individual particles ever be seen, as Maxwell suggests? They can, under the right circumstances, with the aid of no more than a low-power microscope. In 1828, a Scottish botanist, Robert Brown, was observing tiny grains of pollen suspended in water. He noticed that instead of remaining in one place, or slowly rising or sinking, the pollen grains were in constant, jerky, and unpredictable motion. They were moving to the buffeting of water molecules. Pollen grains are at the size threshold where the chance imbalance in the number of molecules striking their different sides is big enough to produce a visible effect. A detailed analysis of “Brownian motion,” as it is now called, was the subject of one of three ground-breaking papers published in 1905 by Albert Einstein. (The other two set forth the theory of relativity and explained the photoelectric effect.)
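
  As a rough illustration of the mechanism (my own sketch, not Brown’s observation or Einstein’s analysis; the number of hits and the size of each kick are arbitrary), one can mimic a grain that is struck at every instant by slightly unequal numbers of molecules from its two sides:

    # Toy Brownian jiggle: at each instant the grain receives about a thousand
    # hits from each side; the chance imbalance between the two sides gives it
    # a small random kick, and the kicks accumulate into an erratic wander.
    import random

    def jiggle(steps=1_000, hits_per_side=1_000, kick=1e-3):
        position, path = 0.0, []
        for _ in range(steps):
            hits_from_left = sum(random.random() < 0.5 for _ in range(2 * hits_per_side))
            imbalance = hits_from_left - hits_per_side
            position += kick * imbalance
            path.append(position)
        return path

    print(jiggle()[-5:])   # the final positions wander irregularly about zero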

  6. COUNTING AND BIOLOGY

  “Man is judge of all things, a feeble earthworm, the depository of truth, a sink of uncertainty and error; the glory and shame of the universe.”

  —Blaise Pascal

  Simple counting can be the basis of a field of science, as is the case with statistical mechanics; or it can provide the logical destruction of one, as is the case with homeopathic medicine.

  In 1795, Samuel Hahnemann proposed the doctrine of “like cures like” as the basis for a new practice of medicine, basing his “Theory of Similia” upon earlier ideas of Paracelsus. The central notion is that if a strong dose of a particular drug produces in a healthy person symptoms like those of a certain disease, then a minute dose of the same drug will cure the disease. To achieve so minute a dose, the procedure is as follows:

  The original drug forms the “mother tincture.” A fixed quantity of this, say one gram, is added to a kilogram of pure water and thoroughly mixed. From this “first dilution,” one gram is taken and added to a kilogram of pure water to form the “second dilution.” The procedure is repeated, each time taking one gram and mixing it with one kilogram of water, to form the third dilution, fourth dilution, and so on. This is often done as many as ten or twenty times, on the argument that the greater the dilution, the more potent the healing effect of the final mixture.

  The logic behind the process is obscure, but never mind that. Let’s look at the molecular counts, something that was not possible in the first days of homeopathy.

  One gram of a drug will contain no more than 6 × 10²³ molecules. If that gram is diluted with one kilogram of water, the number of molecules of the drug in one gram of the result is reduced by a factor of one thousand, so one gram of the first dilution will contain at most 6 × 10²⁰ drug molecules. The second dilution will be reduced to 6 × 10¹⁷ molecules, the third dilution to 6 × 10¹⁴, and so on. The tenth dilution has on average 6 × 10⁻⁷ molecules of drug—in other words, there is less than one chance in a million that the tenth dilution contains a single molecule of the drug we started with! The tenth or twentieth dilution is pure water. There is not a trace of the original drug in it.
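
  The arithmetic is simple enough to check in a few lines (a sketch of my own, in Python; the starting count of 6 × 10²³ molecules per gram is the upper bound used above):

    # Serial homeopathic dilution: carrying one gram of mixture into a
    # kilogram of pure water divides the number of drug molecules in each
    # gram by a factor of 1,000 at every step.
    starting_molecules = 6e23   # at most this many molecules in one gram of drug
    dilution_factor = 1_000     # one gram carried into one kilogram of water

    molecules = starting_molecules
    for step in range(1, 21):
        molecules /= dilution_factor
        print(f"dilution {step:2d}: about {molecules:.0e} drug molecules per gram")

    # By the eighth dilution the expected count is below one molecule; by the
    # tenth it is about 6e-7, i.e. less than one chance in a million that any
    # molecule of the original drug remains.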

  Even if the like-cures-like idea were correct, how can pure water with no drug cure the disease? The simple answer is, it can’t.

  Presented with the counting argument for number of molecules, the practitioners of homeopathic medicine mutter vaguely about residual influences and healing fields. But no one can explain what they are, or how they fit with any other scientific ideas.

  So homeopathic medicine can’t work.

  Or can it?

  At this point, skeptical as I am, I have to point out another example of large-number counting drawn from the biological sciences, one that shows how careful we must be when we say that something is impossible.

  On the evening of May 6, 1875, the French naturalist Jean Henri Fabre performed an interesting experiment. He took a young female of Europe’s largest moth, the Great Peacock, and placed her in a wire cage. Then he watched in amazement as male moths of the species—and only of that species—began to appear from all over, some of them flying considerable distances to get to the female.

  They could sense her presence. But how?

  After more experiments, sight and sound were ruled out. Smell was all that remained. The male moths could smell some substance emitted by the female, and they could detect it in unbelievably small quantities. (The female silk moth, Bombyx mori, secretes a substance called bombykol with scent glands on her abdomen; the male silk moth will recognize and respond to a single molecule of bombykol. Moths hold the record in the Guinness Book of Records as the organism with the most acute sense of smell.)

  The moth attractant is one of a class of substances known as pheromones, a coined word meaning “hormone-bearing.”

  Pheromones are chemicals secreted and given off by animals, to convey information to or elicit particular responses from other members of the same species. One class of pheromones conveys sexual information, telling the males that a female of the species is ready, willing, and able to mate.

  Before we get too excited by this, let me mention that pheromones are employed as a sexual lure mainly by insects. The same thing does occur in humans, but our noses have become so civilized and insensitive that we have trouble picking up the signal. We have to resort to other methods, more uniquely human. (“Man is the ‘How’d you like to come back to my place?’-saying animal.”)

  Female moths, ready to mate, attract their male counterparts over incredible distances—several miles, if the male is downwind.

  Now let’s do some counting. A female moth emits a microgram or so of pheromones, possibly as much as ten micrograms under unusual circumstances. That’s a lot of molecules, about 10¹⁵ of them (airborne pheromone molecules are large and heavy, 100 to 300 times as massive as a hydrogen atom, and you don’t get all that many to a gram), but by the time that those molecules have diffused through the atmosphere and dispersed themselves over a large airspace, they will be spread very thin.

  So thin that the chance of a male moth, three miles away, receiving even a single molecule from the source female, seems to be almost vanishingly small. For example, suppose that the aerial plume of pheromones stays within 20 feet of the ground, and spreads in three miles to a width of 1,000 feet—which is a tight, narrow-angle plume. Then we are looking at pheromone concentrations of about one part in a hundred billion. Even the incredibly sensitive odor detection apparatus of the moth needs at least one molecule to work with, and chances are high that it will not get it.

  The natural conclusion might be that, Guinness Book of Records notwithstanding, the story of a single female moth attracting a male miles away must be no more than a story. And yet the experiments have been done many times. The males, unaware of the statistics, appear from the distance to cluster around the fertile females.

  Now, in England a few months ago I was offered a very intriguing explanation of how this might be possible. If other female moths who receive pheromones from a fertile female themselves produce more pheromones, then these intermediate moths can play a crucial role by serving as amplifiers for the pheromonal message. Each moth that receives a molecule or two of the female pheromone emits more of the same substance than it received. Like tiny repeater stations for an electronic signal, the moths pass the word on, increasing the intensity of the message in the process. The distant male moth receives the pheromonal signal, and heads upwind toward the fertile female.

  This is such an attractive idea that it would be hard to forget it once you have heard it mentioned. I was sure I had never encountered it before. On the other hand my ignorance of biology is close to total, so I wanted to check references. And that is where the trouble started.

  I began with the obvious sources: reference texts. There are a number on pheromones, and all of them stress the incredible sensitivity of moths to these chemical messengers. However, not one of them mentioned the idea of pheromonal amplifiers. (One of the most interesting books on the subject is Sexual Strategy, by Tim Halliday. On its cover is a bright red frog on top of a black frog. I found the contents, all about the tricks used by animals in locating their mates, fascinating. It was only when I saw the expression on the desk clerk’s face as I checked the book out of a university library that I realized the title and front illustration might cause questions.)

  With no help from books, or from a search of the General Science Index, I cast my net wider. I called Jack Cohen in England, who had been present when the pheromonal amplifier idea was mentioned. He was not sure quite where or when he had encountered the idea, but he offered two key words: Lymantria, the genus of moth used in the experiments; and Rothschild, the name of the person who had done the work.

  At that point, it seemed a snap. I had names. I expected to have full references in no time. All I needed was a good entomologist, and I set out to track one down.

  Craig Philips is a naturalist who lives not far from me. He is worth an article in his own right. He keeps tropical cockroaches and tarantulas in his apartment, and apparently enjoys their company.

  (“I’ve only been bitten once by a tarantula,” he said, “and that was my own fault. I was wearing shorts in my apartment, and the tarantula was sitting on my bare leg. Suddenly the mynah bird”—not previously mentioned at all in our conversation—“swooped down to have a go at the tarantula. I covered it with my hand to protect it. And naturally it bit me.”

  A perfectly ordinary day in the life of a dedicated naturalist. “It didn’t hurt,” he added, “the way that a bee sting would. But after a while a kind of ulcer developed that took weeks and weeks to heal.”)

  Well, we had a very enjoyable conversation, but he had not heard of pheromonal amplifiers, either. Nor had another friend of mine, an amateur entomologist in Oklahoma City (my phone bills were mounting), but he had a vague recollection of hearing something odd like this about a different moth. Cecropia, he thought.

  I checked that one in the reference texts, too. Cecropia, certainly. Pheromone amplifiers, no. No writer had heard of it.

  Nor had any of the many entomologists that I spoke to over the next few days. Moths may not pass on pheromonal messages, but entomologists sure pass on interesting questions. I heard from Sheila Mutchler, who ran the Insect Zoo at the National Museum of Natural History; Gates Clark, an entomologist at the same organization; Mark Jacobson, who works on moths and pheromones at the U.S. Department of Agriculture; and Jerome Klun, who works in the Agricultural Research Service and who told me more intriguing things about moth mating habits than I had dreamed existed.

  All very helpful, all knowledgeable, all fascinating to talk to. But no pheromonal amplifiers. Everyone agreed what an interesting concept it was—the sort of thing that ought to be true, in an interesting world. But no one could give me a single reference, or recall anything that had ever been written about the idea.

  So where do I stand now?

  I think the pheromonal amplifier idea not true. Other, statistical arguments can explain the male moth’s abilities in long-range female detection. But I’ve become a shade more reluctant to use the word “impossible” when something seems to be ruled out on counting arguments alone.

  7. THE FUTURE OF COUNTING

  “Man is the measure of all things.”

  —Protagoras

  Counting has been important to humans since the beginning of recorded history, but I can make a case for the idea that the real Age of Counting began only in 1946. That’s when the world’s first electronic computer, the ENIAC, went into operation.

  Computers count wonderfully well. Although no human has ever counted to a billion, today’s fastest computers can do it in less than one second. And that is going to have profound effects on the way that science is performed, and on everything else in our future.

  To take one minor example, consider the values in Table 1. I did not copy them from some standard reference work, but computed the necessary ratios of binomial coefficients from scratch, on a lap-top computer that is small and slow by today’s standards. The number of arithmetic operations involved was no more than a few million. The program took less than half an hour to write, and maybe the same to run—I went off to have lunch, so I didn’t bother to time it.

  Compare that with the situation only forty years ago. I would have been forced to use quite different computational methods, or spend months on calculating that single table. For small values of N, say, less than 30, I would have computed the coefficients directly on a mechanical or an electric calculating machine. That would have been several hours of work, with a non-negligible chance of error. If the results were important, I would have had a second person repeat the whole computation as an independent check. For larger values of N, direct calculation would have taken far too long. Instead, I would have made use of an approximation known as Stirling’s formula, calling for the use of logarithm tables and involving me in several more hours of tedious calculations. Again, there would be a strong possibility of human error.
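
  Since Table 1 itself is not reproduced here, the little sketch below (mine, in Python, with illustrative values of N and k) only shows the two routes being contrasted: direct computation of a binomial coefficient, against Stirling’s approximation to the factorials, which is what the hand computation would have relied on for large N:

    # Exact binomial coefficients by direct counting, versus Stirling's
    # approximation ln n! ~ n ln n - n + (1/2) ln(2*pi*n).
    from math import comb, exp, log, pi

    def stirling_ln_factorial(n):
        return n * log(n) - n + 0.5 * log(2 * pi * n)

    def binomial_via_stirling(n, k):
        ln_c = (stirling_ln_factorial(n)
                - stirling_ln_factorial(k)
                - stirling_ln_factorial(n - k))
        return exp(ln_c)

    for n, k in [(30, 15), (100, 50), (1000, 500)]:
        print(f"C({n},{k}): exact {comb(n, k):.6e}, Stirling {binomial_via_stirling(n, k):.6e}")

  Even for N as small as 30 the approximation agrees with the exact count to within about one percent, which is why it was the standard tool in the days of logarithm tables.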

  This example illustrates the general principle: what could have been done a hundred years ago only by ingenious analysis and approximation will be done in the future more and more by direct calculation—i.e., by counting.

  The trend can be seen again and again, in dozens of different problem areas. The motion of the planets used to be calculated by a clever set of analytic approximations known as general perturbation theory. Many of history’s greatest scientists, from Kepler to Newton to Laplace, spent years toiling over hand-calculations. Today, planetary motion (and spacecraft motion) is computed by direct numerical integration of the differential equations of motion. Instead of being treated as a continuous variable, time is chopped up into many small intervals, and the motion of the body is calculated by a computer from one time interval to the next.

  This method of “finite differences” and numerical integration is used in everything from weather prediction to aircraft design to stellar evolution. The time and space variables of the continuous problem are chopped up into sufficiently large numbers of finite pieces, and the calculation consists of coupling neighboring times and places so as to calculate global behavior.
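
  In the same spirit, here is a bare-bones sketch of my own (Python, with units chosen so that the gravitational constant times the central mass is one) of what chopping time into small intervals looks like for orbital motion, using a simple semi-implicit Euler step:

    # Step a body around a central mass by cutting time into small intervals.
    # Starting on a circular orbit of radius 1 with unit speed, the body
    # should stay close to radius 1 if the steps are small enough.
    def integrate_orbit(steps=10_000, dt=0.001):
        x, y = 1.0, 0.0        # position
        vx, vy = 0.0, 1.0      # velocity for a circular orbit
        for _ in range(steps):
            r3 = (x * x + y * y) ** 1.5
            ax, ay = -x / r3, -y / r3             # inverse-square attraction toward the origin
            vx, vy = vx + ax * dt, vy + ay * dt   # advance velocity over the interval
            x, y = x + vx * dt, y + vy * dt       # then advance position
        return x, y

    x, y = integrate_orbit()
    print(f"after 10,000 steps: r = {(x * x + y * y) ** 0.5:.4f}")   # remains close to 1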

  Statisticians have rather different needs. Instead of using finite difference methods to solve their problems, they often rely on repeated trials of a statistical process. A random element is introduced in each trial to mimic the variations seen in nature. Many thousands of trials are usually needed before a valid statistical inference can be drawn. Often the number of trials is not known in advance, since it is the behavior of the computed solution itself that tells you whether you can safely stop, or must keep on going to more trials. These appropriately named “Monte Carlo” methods are quite impractical without a fast computer.
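
  As a miniature example (my own, in Python; the quantity being estimated is π rather than anything of statistical importance), the run below keeps adding batches of random trials until the running estimate itself reports that its standard error is small enough to stop:

    # Monte Carlo estimate of pi: throw random points into the unit square and
    # count how many land inside the quarter circle. Keep running batches of
    # trials until the estimated standard error of the result is small enough.
    import random
    from math import sqrt

    def monte_carlo_pi(target_error=0.001, batch=10_000):
        hits = trials = 0
        while True:
            for _ in range(batch):
                x, y = random.random(), random.random()
                if x * x + y * y <= 1.0:
                    hits += 1
            trials += batch
            p = hits / trials                            # fraction inside the quarter circle
            std_error = 4 * sqrt(p * (1 - p) / trials)   # rough error of the pi estimate
            if std_error < target_error:
                return 4 * p, trials

    estimate, trials_used = monte_carlo_pi()
    print(f"pi is roughly {estimate:.4f} after {trials_used:,} random trials")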

  That’s the situation in 46 A.C.—and the Age of Computers has just begun. Whole new sciences are emerging, relying on a symbiosis of computer experiment and human analysis: nonlinear stability theory, irreversible thermodynamics, chaos theory, fractals. Even pure mathematics is being changed. The proof a few years ago of the four-color theorem (“Any plane map can be colored, with neighboring regions having different colors, using no more than four colors”) was done using a computer to enumerate the thousands of possible cases.

  The diversity of available computing equipment is increasing, as well as its speed. I’m writing these words on a small portable computer with limited memory and only one programming language (BASIC). On the other hand, it weighs less than four pounds and I can easily carry it with me for use on planes and trains. For heavy-duty work I wouldn’t dream of using it. I have access to a Connection Machine, a large parallel-logic computer with 16,384 separate processors, able to perform about three and a half billion floating-point multiplications a second (that’s 3.5 gigaflops, a word I like very much). That’s where the real number-crunching takes place. Thirty years from now, I expect to have available a machine the size and weight of the portable, but with the computing power of the Connection Machine.

 
