
Asimov's New Guide to Science


by Isaac Asimov


  It is now realized that there are trace-element deserts, just as there are waterless deserts; the two usually go together but not always. In Australia, soil scientists have found that 1 ounce of molybdenum in the form of some appropriate compound spread over 16 acres of molybdenum-deficient land results in a considerable increase in fertility. Nor is this a problem of exotic lands only. A survey of American farmland in 1960 showed areas of boron deficiency in forty-one states, of zinc deficiency in twenty-nine states, and of molybdenum deficiency in twenty-one states. The dosage of trace elements is crucial. Too much is as bad as too little, for some substances that are essential for life in small quantities (such as copper) become poisonous in larger quantities.

  This, of course, carries to its logical extreme the much older custom of using fertilizers for soil. Until modern times, fertilization was through the use of animal excreta, manure or guano, which restored nitrogen and phosphorus to the soil. While this worked, it was accompanied by foul odors and by the ever-present possibility of infection. The substitution of chemical fertilizers, clean and odor-free, came about through the work of Justus von Liebig in the early nineteenth century.

  COBALT

  One of the most dramatic episodes in the discovery of mineral deficiencies has to do with cobalt. It involves the once incurably fatal disease called pernicious anemia.

  In the early 1920s, the University of Rochester pathologist George Hoyt Whipple was experimenting on the replenishment of hemoglobin by means of various food substances. He would bleed dogs to induce anemia and then feed them various diets to see which would permit the dogs to replace the lost hemoglobin most rapidly. He did this not because he was interested in pernicious anemia, or in any kind of anemia, but because he was investigating bile pigments, compounds produced by the body from hemoglobin. Whipple discovered that liver was the food that enabled the dogs to make hemoglobin most quickly.

  In 1926, two Boston physicians, George Richards Minot and William Parry Murphy, considered Whipple’s results and decided to try liver as a treatment for pernicious-anemia patients. The treatment worked. The incurable disease was cured, so long as the patients ate liver as an important portion of their diet. Whipple, Minot, and Murphy shared the Nobel Prize in physiology or medicine in 1934.

  Unfortunately liver, although it is a great delicacy when properly cooked, then chopped, and lovingly mixed with such things as eggs, onions, and chicken fat, becomes wearing as a steady diet. (After a while, a patient might be tempted to think pernicious anemia was preferable.) Biochemists began to search for the curative substance in liver; and by 1930, Edwin Joseph Cohn and his co-workers at the Harvard Medical School had prepared a concentrate a hundred times as potent as liver itself. To isolate the active factor, however, further purification was needed. Fortunately, chemists at the Merck Laboratories discovered in the 1940s that the concentrate from liver could accelerate the growth of certain bacteria. This provided an easy test of the potency of any preparation from it, so the biochemists could proceed to break down the concentrate into fractions and test them in quick succession. Because the bacteria reacted to the liver substance in much the same way that they reacted to, say, thiamine or riboflavin, the investigators now suspected strongly that the factor they were hunting for was a B vitamin. They called it vitamin B12.

  By 1948, using bacterial response and chromatography, Ernest Lester Smith in England and Karl August Folkers at Merck succeeded in isolating pure samples of vitamin B12. The vitamin proved to be a red substance, and both scientists thought it resembled the color of certain cobalt compounds. It was known by this time that a deficiency of cobalt caused severe anemia in cattle and sheep. Both Smith and Folkers burned samples of vitamin B12, analyzed the ash, and found that it did indeed contain cobalt. The compound has now been named cyanocobalamine. So far it is the only cobalt-containing compound that has been found in living tissue.

  By breaking it up and examining the fragments, chemists quickly decided that vitamin B12 was an extremely complicated compound, and they worked out an empirical formula of C63H88O14N14PCo. Then a British chemist, Dorothy Crowfoot Hodgkin, determined its over-all structure by means of X rays. The diffraction pattern given by crystals of the compound allowed her to build up a picture of the electron densities along the molecule—that is, those regions where the probability of finding an electron is high and those where it is low. If lines are drawn through regions of equal probability, a kind of skeletal picture is built up of the shape of the molecule as a whole.

  This is not as easy as it sounds. Complicated organic molecules can produce an X-ray scattering truly formidable in its complexity. The mathematical operations required to translate that scattering into electron densities are tedious in the extreme. By 1944, electronic computers had been called in to help work out the structural formula of penicillin. Vitamin B12 was much more complicated, and Hodgkin had to use a more advanced computer—the National Bureau of Standards Western Automatic Computer (SWAC)—and do some heavy spadework. It eventually earned for her, however, the 1964 Nobel Prize for chemistry.
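  For readers who want a concrete picture of those "mathematical operations," the heart of the method can be written in a single line. The relation below is the general Fourier synthesis of X-ray crystallography, given here in modern notation as an illustrative sketch; the symbols are the conventional ones of crystallography rather than anything tied to Hodgkin's particular calculation.

% rho(x, y, z): electron density at fractional coordinates (x, y, z) in a unit cell of volume V
% F(hkl): structure factor obtained from the reflection with indices (h, k, l)
\[
  \rho(x, y, z) \;=\; \frac{1}{V} \sum_{h} \sum_{k} \sum_{l} F(hkl)\, e^{-2\pi i\,(hx + ky + lz)}
\]

Each measured reflection contributes one term to the sum, and the density must be evaluated point by point throughout the cell. The catch is that experiment yields only the magnitudes of the F(hkl); recovering their phases is what made a molecule the size of vitamin B12 such heavy spadework even with SWAC to do the arithmetic.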

  The molecule of vitamin B12, or cyanocobalamine, turned out to be a lopsided porphyrin ring, with one of the carbon bridges connecting two of the smaller pyrrole rings missing, and with complicated side chains on the pyrrole rings. It resembles the somewhat simpler heme molecule, with this key difference: where heme has an iron atom at the center of the porphyrin ring, cyanocobalamine has a cobalt atom.

  Cyanocobalamine is active in very small quantities when injected into the blood of pernicious-anemia patients. The body can get along on only 1/1,000 as much of this substance as it needs of the other B vitamins. Any diet, therefore, ought to have enough cyanocobalamine for our needs. Even if it did not, the bacteria in the intestines manufacture quite a bit of it. Why, then, should anyone ever have pernicious anemia?

  Apparently, the sufferers from this disease are simply unable to absorb enough of the vitamin into the body through the intestinal walls. Their feces are actually rich in the vitamin (for want of which they are dying). From feedings of liver, providing a particularly abundant supply, such a patient manages to absorb enough cyanocobalamine to stay alive. But he needs 100 times as much of the vitamin if he takes it by mouth as he does when it is injected directly into the blood.

  Something must be wrong with the patient’s intestinal apparatus, preventing the passage of the vitamin through the walls of the intestines. It has been known since 1929, thanks to the researches of the American physician William Bosworth Castle, that the answer lies somehow in the gastric juice. Castle called the necessary component of gastric juice intrinsic factor. And in 1954, investigators found a product from the stomach linings of animals that assists the absorption of the vitamin and proved to be Castle’s intrinsic factor. Apparently this substance is missing in pernicious-anemia patients. When a small amount of it is mixed with cyanocobalamine, the patient has no difficulty in absorbing the vitamin through the intestines. The intrinsic factor has proved to be a glycoprotein (a sugar-containing protein) that binds a molecule of cyanocobalamine and carries it into the intestinal cells.

  IODINE

  Getting back to the trace elements… The first one discovered was not a metal; it was iodine, an element with properties like those of chlorine. This story begins with the thyroid gland.

  In 1896, a German biochemist, Eugen Baumann, discovered that the thyroid was distinguished by containing iodine, an element practically absent from all other tissues. In 1905, a physician named David Marine, who had just set up practice in Cleveland, was amazed to find how widely prevalent goiter was in that area. Goiter is a conspicuous disease, sometimes producing grotesque enlargement of the thyroid and causing its victims to become either dull and listless or nervous, overactive, and pop-eyed. For the development of surgical techniques in the treatment of abnormal thyroids, with resulting relief from goitrous conditions, the Swiss physician Emil Theodor Kocher earned the 1909 Nobel Prize in physiology or medicine.

  But Marine wondered whether the enlarged thyroid might not be the result of a deficiency of iodine, the one element in which the thyroid specializes, and whether goiter might not be treated more safely and expeditiously by chemicals rather than by the knife. Iodine deficiency and the prevalence of goiter in the Cleveland area might well go hand in hand, at that, for Cleveland, being inland, might lack the iodine that was plentiful in the soil near the ocean and in the seafood that is an important article of diet there.

  The doctor experimented on animals and, after ten years, felt sure enough of his ground to try feeding iodine-containing compounds to goiter patients. He was probably not too surprised to find that it worked. Marine then suggested that iodine-containing compounds be added to table salt and to the water supply of inland cities where the soil was poor in iodine. There was strong opposition to his proposal, however; and it took another ten years to get water iodination and iodized salt generally accepted. Once the iodine supplements became routine, simple goiter declined in importance as a human woe.

  FLUORIDES

  A half-century later American researchers (and the public) were engaged in studies and discussion of a similar health question—the fluoridation of water to prevent tooth decay. This issue was a matter of bitter controversy in the nonscientific and political arena—with the opposition far more stubborn than in the case of iodine. Perhaps one reason is that cavities in the teeth do not seem nearly as serious as the disfigurement of goiter.

  In the early decades of this century dentists noticed that people in certain areas in the United States (for example, some localities in Arkansas) tended to have darkened teeth—a mottling of the enamel. Eventually the cause was traced to a higher-than-average content of fluorine compounds (fluorides) in the natural drinking water of those areas. With the attention of researchers directed to fluoride in the water, another interesting discovery turned up. Where the fluoride content of the water was above average, the population had an unusually low rate of tooth decay. For instance, the town of Galesburg in Illinois, with fluoride in its water, had only one-third as many cavities per youngster as the nearby town of Quincy, whose water contained practically no fluoride.

  Tooth decay is no laughing matter, as anyone with a toothache will readily agree. It costs the people of the United States more than a billion and a half dollars a year in dental bills; and by the age of thirty-five, two thirds of all Americans have lost at least some of their teeth. Dental researchers succeeded in getting support for large-scale studies to find out whether fluoridation of water would be safe and would really help to prevent tooth decay. They found that one part per million of fluoride in the drinking water, at an estimated cost of 5 to 10 cents per person per year, did not mottle teeth and yet showed an effect in decay prevention. They therefore adopted one part per million as a standard for testing the results of fluoridation of community water supplies.

  The effect is, primarily, on those whose teeth are being formed—that is, on children. The presence of fluoride in the drinking water ensures the incorporation of tiny quantities of fluoride into the tooth structure; it is this, apparently, that makes the tooth mineral unpalatable to bacteria. (The use of small quantities of fluoride in pill form or in toothpaste has also shown some protective effect against tooth decay.)

  The dental profession is now convinced, on the basis of a quarter of a century of research, that for a few pennies per person per year, tooth decay can be reduced by about two thirds, with a saving of at least a billion dollars a year in dental costs and a relief of pain and of dental handicaps that cannot be measured in money.

  Two chief arguments have been employed by the opponents of fluoridation with the greatest effect. One is that fluorine compounds are poisonous. So they are, but not in the doses used for fluoridation! The other is that fluoridation is compulsory medication, infringing the individual’s freedom. That may be so, but it is questionable whether the individual in any society should have the freedom to expose others to preventable sickness. If compulsory medication is evil, then we have a quarrel not only with fluoridation but also with chlorination, iodination, and, for that matter, with all the forms of inoculation, including vaccination against smallpox, that are compulsory in most civilized countries today.

  Hormones

  Enzymes, vitamins, trace elements—how potently these sparse substances decide life-or-death issues for the organism! But there is a fourth group of substances that, in a way, are even more potent. They conduct the whole performance; they are like a master switch that awakens a city to activity, or the throttle that controls an engine, or the red cape that excites the bull.

  At the turn of the century, two English physiologists, William Maddock Bayliss and Ernest Henry Starling, became intrigued by a striking little performance in the digestive tract. The gland behind the stomach known as the pancreas releases its digestive fluid into the upper intestines at just the moment when food leaves the stomach and enters the intestine. How does the pancreas get the message? What tells it that the right moment has arrived?

  The obvious guess was that the information must be transmitted via the nervous system, which was then the only known means of communication in the body. Presumably, the entry of food into the intestines from the stomach stimulated nerve endings that relayed the message to the pancreas by way of the brain or the spinal cord.

  To test this theory, Bayliss and Starling cut every nerve to the pancreas. Their maneuver failed! The pancreas still secreted juice at precisely the right moment.

  The puzzled experimenters went hunting for an alternate signaling system. In 1902, they tracked down a chemical messenger. It was a substance secreted by the walls of the intestine. When they injected this into an animal’s blood, it stimulated the secretion of pancreatic juice even though the animal was not eating. Bayliss and Starling concluded that, in the normal course of events, food entering the intestines stimulates their linings to secrete the substance, which then travels via the bloodstream to the pancreas and triggers the gland to start giving forth pancreatic juice. The two investigators named the substance secreted by the intestines secretin, and they called it a hormone, from a Greek word meaning “rouse to activity.” Secretin is now known to be a small protein molecule.

  Several years earlier, physiologists had discovered that an extract of the adrenals (two small organs just above the kidneys) could raise blood pressure if injected into the body. The Japanese chemist Jokichi Takamine, working in the United States, isolated the responsible substance in 1901 and named it adrenalin. (This later became a trade name; the chemists’ name for it now is epinephrine.) Its structure proved to resemble that of the amino acid tyrosine, from which it is derived in the body.

  Plainly, adrenalin, too, is a hormone. As the years went on, the physiologists found that a number of other glands in the body secrete hormones. (The word gland comes from the Greek word for “acorn” and was originally applied to any small lump of tissue in the body, but it became customary to give the name to any tissue that secretes a fluid, even large organs such as the liver and the mammaries. Small organs that do not secrete fluids gradually lost this name, so that the lymph glands, for instance, were renamed the lymph nodes. Even so, when lymph nodes in the throat or the armpit become enlarged during infections, physicians and mothers alike still refer to them as “enlarged glands.”)

  Many of the glands—such as those along the alimentary canal, the sweat glands, and the salivary glands—discharge their fluids through ducts. Some, however, are ductless; they release substances directly into the bloodstream, which then circulates the secretions through the body. It is the secretions of these ductless, or endocrine, glands that contain hormones (see figure 15.1). The study of hormones is for this reason termed endocrinology.

  Figure 15.1. The endocrine glands.

  Naturally, biologists are most interested in hormones that control functions of the mammalian body and, in particular, the human one. However, I should like at least to mention the fact that there are plant hormones that control and accelerate plant growth, insect hormones that control pigmentation and molting, and so on.

  When biochemists found that iodine was concentrated in the thyroid gland, they made the reasonable guess that the element was part of a hormone. In 1915, Edward Calvin Kendall of the Mayo Foundation in Minnesota isolated from the thyroid an iodine-containing amino acid which behaved like a hormone, and named it thyroxine. Each molecule of thyroxine contained four atoms of iodine. Like adrenalin, thyroxine has a strong family resemblance to tyrosine and is manufactured from it in the body. (Many years later, in 1952, the biochemist Rosalind Pitt-Rivers and her associates isolated another thyroid hormone—triiodothyronine, so named because its molecule contains three atoms of iodine rather than four. It is less stable than thyroxine but three to five times as active.)

  The thyroid hormones control the overall rate of metabolism in the body: they arouse all the cells to activity. People with an underactive thyroid are sluggish, torpid, and after a time may become mentally retarded, because the various cells are running in low gear. Conversely, people with an overactive thyroid are nervous and jittery, because their cells are racing. Either an underactive or an overactive thyroid can produce goiter.

  The thyroid controls the body’s basal metabolism: that is, its rate of consumption of oxygen at complete rest in comfortable environmental conditions—the “idling rate,” so to speak. If a person’s basal metabolism is above or below the norm, suspicion falls upon the thyroid gland. Measurement of the basal metabolism was at one time a tedious affair, for the subject had to fast for a period in advance and lie still for half an hour while the rate was measured, to say nothing of an even longer period of rest beforehand. Instead of going through this troublesome procedure, why not go straight to the horse’s mouth: that is, measure the amount of rate-controlling hormone that the thyroid is producing? In recent years researchers have developed a method of measuring the amount of protein-bound iodine (PBI) in the bloodstream; this indicates the rate of thyroid-hormone production and so has provided a simple, quick blood test to replace the basal-metabolism determination.

 
