
Asimov's New Guide to Science


by Isaac Asimov


  By this same principle, if one star were directly behind another, the light of the farther star would bend about the nearer in such a way that the farther star would appear larger than it really is. The nearer star would act as a gravitational lens. Unfortunately, the apparent size of stars is so minute that an eclipse of a distant star by a much closer one (as seen from Earth) is extremely rare. The discovery of quasars, however, gave astronomers another chance. In the early 1980s, they noted double quasars in which the two members have precisely the same properties. It is a reasonable supposition that we are seeing only one quasar, its light distorted by a galaxy (or possibly a black hole) that lies in the line of sight but is invisible to us. The image of the quasar is distorted and made to appear double. (An imperfection in a mirror might have the same effect on our own reflected image.)

  TESTING THE GENERAL THEORY

  The early victories of Einstein’s General Theory were all astronomic in nature. Scientists longed to discover a way to check it in the laboratory under conditions they could vary at will. The key to such a laboratory demonstration arose in 1958, when the German physicist Rudolf Ludwig Mössbauer showed that, under certain conditions, a crystal can be made to produce a beam of gamma rays of sharply defined wavelength. Ordinarily, the atom emitting the gamma ray recoils, and this recoil broadens the band of wavelengths produced. Under the right conditions, however, the crystal acts as a single unit: the recoil is distributed among all its atoms and sinks to virtually nothing, so that the gamma ray emitted has an exceedingly sharp wavelength. Such a sharp-wavelength beam can be absorbed with extraordinary efficiency by a crystal similar to the one that produced it. If the gamma rays differ even slightly in wavelength from those the crystal would naturally produce, they are not absorbed. This is called the Mössbauer effect.

  If such a beam of gamma rays is emitted downward so as to fall with gravity, the General Theory of Relativity requires it to gain energy, so that its wavelength becomes shorter. In falling just a few hundred feet, the gamma rays should gain enough energy that the decrease in wavelength, though very minute, becomes large enough that the absorbing crystal no longer absorbs the beam.
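
  To give a sense of the size of the effect (using round modern figures, not numbers from the text): for a photon falling through a height h near the earth’s surface, the General Theory predicts a fractional gain in frequency of

\[
\frac{\Delta\nu}{\nu} \;=\; \frac{gh}{c^{2}} \;\approx\; \frac{(9.8\ \mathrm{m/s^{2}})\,(30\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m/s})^{2}} \;\approx\; 3\times 10^{-15},
\]

only a few parts in a thousand million million for a drop of about a hundred feet, yet a beam defined by the Mössbauer effect is sharp enough to reveal it.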

  Furthermore, if the crystal emitting the gamma ray is moved upward while the emission is proceeding, the wavelength of the gamma ray is increased through the Doppler-Fizeau effect. The velocity at which the crystal is moved upward can be adjusted so as to just neutralize the effect of gravitation on the falling gamma ray, which will then be absorbed by the crystal on which it impinges.
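
  The compensating speed is correspondingly tiny (again an illustrative estimate, not a figure from the text): the Doppler-Fizeau shift v/c must just cancel the gravitational shift gh/c², so

\[
v \;=\; \frac{gh}{c} \;\approx\; \frac{(9.8\ \mathrm{m/s^{2}})\,(30\ \mathrm{m})}{3\times 10^{8}\ \mathrm{m/s}} \;\approx\; 1\times 10^{-6}\ \mathrm{m/s},
\]

about a thousandth of a millimeter per second.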

  Experiments conducted in 1960 and later made use of the Mössbauer effect to confirm the General Theory with great exactness. They were the most impressive demonstration of its validity that has yet been seen; as a result, Mössbauer was awarded the 1961 Nobel Prize for physics.

  Other delicate measurements also tend to support General Relativity: the passage of radar beams past a planet, the behavior of binary pulsars as they revolve about a mutual center of gravity, and so on. All the measurements are borderline, and physicists have made numerous attempts to suggest alternative theories. Of all the suggested theories, however, Einstein’s is the simplest from the mathematical standpoint. Whenever measurements are made that can possibly distinguish between the theories (and the differences are always minute), it is Einstein’s that seems to be supported. After nearly three-quarters of a century, the General Theory of Relativity stands unshaken, although scientists continue (quite properly) to question it. (Mind you, it is the General Theory that is questioned. The Special Theory of Relativity has been verified over and over and over again in so many different ways that no physicist questions it.)

  Heat

  So far in this chapter I have been neglecting a phenomenon that usually accompanies light in our everyday experience. Almost all luminous objects, from a star to a candle, give off heat as well as light.

  MEASURING TEMPERATURE

  Heat was not studied, other than qualitatively, before modern times. It was enough for a person to say, “It is hot,” or “It is cold,” or “This is warmer than that.” To subject temperature to quantitative measure, it was first necessary to find some measurable change that seemed to take place uniformly with change in temperature. One such change was found in the fact that substances expand when warmed and contract when cooled.

  Galileo was the first to try to make use of this fact to detect changes in temperature. In 1603, he inverted a glass tube of heated air into a bowl of water. As the air in the tube cooled to room temperature, it contracted and drew water up the tube, and there Galileo had his thermometer (from Greek words meaning “heat measure”). When the temperature of the room changed, the water level in the tube changed. If the room warmed, the air in the tube expanded and pushed the water level down; if it grew cooler, the air contracted and the water level moved up. The only trouble was that the basin of water into which the tube had been inserted was open to the air, and the air pressure kept changing. That also shoved the water level up and down, independently of temperature, confusing the results. The thermometer was nonetheless the first important scientific instrument to be made of glass.

  By 1654, the Grand Duke of Tuscany, Ferdinand II, had evolved a thermometer that was independent of air pressure. It contained a liquid sealed into a bulb to which a straight tube was attached. The contraction and expansion of the liquid itself was used as the indication of temperature change. Liquids change their volume with temperature much less than gases do; but with a sizable reservoir of liquid filling the bulb, so that the liquid could expand only up a very narrow tube, the rise and fall within that tube could be made considerable even for tiny changes in volume.

  The English physicist Robert Boyle did much the same thing about the same time, and he was the first to show that the human body had a constant temperature, markedly higher than the usual room temperature. Others demonstrated that certain physical phenomena always take place at some fixed temperature. Before the end of the seventeenth century, such was found to be the case for the melting of ice and the boiling of water.

  The first liquids used in thermometry were water and alcohol. Since water froze too soon and alcohol boiled away too easily, the French physicist Guillaume Amontons resorted to mercury. In his device, as in Galileo’s, the expansion and contraction of air caused the mercury level to rise or fall.

  Then, in 1714, the German physicist Gabriel Daniel Fahrenheit combined the advances of the Grand Duke and of Amontons by enclosing mercury in a tube and using its own expansion and contraction with temperature as the indicator. Furthermore, Fahrenheit put a graded scale on the tube to allow the temperature to be read quantitatively.

  There is some argument about exactly how Fahrenheit arrived at the particular scale he used. He set zero, according to one account, at the lowest temperature he could get in his laboratory, attained by mixing salt and melting ice. He then set the freezing point of pure water at 32 and its boiling point at 212. This had two advantages. First, the range of temperature over which water was liquid came to 180, which seemed a natural number to use in connection with degrees, as there are 180 degrees in a semicircle. Second, body temperature came near a round 100 degrees; normally it is 98.6° Fahrenheit, to be exact.

  So constant is body temperature normally that, if it is more than a degree or so above the average, the body is said to run a fever, and one has a clear feeling of illness. In 1858, the German physician Karl August Wunderlich introduced the procedure of frequent checks on body temperature as an indication of the course of disease. In the next decade, the British physician Thomas Clifford Allbutt invented the clinical thermometer, which has a constriction in the narrow tube containing the mercury. The mercury thread rises to a maximum when placed in the mouth, but does not fall when the thermometer is removed. The mercury thread simply divides at the constriction, leaving a constant reading in the portion above. In the United States, the Fahrenheit scale is still used. We are familiar with it in everyday affairs such as weather reporting and clinical thermometers.

  In 1742, however, the Swedish astronomer Anders Celsius adopted a different scale. In its final form, this set the freezing point of water at 0 and its boiling point at 100. Because of the hundredfold division of the temperature range in which water is liquid, this is called the centigrade scale, from Latin words meaning “hundred steps” (see figure 6.4). Most people still speak of measurements on this scale as degrees centigrade; but scientists, at an international conference in 1948, renamed the scale after the inventor, following the Fahrenheit precedent. Officially, then, one should speak of the Celsius scale and of degrees Celsius. The symbol C still holds. It was Celsius’s scale that won out in the civilized world, and even the United States is attempting to accustom its people to its use. Scientists, in particular, found the Celsius scale convenient.
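
  The two scales are related by a simple conversion (a standard formula, added here for convenience): the 180 Fahrenheit degrees between freezing and boiling correspond to 100 Celsius degrees, so

\[
F \;=\; \tfrac{9}{5}\,C + 32, \qquad C \;=\; \tfrac{5}{9}\,(F - 32).
\]

Normal body temperature, 37° C, thus works out to (9/5)(37) + 32 = 98.6° F, the familiar Fahrenheit figure.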

  TWO THEORIES OF HEAT

  Temperature measures the intensity of heat but not its quantity. Heat will always flow from a place of higher temperature to a place of lower temperature until the temperatures are equal, just as water will flow from a higher level to a lower one until the levels are equal. Heat behaves so regardless of the relative amounts of heat contained in the bodies involved. Although a bathtub of lukewarm water contains far more heat than a burning match, when the match is placed near the water, heat goes from the match to the water, not vice versa.

  Joseph Black, who had done important work on gases (see chapter 5), was the first to make clear the distinction between temperature and heat. In 1760, he announced that various substances were raised in temperature by different amounts when a given amount of heat was poured into them. To raise the temperature of 1 gram of iron by 1 degree Celsius takes three times as much heat as to warm 1 gram of lead by 1 degree. And beryllium requires three times as much heat as iron.
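
  Black’s observation is summed up in the modern notion of specific heat (the formula and the rounded values below are modern conveniences, not part of the text): the heat Q needed to raise a mass m of a substance through a temperature change ΔT is

\[
Q \;=\; m\,c\,\Delta T,
\]

where c, the specific heat, is a property of the substance; by modern measurement it is about 0.45 joule per gram per degree Celsius for iron and about 0.13 for lead.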

  Furthermore, Black showed it was possible to pour heat into a substance without raising its temperature at all. When melting ice is heated, melting is hastened, but the ice does not rise in temperature. Heat will eventually melt all the ice; but during the process, the temperature of the ice itself never goes above 0° C. Likewise, with boiling water at 100° C: as heat is poured into the water, more and more of it boils away as vapor, but the temperature of the liquid does not change while it is boiling.
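
  This hidden heat came to be called latent heat, and it, too, can be put in a simple formula (modern rounded values, added for illustration): melting or boiling a mass m of a substance requires

\[
Q \;=\; m\,L,
\]

where L, the latent heat of the change of state, is about 334 joules per gram for melting ice and about 2,260 joules per gram for boiling water, all of it absorbed without any rise in temperature.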

  The development of the steam engine (see chapter 9), which came at about the same time as Black’s experiments, intensified the interest of scientists in heat and temperature. They began to speculate about the nature of heat, as earlier they had speculated about the nature of light.

  In the case of heat, as of light, there were two theories. One held heat to be a material substance which can be poured or shifted from one substance to another. It was named caloric, from the Latin for “heat.” According to this view, when wood was burned the caloric in the wood passed into the flame, and from it into a kettle above the flame, and from it into the water in the kettle. As water filled with caloric, it was converted to steam.

  As for the other theory, in the late eighteenth century, two famous observations gave rise to the theory of heat as a form of vibration. One was published by the American physicist and adventurer Benjamin Thompson, a Tory who fled the country during the Revolution, was given the title Count Rumford, and then proceeded to knock around Europe. While supervising the boring of cannon in Bavaria in 1798, he noticed that quantities of heat were being produced. He found that enough heat was being generated to bring 18 pounds of water to the boiling point in less than 3 hours. Where was all the caloric coming from? Thompson decided that heat must be a vibration set up and intensified by the mechanical friction of the borer against the cannon.

  The next year the chemist Humphry Davy performed an even more significant experiment. Keeping two pieces of ice below the freezing point, he rubbed them together, not by hand but by a mechanical contrivance, so that no caloric could flow into the ice. By friction alone, he melted some of the ice. He, too, concluded that heat must be a vibration and not a material. Actually, this experiment should have been conclusive; but the caloric theory, though obviously wrong, persisted to the middle of the nineteenth century.

  HEAT AS ENERGY

  Nevertheless, although the nature of heat was misunderstood, scientists learned some important things about it, just as the investigators of light turned up interesting facts about the reflection and refraction of light beams before they knew its nature. The French physicists Jean Baptiste Joseph Fourier, in 1822, and Nicolas Léonard Sadi Carnot, in 1824, studied the flow of heat and made important advances. In fact, Carnot is usually considered the founder of the science of thermodynamics (from Greek words meaning “movement of heat”). He placed the working of steam engines on a firm theoretical foundation.

  By the 1840s, physicists were concerned with the manner in which the heat that was put into steam could be converted into the mechanical work of moving a piston. Is there a limit to the amount of work that can be obtained from a given amount of heat? And what about the reverse process: How is work converted to heat?

  Joule spent thirty-five years converting various kinds of work into heat, doing very carefully what Rumford had earlier done clumsily. He measured the amount of heat produced by an electric current. He heated water and mercury by stirring them with paddle wheels, or by forcing water through narrow tubes. He heated air by compressing it, and so on. In every case, he calculated how much mechanical work had been done on the system and how much heat was obtained as a result. He found that a given amount of work, of any kind, always produces a given amount of heat. Joule had, in other words, determined the mechanical equivalent of heat.
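
  In modern units (a restatement for convenience, not Joule’s own figures): one calorie, the heat needed to warm 1 gram of water by 1 degree Celsius, is equivalent to about 4.18 joules of work,

\[
1\ \mathrm{calorie} \;\approx\; 4.18\ \mathrm{joules}.
\]

Letting a 1-kilogram weight drop through about 0.43 meter, for instance, does mgh ≈ (1)(9.8)(0.43) ≈ 4.2 joules of work, just enough, if wholly turned to heat, to warm a gram of water by one degree.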

  Since heat could be converted into work, it must be considered a form of energy (from Greek words meaning “containing work”). Electricity, magnetism, light, and motion can all be used to do work, so they, too, are forms of energy. And work itself, being convertible into heat, is a form of energy.

  These ideas emphasized something that had been more or less suspected since Newton’s time: that energy is conserved and can be neither created nor destroyed. Thus, a moving body has kinetic energy (“energy of motion”), a term introduced by Lord Kelvin in 1856. Since a body moving upward is slowed by gravity, its kinetic energy slowly disappears. However, as the body loses kinetic energy, it gains energy of position, for, by virtue of its location high above the surface of the earth, it can eventually fall and regain kinetic energy. In 1853, the Scottish physicist William John Macquorn Rankine named this energy of position potential energy. It seemed that a body’s kinetic energy plus its potential energy (its mechanical energy) remained nearly the same during the course of its movement; this constancy was called conservation of mechanical energy. However, mechanical energy is not perfectly conserved: some is lost to friction, to air resistance, and so on.
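
  In symbols (a standard formulation, added for illustration), a body of mass m moving at speed v at a height h above the ground has

\[
E_{\text{kinetic}} \;=\; \tfrac{1}{2}\,m v^{2}, \qquad E_{\text{potential}} \;=\; m g h,
\]

and conservation of mechanical energy says that their sum stays (nearly) constant. A ball thrown straight up at 10 meters per second, for instance, climbs until all its kinetic energy has become potential energy, at a height of about v²/2g ≈ 100/19.6 ≈ 5 meters, ignoring air resistance.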

  What Joule’s experiments showed above all was that such conservation could be made exact when heat is taken into account, for, when mechanical energy is lost to friction or air resistance, it appears as heat. Take that heat into account, and one can show, without qualification, that no new energy is created and no old energy destroyed. The first to make this plain was a German physicist, Julius Robert Mayer, in 1842, but his experimental backing was meager, and he lacked strong academic credentials. (Even Joule, who was a brewer by profession and also lacked academic credentials, had difficulty getting his meticulous work published.)

  It was not till 1847 that a sufficiently respectable academic figure put this notion into words. In that year, Hermann von Helmholtz enunciated the law of conservation of energy: whenever a certain amount of energy seems to disappear in one place, an equivalent amount must appear in another. This is also called the first law of thermodynamics. It remains a foundation block of modern physics, undisturbed by either quantum theory or relativity.

  Now, although any form of work can be converted entirely into heat, the reverse is not true. When heat is turned to work, some of it is unusable and is unavoidably wasted. In running a steam engine, the heat of the steam is converted into work only until the temperature of the steam is reduced to the temperature of the environment; after that, although there is much remaining heat in the cold water formed from the steam, no more of it can be converted to work. Even in the temperature range at which work can be extracted, some of the heat does not go into work but is used up in heating the engine and the air around it, in overcoming friction between the piston and the cylinder, and so on.
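
  Carnot’s analysis, mentioned earlier, puts a definite limit on this conversion (a standard result, stated here for clarity rather than quoted from the text): an ideal engine taking heat from a source at absolute temperature T_hot and discarding it to surroundings at T_cold can turn at most the fraction

\[
\eta_{\max} \;=\; 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}
\]

of that heat into work. Steam at 400 kelvin exhausting to surroundings at 300 kelvin, for example, can yield no more than 1 − 300/400, or 25 percent, and real engines do considerably worse.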

  In any energy conversion, such as electric energy into light energy or magnetic energy into energy of motion, some of the energy is wasted. It is not lost, for that would be contrary to the first law; but it is converted to heat that is dissipated in the environment.

  The capacity of any system to perform work is its free energy. The portion of the energy that is unavoidably lost as non-useful heat is reflected in the measurement of entropy, a term introduced in 1865 by the German physicist Rudolf Julius Emanuel Clausius.
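
  Clausius’s measure can be stated simply (given here in modern form as an aid, not quoted from the text): when a quantity of heat Q flows into a body at absolute temperature T, the body’s entropy increases by

\[
\Delta S \;=\; \frac{Q}{T},
\]

so that heat passing from a hot body to a cold one removes less entropy from the hot body than it adds to the cold one, and the total entropy goes up.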

  Clausius pointed out that, in any process involving a flow of energy, there is always some loss, so that the entropy of the universe is continually increasing. This continual increase of entropy is the second law of thermodynamics, sometimes referred to as the “running-down of the universe” or the “heat-death of the universe.” Fortunately, the quantity of usable energy (supplied almost entirely by the stars, which are “running down” at a tremendous rate) is so vast that there is enough for all purposes for many billions of years.

  HEAT AND MOLECULAR MOTION

  A clear understanding of the nature of heat finally came with the understanding of the atomic nature of matter, and developed from the realization that the molecules composing a gas are in continual motion, bouncing off one another and off the walls of their container. The first investigator who attempted to explain the properties of gases from this standpoint was the Swiss mathematician Daniel Bernoulli, in 1738, but he was ahead of his time. In the mid-nineteenth century, Maxwell and Boltzmann (see chapter 5) worked out the mathematics adequately and established the kinetic theory of gases (kinetic comes from a Greek word meaning “motion”). The theory showed heat to be equivalent to the motion of molecules. Thus, the caloric theory of heat received its deathblow. Heat was seen to be a vibrational phenomenon: the movement of molecules in gases and liquids, or the jittery to-and-fro trembling of molecules in solids.
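
  The kinetic theory also makes the equivalence of heat and motion quantitative (a standard result, added for illustration): the average energy of motion of a single molecule in a gas at absolute temperature T is

\[
\bar{E} \;=\; \tfrac{3}{2}\,k T, \qquad k \approx 1.38\times 10^{-23}\ \mathrm{joule\ per\ kelvin},
\]

where k is Boltzmann’s constant; temperature, in other words, is simply a measure of how vigorously the molecules are moving.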

 
