So far as science is concerned, the universe at the scale of the Planck length is true terra incognita, not to be found on any map. I know of no one who has explored its story potential. You, as storyteller, are free to roam as you choose.
2.8 Strange physics: superconductivity. I was not sure where this ought to be in the book. It is a phenomenon which depends on quantum level effects, but its results show up in the macroscopic world of everyday events. The one thing that I was sure of is that this is too fascinating a subject to leave out, something that came as an absolute and total surprise to scientists when it was discovered, and remained a theoretical mystery for forty years thereafter. If superconductivity is not a fertile subject for writers, nothing is.
Superconductivity was first observed in materials at extremely low temperatures, so that is the logical place to begin.
The deliberate production of low temperatures is, in historical terms, relatively new. Ten thousand years ago, people already knew how to make things hot. It was easy. You put a fire underneath them. But as recently as two hundred years ago, it was difficult to make things cold. There was no "cold generator" that corresponded to fire as a heat generator. Low temperatures were something that came naturally; they were not man-made.
The Greeks and Romans knew that there were ways of lowering the temperature of materials, although they did not use that word, by such things as the mixture of salt and ice. But they had no way of seeking progressively lower temperatures. That had to wait for the early part of the nineteenth century, when Humphry Davy and others found that you could liquefy many gases merely by compressing them. The resulting liquid will be warm, because turning gas to liquid gives off the gas's so-called "latent heat of liquefaction." If you now allow this liquid to reach a thermal balance with its surroundings, and then reduce the pressure on it, the liquid boils; and in so doing, it drains heat from its surroundings—including itself. The same result can be obtained if you take a liquid at atmospheric pressure, and put it into a partial vacuum. Some of the liquid boils, and what's left is colder. This technique, of "boiling under reduced pressure," was a practical and systematic way of pursuing lower temperatures. It first seems to have been used by a Scotsman, William Cullen, who cooled ethyl ether this way in 1748, but it took another three-quarters of a century before the method was applied to science (and to commerce; the first refrigerator was patented by Jacob Perkins in 1834).
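For readers who like to see the arithmetic, here is a minimal sketch of the energy balance behind evaporative cooling. The latent heat and specific heat figures are illustrative values for water, assumed for the example; they are not numbers from the experiments described above, and the vessel is taken to be perfectly insulated:

```python
# Rough energy balance for "boiling under reduced pressure."
# Illustrative, assumed values for water; perfectly insulated vessel.
L = 2.26e6   # latent heat of vaporization, J/kg (approximate)
c = 4186.0   # specific heat of the liquid, J/(kg*K) (approximate)
mass = 1.0   # kg of liquid at the start

def temperature_drop(evaporated_fraction, latent=L, heat_cap=c):
    """Temperature drop of the remaining liquid when a small fraction
    of it boils off, drawing its latent heat from the rest."""
    m_gone = evaporated_fraction * mass
    m_left = mass - m_gone
    return (m_gone * latent) / (m_left * heat_cap)

# Boiling off just 2% of the liquid cools the remainder by about 11 K.
print(round(temperature_drop(0.02), 1))
```

The point of the sketch is only that a small amount of evaporation extracts a disproportionately large amount of heat, which is why the method works so well.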
Another way to cool was found by James Prescott Joule and William Thomson (later Lord Kelvin) in 1852. Named the Joule-Thomson effect, or the Joule-Kelvin effect, it relies on the fact that a gas escaping from a valve into a chamber of lower pressure will, under the right conditions, suffer a reduction in temperature. If the gas entering the valve is first passed in a tube through that lower-temperature region, we have a cycle that will move the chamber to lower and lower temperatures.
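The cycle can be caricatured in a few lines of arithmetic. The Joule-Thomson coefficient and the pressure drop below are assumed, illustrative values (roughly appropriate for nitrogen near room temperature), and the model ignores heat leaks entirely:

```python
# Idealized sketch of a Joule-Thomson (Linde) cooling cycle.
# MU_JT and PRESSURE_DROP are assumed, illustrative values.
MU_JT = 0.25          # temperature drop in K per atmosphere (assumed)
PRESSURE_DROP = 50.0  # atmospheres across the expansion valve

def linde_cycle(t_start, passes, mu=MU_JT, dp=PRESSURE_DROP):
    """Each pass, gas pre-cooled to the chamber temperature expands
    through the valve and cools a further mu*dp degrees, so the
    chamber ratchets downward (no heat leaks, constant mu)."""
    t = t_start
    for _ in range(passes):
        t -= mu * dp  # temperature drop on expansion
    return t

print(linde_cycle(300.0, 4))  # 300 K falls to 250 K after four idealized passes
```

In reality the coefficient itself changes with temperature, and below a gas-dependent "inversion temperature" the effect can even heat rather than cool; the sketch captures only the ratcheting idea.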
Through the nineteenth century the Joule-Thomson effect and boiling under reduced pressure permitted the exploration of lower and lower temperatures. The natural question was, how low could you go?
A few centuries ago, that question had no apparent answer. There seemed to be no limit to how cold something could get, just as today there is no practical limit to how hot something can become.
The problem of reaching low temperatures was clarified when scientists finally realized, after huge intellectual efforts, that heat is nothing more than motion at the atomic and molecular scale. "Absolute zero" could then be identified as no motion, the temperature of an object when you "took out all the heat." (Purists will object to this statement since even at absolute zero, quantum theory tells us that an object still has a zero point energy; the thermodynamic definition of absolute zero is done in terms of reversible isothermal processes.)
Absolute zero, it turns out, is reached at a temperature of -273.15 degrees Celsius. Temperatures measured with respect to this value are all positive, and are said to be in Kelvins (written K). One Kelvin is the same size as one degree Celsius, but it is measured with respect to a reference point of absolute zero, rather than to the Celsius zero value of the freezing point of water. We will use the two scales interchangeably, whichever is the more convenient at the time.
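The conversion between the two scales is a simple shift, which can be written down once and for all:

```python
def celsius_to_kelvin(t_c):
    """Shift by 273.15: one Kelvin is the same size as one degree
    Celsius; only the zero point of the scale differs."""
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

print(celsius_to_kelvin(-268.9))  # helium's boiling point: about 4.25 K
print(kelvin_to_celsius(0.0))     # absolute zero: -273.15 degrees Celsius
```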
Is it obvious that this absolute zero temperature must be the same for all materials? Suppose that you had two materials which reached their zero heat state at different temperatures. Put them in contact with each other. Then thermodynamics requires that heat should flow from the higher temperature body to the other one, until they both reach the same temperature. Since there is by assumption no heat in either material (each is at its own absolute zero), no heat can flow; and when no heat flows between two bodies in contact, they must be at the same temperature. Thus absolute zero is the same temperature for every material.
Even before an absolute zero point of temperature was identified, people were trying to get down as low in temperature as they could, and also to liquefy gases. Sulfur dioxide (boiling point -10°C) was the first to go, when Monge and Clouet liquefied it in 1799 by cooling in a mixture of ice and salt. De Morveau produced liquid ammonia (boiling point -33°C) in 1799 using the same method, and in 1805 Northmore claimed to have produced liquid chlorine (boiling point -35°C) by simple compression.
In 1834, Thilorier produced carbon dioxide snow (dry ice, melting point -78.5°C) for the first time using gas expansion. Soon after that, Michael Faraday, who had earlier (1823) liquefied chlorine, employed a carbon dioxide and ether mixture to reach the record low temperature of -110°C (163 K). He was able to liquefy many gases, but not hydrogen, oxygen, or nitrogen.
In 1877, Louis Cailletet used gas compression to several hundred atmospheres, followed by expansion through a jet, to produce liquid mists of methane (boiling point -164°C), carbon monoxide (boiling point -192°C), and oxygen (boiling point -183°C). He did not, however, manage to collect a volume of liquid from any of these substances.
Liquid oxygen was finally produced in quantity in 1883, by Wroblewski and Olszewski, who reached the lowest temperature to date (-136°C). Two years later they were able to go as low as -152°C, and liquefied both nitrogen and carbon monoxide. In that same year, Olszewski reached a temperature of -225°C (48 K), which remained a record for many years. He was able to produce a small amount of liquid hydrogen for the first time. In 1886, James Dewar invented the Dewar flask (which we think of today as the thermos bottle), which allowed cold, liquefied materials to be stored for substantial periods of time at atmospheric pressure. In 1898, Dewar liquefied hydrogen in quantity and reached a temperature of 20 K. At that point, all known gases had been liquefied.
I have gone a little heavy on the history here, to make the point that most scientific progress is not the huge intellectual leap favored in bad movies. It is more often careful experiments and the slow accretion of facts, until finally one theory can be produced which encompasses all that is known. If a story is to be plausible and involves a major scientific development, then some (invented) history that preceded the development adds a feeling of reality.
However, we have one missing fact in the story so far. What about helium, which has not been mentioned?
In the 1890s, helium was still a near-unknown quantity. The gas had been observed in the spectrum of the Sun by Janssen and Lockyer, in 1868, but it had not been found on earth until the early 1890s. Its properties were not known. It is only with hindsight that we can find good reasons why the gas, when available, proved unusually hard to liquefy.
The periodic table had already been formulated by Dmitri Mendeleyev, in about 1870. Forty years later, Henry Moseley showed that the table could be written in terms of an element's atomic number, which corresponded to the number of protons in the nucleus of that element.
As other gases were liquefied, a pattern emerged. TABLE 2.1 (p. 57) shows the temperatures at which a number of gases change from the gaseous to the liquid state, under normal atmospheric pressure, together with their atomic numbers and molecular weights.
What happens when we plot the boiling point of an element against its atomic number in the periodic table? For gases, there are clearly two different groups. Radon, xenon, krypton, argon, and neon remain gases to much lower temperatures than other materials of similar atomic number. This is even more noticeable if we add a number of other common gases, such as ammonia, acetylene, carbon dioxide, methane, and sulfur dioxide, and look at the variation of their boiling points with their molecular weights. They all boil at much higher temperatures.
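A rough version of that comparison can be made from the boiling points quoted in this chapter. The values for nitrogen and for the other inert gases (neon, argon, krypton, xenon) are standard figures that I have added for the comparison; they are not spelled out in the text above:

```python
# Boiling points at one atmosphere, in degrees Celsius. Most values
# are quoted in the chapter; nitrogen and the inert gases other than
# helium are standard figures added here for illustration.
boiling_points = {
    "sulfur dioxide": -10.0, "ammonia": -33.0, "chlorine": -35.0,
    "methane": -164.0, "carbon monoxide": -192.0, "oxygen": -183.0,
    "nitrogen": -195.8, "hydrogen": -252.9,
    "helium": -268.9, "neon": -246.1, "argon": -185.9,
    "krypton": -153.2, "xenon": -108.1,
}
INERT = {"helium", "neon", "argon", "krypton", "xenon", "radon"}

# Sort from coldest to warmest; the inert gases cluster low for
# their weights, with helium at the very bottom of the list.
for name, bp in sorted(boiling_points.items(), key=lambda kv: kv[1]):
    tag = "inert" if name in INERT else ""
    print(f"{name:16s} {bp:8.1f} C  {tag}")
```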
Now, radon, xenon, krypton, and the others of the low-boiling-point group are all inert gases, often known as noble gases, that do not readily participate in any chemical reactions. TABLE 2.1 (p. 57) also shows that the inert gases of lower atomic number and molecular weight liquefy at lower temperatures. Helium, the second lightest element, is the final member of the inert gas group, and the one with the lowest atomic number. Helium should therefore have an unusually low boiling point.
It does. All through the late 1890s and early 1900s, attempts to liquefy it failed.
When the Dutch scientist Kamerlingh Onnes finally succeeded, in 1908, the reason for other people's failure became clear. Helium remains liquid down to -268.9°C—16 degrees lower than liquid hydrogen, and only 4.2 degrees above absolute zero. As for solid helium, not even Onnes' most strenuous efforts could produce it. When he boiled helium under reduced pressure, the liquid helium went to a new form—but it was a new and strange liquid phase, now known as Helium II, that exists only below 2.2 K. It turns out that the solid phase of helium does not exist at atmospheric pressure, or at any pressure less than 25 atmospheres. It was first produced in 1926, by W.H. Keesom.
The liquefaction of helium looked like the end of the story; it was in fact the beginning.
2.9 Super properties. Having produced liquid helium, Kamerlingh Onnes set about determining its properties. History does not record what he expected to find, but it is fair to guess that he was amazed.
Science might be defined as predicting something you don't know from what you do know, and then measuring to see whether the prediction holds. The biggest scientific advances often occur when what you measure does not agree with what you predict. What Kamerlingh Onnes measured for liquid helium, and particularly for Helium II, was so bizarre that he must have wondered at first what was wrong with his measuring equipment.
One of the things that he measured was viscosity. Viscosity is the gooeyness of a substance, though there are more scientific definitions. We usually think of viscosity as applying to something like oil or molasses, but non-gooey substances like water and alcohol have well-defined viscosities.
Onnes tried to determine a value of viscosity for Helium II down around 1 K. He failed. It was too small to measure. As the temperature goes below 2 K, the viscosity of Helium II goes rapidly towards zero. It will flow with no measurable resistance through narrow capillaries and closely-packed powders. Above 2.2 K, the other form of liquid helium, known as Helium I, does have a measurable viscosity, low but highly temperature-dependent.
Helium II also conducts heat amazingly well. At about 1.9 K, where its conductivity is close to a maximum, this form of liquid helium conducts heat about eight hundred times as well as copper at room temperature—and copper is usually considered an excellent conductor. Helium II is in fact by far the best conductor of heat known.
More disturbing, perhaps, from the experimenter's point of view is Helium II's odd reluctance to be confined. In an open vessel, the liquid creeps in the form of a thin film up the sides of the container, slides out over the rim, and runs down to the lowest available level. This behavior follows from Helium II's vanishing viscosity, combined with the strong attraction of helium atoms to the walls of the container; but it remains a striking effect to observe.
Liquid helium is not the end of the low-temperature story, and the quest for absolute zero is an active and fascinating field that continues today. New methods of extracting energy from test substances are still being developed, with the most effective ones employing a technique known as adiabatic demagnetization. Invented independently in 1926 by a German, Debye, and an American, Giauque, it was first used by Giauque and MacDougall in 1933, to reach a temperature of 0.25 K. A more advanced version of the same method was applied to nuclear adiabatic demagnetization in 1956 by Simon and Kurti, and they achieved a temperature within a hundred thousandth of a degree of absolute zero. With the use of this method, temperatures as low as a few billionths of a degree have been attained.
However, the pursuit of absolute zero is not our main objective, and to pursue it further would take us too far afield. We are interested in another effect that Kamerlingh Onnes found in 1911, when he examined the electrical properties of selected materials immersed in a bath of liquid helium. He discovered that certain pure metals exhibited what is known today as superconductivity.
Below a few Kelvins, the resistance to the passage of an electrical current in these metals drops suddenly to a level too small to measure. Currents that are started in wire loops under these conditions continue to flow, apparently forever, with no sign of dissipation of energy. For pure materials, the cutoff temperature between normal conducting and superconducting is quite sharp, occurring within a couple of hundredths of a degree. Superconductivity today is a familiar phenomenon. At the time when it was discovered, it was an absolutely astonishing finding—a physical impossibility, less plausible than anti-gravity. Frictional forces must slow all motion, including the motion represented by the flow of an electrical current. Such a current could not therefore keep running, year after year, without dissipation. That seemed like a fundamental law of nature.
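The seeming impossibility can be put in numbers. A current circulating in a closed loop of inductance L and resistance R decays as I(t) = I0·exp(-Rt/L); the component values below are illustrative assumptions, not measurements:

```python
import math

# Decay of a current in a closed loop: I(t) = I0 * exp(-R*t/L).
# The resistance and inductance values are illustrative assumptions.
def current(t, i0=1.0, resistance=1e-3, inductance=1e-3):
    """Current (amperes) remaining after t seconds."""
    return i0 * math.exp(-resistance * t / inductance)

# With even a milliohm of resistance the current is gone in seconds...
print(current(10.0))                  # essentially zero
# ...but with R exactly zero, it never decays at all.
print(current(10.0, resistance=0.0))  # 1.0, at any time t whatsoever
```

Persistent currents in superconducting loops are, in effect, the experimental statement that R is zero to within anything we can measure.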
Of course, there is no such thing as a law of nature. There is only the Universe, going about its business, while humans scurry around trying to put everything into neat little intellectual boxes. It is amazing that the tidying-up process called physics works as well as it does, and perhaps even more astonishing that mathematics seems important in every box. But the boxes have no reality or permanence; a "law of nature" is useful until we discover cases where it doesn't apply.
In 1911, the general theories that could explain superconductivity were still decades in the future. The full explanation did not arrive until 1957, forty-six years after the initial discovery.
To understand superconductivity, and to explain its seeming impossibility, it is necessary to look at the nature of electrical flow itself.
2.10 Meanwhile, electricity. While techniques were being developed to reach lower and lower temperatures, the new field of electricity and magnetism was being explored in parallel—sometimes by the same experimenters. Just three years before the Scotsman, William Cullen, found how to cool ethyl ether by boiling it under reduced pressure, von Kleist of Pomerania and van Musschenbroek in Holland independently discovered a way to store electricity. Van Musschenbroek did his work at the University of Leyden—the same university where, 166 years later, Kamerlingh Onnes would discover superconductivity. The Leyden Jar, as the storage vessel soon became known, was an early form of electrical capacitor. It allowed the flow of current through a wire to take place under controlled and repeatable circumstances.
Just what it was that constituted the current through that wire would remain a mystery for another century and a half. But it was already apparent to Ben Franklin by 1750 that something material was flowing. The most important experiments took place three-quarters of a century later. In 1820, just three years before Michael Faraday liquefied chlorine, the Danish scientist Hans Christian Oersted and then the Frenchman André Marie Ampère found that there was a relation between electricity and magnetism—a flowing current would make a magnet move. In the early 1830s, Faraday then showed that the relationship was a reciprocal one, by producing an electric current from a moving magnet. However, from our point of view an even more significant result had been established a few years before, when in 1827 the German scientist Georg Simon Ohm discovered Ohm's Law: that the current in a wire equals the voltage between the ends of the wire divided by the wire's resistance.
This result seemed too simple to be true. When Ohm announced it, no one believed him. He was discredited, resigned his teaching position in Cologne, and lived in poverty and obscurity for several years. Finally he was vindicated and recognized, and fourteen years after his discovery he began to receive awards and medals for his work.
Ohm's Law is important to us because the resistance of a substance does not depend on the particular values of the voltage or the current. Thus it becomes a simple matter to study the dependence of resistance on temperature. It turns out that the resistance of a conducting material is roughly proportional to its absolute temperature. Just as important, materials vary enormously in their conducting power. For instance, copper allows electricity to pass through it 10^20 times as well as quartz or rubber. The obvious question is, why? What makes a good conductor, and what makes a good insulator? And why should a conductor pass electricity more easily at lower temperatures?
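Ohm's Law, together with the rough proportionality of resistance to absolute temperature, fits in a few lines. The reference resistance and temperatures below are assumed values for illustration:

```python
def current(voltage, resistance):
    """Ohm's Law: I = V / R (amperes, volts, ohms)."""
    return voltage / resistance

def resistance_at(t_kelvin, r0=1.0, t0=293.0):
    """Metallic resistance taken as roughly proportional to absolute
    temperature; r0 at t0 is an assumed reference point."""
    return r0 * (t_kelvin / t0)

# Cooling a wire from room temperature (293 K) to 20 K cuts its
# resistance, and so raises the current at fixed voltage, about 15-fold.
print(current(1.0, resistance_at(293.0)))  # 1.0 A at room temperature
print(current(1.0, resistance_at(20.0)))   # about 14.65 A at 20 K
```

The simple proportionality fails near absolute zero, which is exactly where the story of superconductivity begins.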
The answers to these questions were developed little by little through the rest of the nineteenth century. First, heat was discovered to be no more than molecular and atomic motion. Thus changes of electrical resistance had somehow to be related to those same motions.
Second, in the 1860s, Maxwell, the greatest physicist of the century, developed Faraday and Ampère's experimental results into a consistent and complete mathematical theory of electricity and magnetism, finally embodied in four famous differential equations. All observed phenomena of electricity and magnetism must fit into the framework of that theory.
Borderlands of Science Page 5