The chlorine applied during water disinfection can react with organic compounds in the water to produce trihalomethanes and haloacetic acids, which have the potential to cause cancer. However, the risk of these compounds is low when compared with the risk of waterborne diseases. Alternatives to chlorination include disinfection with ozone, chloramine, and ultraviolet light.
According to spokespeople for the Darnall Army Medical Center, “It is safe to say that more lives have been saved and more sickness prevented by Darnall’s contribution to sanitary water than by any other single achievement in medicine.”
SEE ALSO Sewage Systems (c. 600 BCE), Germ Theory of Disease (1862), Antiseptics (1865).
LEFT: Water well in a medieval village in Spain. RIGHT: A water well at the Great Mosque of Kairouan, Tunisia (postcard from 1900, overlaid on patterns from the door of the prayer hall). In modern times, wells are sometimes periodically cleaned with a chlorine solution to reduce bacterial levels.
1910
Main Sequence • Jim Bell
Ejnar Hertzsprung (1873–1967), Henry Norris Russell (1877–1957)
In the early part of the twentieth century, astronomers worldwide were characterizing and classifying enormous numbers of stars in terms of their colors and spectroscopic lines, expanding on the methods pioneered by Edward Pickering’s group at Harvard. Among the most important advances was the observation, made independently by the Danish astronomer Ejnar Hertzsprung and the American astronomer Henry Norris Russell, that when the spectral classes or temperatures of stars are plotted against their actual brightness (that is, their apparent brightness in the sky corrected for their distance from us), most of the stars cluster in a broad sequence from upper left to lower right. Hertzsprung coined the term “main sequence” to describe this prominent trend among the stars. Such plots came into use around 1910 and are called Hertzsprung-Russell (H-R) diagrams.
Over the next few decades astronomers began to understand that the main sequence was more than just a random clustering—it represents an evolutionary pathway for tracking the age and eventual fate of the stars. Most stars are born when their central pressures and temperatures are high enough for the nuclear fusion of hydrogen atoms into helium. During this hydrogen-fusing phase of its lifetime, a normal star will plot on the main sequence at a position that depends on its mass, with luminous stars a few to ten times the mass of the Sun (blue giants) on the upper left end of the plot and dim stars from about one-tenth to one-half the Sun’s mass (red dwarfs) on the lower right. As stars age and run out of hydrogen fuel, they diverge off the main sequence and eventually “die” in characteristic (and often spectacular) ways that again depend on their mass.
As the details of stellar interiors later became understood by astrophysicists such as Arthur Eddington and Hans Bethe, it became possible to predict how stars of specific masses would live and die. Our Sun turns out to be an average-mass, middle-aged, main-sequence star that appears destined, in about 5 billion more years, to bloat up into a red giant, expel its outer layers into a planetary nebula, and then fade away as a white dwarf.
SEE ALSO Black Holes (1783), Pauli Exclusion Principle (1925), Neutron (1932).
Plots of the intrinsic luminosity of stars (on the y axis, normalized so the Sun’s luminosity = 1) versus their color—or equivalently, their temperature (on the x axis)—reveal a prominent diagonal band of stars known as the main sequence, bracketed by brighter blue and red giants and dimmer white dwarfs.
1911
Atomic Nucleus • Clifford A. Pickover
Ernest Rutherford (1871–1937)
Today, we know that the atomic nucleus, which consists of protons and neutrons, is the very dense region at the center of an atom. However, in the first decade of the 1900s, scientists were unaware of the nucleus and thought of the atom as a diffuse web of positively charged material, in which negatively charged electrons were embedded like cherries in a cake. This model was utterly destroyed when Ernest Rutherford and his colleagues discovered the nucleus after they fired a beam of alpha particles at a thin sheet of gold foil. Most of the alpha particles (which we know today as helium nuclei) went through the foil, but a few bounced straight back. Rutherford said later that this was “quite the most incredible event that has ever happened to me. . . . It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.”
The cherry-cake model of the atom, which implied a fairly uniform spread of positive charge across the gold foil, could never account for this behavior. At most, scientists would have expected the alpha particles to slow down, like bullets fired into water; they did not expect the atom to have a “hard center” like a pit in a peach. In 1911, Rutherford announced a model with which we are familiar today: an atom consisting of a positively charged nucleus encircled by electrons. From the frequency of collisions with the nucleus, Rutherford could estimate the size of the nucleus relative to the size of the atom. Author John Gribbin writes that the nucleus has “one hundred-thousandth of the diameter of the whole atom, equivalent to the size of a pinhead compared with the dome of St. Paul’s Cathedral in London. . . . And since everything on Earth is made of atoms, that means that your own body, and the chair you sit on, are each made up of a million billion times more empty space than ‘solid matter.’”
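As a quick back-of-the-envelope check (not part of the original text), Gribbin’s “million billion” figure follows from cubing the diameter ratio, since volume scales with the cube of length; the sketch below simply illustrates the arithmetic:

```python
# Illustrative arithmetic only: if the nucleus spans about 1/100,000 of the
# atom's diameter, the ratio of the atom's volume to the nucleus's volume is
# that factor cubed, because volume scales with the cube of length.
diameter_ratio = 1e5                 # atom diameter / nucleus diameter (approximate)
volume_ratio = diameter_ratio ** 3   # ~1e15
print(f"{volume_ratio:.0e}")         # prints 1e+15, i.e., a million billion
```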
SEE ALSO Atomic Theory (1808), Electron (1897), E = mc2 (1905), Bohr Atom (1913), Neutron (1932), Nuclear Magnetic Resonance (1938), Energy from the Nucleus (1942), Stellar Nucleosynthesis (1946).
Artistic rendition of a classical model of the atom with its central nucleus. Only some of the nucleons (protons and neutrons) and electrons are seen in this view. In an actual atom, the diameter of the nucleus is very much smaller than the diameter of the entire atom. Modern depictions of the surrounding electrons often depict them as clouds that represent probability densities.
1911
Superconductivity • Clifford A. Pickover
Heike Kamerlingh Onnes (1853–1926), John Bardeen (1908–1991), Karl Alexander Müller (b. 1927), Leon N. Cooper (b. 1930), John Robert Schrieffer (b. 1931), Johannes Georg Bednorz (b. 1950)
“At very low temperatures,” writes science-journalist Joanne Baker, “some metals and alloys conduct electricity without any resistance. The current in these superconductors can flow for billions of years without losing any energy. As electrons become coupled and all move together, avoiding the collisions that cause electrical resistance, they approach a state of perpetual motion.”
In fact, many metals exist for which the resistivity is zero when they are cooled below a critical temperature. This phenomenon, called superconductivity, was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes, who observed that when he cooled a sample of mercury to 4.2 degrees above absolute zero (−452.1°F), its electrical resistance plunged to zero. In principle, this means that an electrical current can flow around a loop of superconducting wire forever, with no external power source. In 1957, American physicists John Bardeen, Leon Cooper, and Robert Schrieffer determined how electrons could form pairs and appear to ignore the metal around them: Consider a metal window screen as a metaphor for the arrangement of positively charged atomic nuclei in a metal lattice. Next, imagine a negatively charged electron zipping between the atoms, creating a distortion by pulling on them. This distortion attracts a second electron to follow the first; they travel together in a pair, and encounter less resistance overall.
In 1986, Georg Bednorz and Alex Müller discovered a material that operated at the higher temperature of roughly −396°F (35 kelvins), and in 1987 a different material was found to superconduct at −297°F (90 kelvins). If a superconductor is discovered that operates at room temperature, it could be used to save vast amounts of energy and to create a high-performance electrical power transmission system. Superconductors also expel all applied magnetic fields, which allows engineers to build magnetically levitated trains. Superconductivity is also used to create powerful electromagnets in MRI (magnetic-resonance imaging) scanners in hospitals.
SEE ALSO Battery (1800), Electron (1897), Nuclear Magnetic Resonance (1938).
In 2008, physicists at the U.S. Department of Energy’s Brookhaven National Laboratory discovered interface high-temperature superconductivity in bilayer films of two cuprate materials—with potential for creating higher-efficiency electronic devices. In this artistic rendition, these thin films are built layer by layer.
1912
Bragg’s Law of Crystal Diffraction • Clifford A. Pickover
William Henry Bragg (1862–1942), William Lawrence Bragg (1890–1971)
“I was captured for life by chemistry and by crystals,” wrote X-ray crystallographer Dorothy Crowfoot Hodgkin, whose research depended on Bragg’s Law. Discovered by the English physicists Sir W. H. Bragg and his son Sir W. L. Bragg in 1912, Bragg’s Law explains the results of experiments involving the diffraction of electromagnetic waves from crystal surfaces. Bragg’s Law provides a powerful tool for studying crystal structure. For example, when X-rays are aimed at a crystal surface, they interact with atoms in the crystal, causing the atoms to re-radiate waves that may interfere with one another. The interference is constructive (reinforcing) for integer values of n according to Bragg’s Law: nλ = 2d sin(θ). Here, λ is the wavelength of the incident electromagnetic waves (e.g., X-rays); d is the spacing between the planes in the atomic lattice of the crystal; and θ is the angle between the incident ray and the scattering planes.
For example, X-rays travel down through crystal layers, reflect, and travel back over the same distance before leaving the surface. The distance traveled depends on the separation of the layers and the angle at which the X-ray entered the material. For maximum intensity of the reflected waves, the waves must stay in phase to produce constructive interference. Two waves remain in phase after reflection when n is a whole number. For example, when n = 1, we have a “first order” reflection; for n = 2, we have a “second order” reflection. If only two rows were involved in the diffraction, then as the value of θ changes, the transition from constructive to destructive interference is gradual. However, if interference from many rows occurs, then the constructive interference peaks become sharp, with mostly destructive interference occurring in between the peaks.
Bragg’s Law can be used for calculating the spacing between atomic planes of crystals and for measuring the radiation’s wavelength. The observations of X-ray wave interference in crystals, commonly known as X-ray diffraction, provided direct evidence for the periodic atomic structure of crystals that had been postulated for several centuries.
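As a rough illustration (not from the original text, and with illustrative values for the wavelength and plane spacing), the sketch below solves nλ = 2d sin(θ) for the angles at which constructive interference occurs:

```python
import math

def bragg_angles(wavelength_nm, d_nm, max_order=3):
    """Angles (degrees) satisfying Bragg's Law n*lambda = 2*d*sin(theta)."""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength_nm / (2 * d_nm)     # sin(theta) for order n
        if s <= 1:                             # only orders with a real solution exist
            angles.append((n, math.degrees(math.asin(s))))
    return angles

# Illustrative example: ~0.154 nm X-rays on planes spaced 0.2 nm apart
for n, theta in bragg_angles(0.154, 0.2):
    print(f"order n={n}: theta = {theta:.1f} degrees")
# The first-order (~22.6 deg) and second-order (~50.4 deg) reflections occur;
# the third order would require sin(theta) > 1, so it does not appear.
```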
SEE ALSO Wave Nature of Light (1801), X-rays (1895), Hologram (1947).
LEFT: Copper sulfate. In 1912, physicist Max von Laue used X-rays to record a diffraction pattern from a copper sulfate crystal, which revealed many well-defined spots. Prior to X-ray experiments, the spacing between atomic lattice planes in crystals was not accurately known. RIGHT: Bragg’s Law eventually led to studies involving the X-ray scattering from crystal structures of large molecules such as enzymes. Shown here is a model of the human cytochrome P450 liver enzyme, which plays a role in drug detoxification.
1912
Continental Drift • Michael C. Gerald with Gloria E. Gerald
Alexander von Humboldt (1769–1859), Alfred Wegener (1880–1930)
Even casual inspection of a map of the Southern Hemisphere suggests that the coastlines of eastern South America and western Africa fit together like the pieces of a jigsaw puzzle. This same thought occurred to the naturalist-explorer Alexander von Humboldt, who, during the early 1800s, found similarities between animal and plant fossils in South America and western Africa, and common elements between the mountain ranges in Argentina and South Africa. Subsequent explorers saw similarities between fossils in India and Australia.
In 1912, the German geophysicist-meteorologist and polar explorer Alfred Wegener went a step further and proposed that the present continents were once fused into a single landmass, which he called Pangaea (“All-Lands”). Expanding upon this theory in his 1915 book, The Origin of Continents and Oceans, Wegener described how Pangaea subsequently split into two supercontinents, Laurasia (corresponding to the present-day Northern Hemisphere) and Gondwanaland, also called Gondwana (Southern Hemisphere)—an event now thought to have occurred 180 to 200 million years ago. Wegener could not provide an explanation for continental drift, and his concept was roundly rejected until after his death in 1930 from heart failure during an expedition to Greenland. The occurrence of continental drift was finally accepted in the 1960s, when the concept of plate tectonics—involving plates that are in constant motion relative to each other, sliding under other plates and pulling apart—was established.
Long before continental drift was acknowledged by the scientific community, naturalists were finding ancient fossils of the same or similar plants and animals on continents thousands of miles apart and separated by oceans. Fossil remains of the seed fern Glossopteris were found in South America, Africa, India, and Australia, while those of the kannemeyeriids, a family of mammal-like reptiles, were uncovered in Africa, Asia, and South America. By contrast, some living plants and animals on different continents are very different from one another. For example, nearly all the native mammals in Australia are marsupials rather than placental mammals, which suggests that Australia split off from Gondwanaland before placental mammals evolved.
SEE ALSO Darwin and the Voyages of the Beagle (1831), Fossil Record and Evolution (1836), Darwin’s Theory of Natural Selection (1859).
According to the theory of continental drift, a single giant landmass, Pangaea, split into the two supercontinents, Laurasia (Northern Hemisphere) and Gondwana (Southern Hemisphere).
1913
Bohr Atom • Clifford A. Pickover
Niels Henrik David Bohr (1885–1962)
“Somebody once said about the Greek language that Greek flies in the writings of Homer,” writes physicist Amit Goswami. “The quantum idea started flying with the work of Danish physicist Niels Bohr published in the year 1913.” Bohr knew that negatively charged electrons are easily removed from atoms and that the positively charged nucleus occupied the central portion of the atom. In the Bohr model of the atom, the nucleus was considered to be like our central Sun, with electrons orbiting like planets.
Such a simple model was bound to have problems. For example, an orbiting electron would be expected to emit electromagnetic radiation; as it lost energy, its orbit should decay, and the electron should spiral into the nucleus. In order to avoid this atomic collapse, as well as to explain various aspects of the emission spectra of the hydrogen atom, Bohr postulated that electrons could not orbit at arbitrary distances from the nucleus. Rather, they were restricted to particular allowed orbits, or shells. Just as on a ladder, the electron could jump to a higher rung, or shell, when it received an energy boost, or fall to a lower shell, if one existed. This hopping between shells takes place only when a photon of precisely the corresponding energy is absorbed or emitted by the atom. Today we know that the model has many shortcomings: it does not work for larger atoms, and it violates the Heisenberg Uncertainty Principle because it employs electrons with a definite mass and velocity in orbits with definite radii.
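To make the “particular energy” of these photon jumps concrete, here is a minimal sketch (not from the original text) using the standard Bohr-model result that hydrogen’s energy levels are approximately −13.6 eV divided by the square of the shell number; the energy difference between two shells fixes the wavelength of the emitted photon:

```python
# Bohr-model hydrogen (illustrative sketch): E_n = -13.6 eV / n^2.
# A photon is emitted when the electron drops from shell n_hi to shell n_lo,
# carrying away exactly the energy difference between the two shells.
RYDBERG_EV = 13.6        # approximate hydrogen ground-state binding energy, in eV
HC_EV_NM = 1239.84       # Planck's constant times the speed of light, in eV*nm

def emission_wavelength_nm(n_hi, n_lo):
    """Wavelength (nm) of the photon emitted in the transition n_hi -> n_lo."""
    photon_energy_ev = RYDBERG_EV * (1 / n_lo**2 - 1 / n_hi**2)
    return HC_EV_NM / photon_energy_ev

# Balmer series (drops to n = 2) lands in the visible part of the spectrum:
for n in (3, 4, 5):
    print(f"{n} -> 2: {emission_wavelength_nm(n, 2):.0f} nm")
# Expect roughly 656, 486, and 434 nm (red, blue-green, and violet lines)
```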
Physicist James Trefil writes, “Today, instead of thinking of electrons as microscopic planets circling a nucleus, we now see them as probability waves sloshing around the orbits like water in some kind of doughnut-shaped tidal pool governed by Schrödinger’s Equation. . . . Nevertheless, the basic picture of the modern quantum mechanical atom was painted back in 1913, when Niels Bohr had his great insight.” Matrix mechanics—the first complete definition of quantum mechanics—later replaced the Bohr Model and better described the observed transitions in energy states of atoms.
SEE ALSO Electron (1897), Atomic Nucleus (1911), Pauli Exclusion Principle (1925), Schrödinger’s Wave Equation (1926), Heisenberg Uncertainty Principle (1927).
These amphitheater seats in Ohrid, Macedonia, are a metaphor for Bohr’s electron orbits. According to Bohr, electrons could not be in orbits with an arbitrary distance from the nucleus; rather, electrons were restricted to particular allowed shells associated with discrete energy levels.
1915
General Theory of Relativity • Clifford A. Pickover
Albert Einstein (1879–1955)
Albert Einstein once wrote that “all attempts to obtain a deeper knowledge of the foundations of physics seem doomed to me unless the basic concepts are in accordance with general relativity from the beginning.” In 1915, ten years after Einstein proclaimed his Special Theory of Relativity (which suggested that distance and time are not absolute and that one’s measurements of the ticking rate of a clock depend on one’s motion with respect to the clock), Einstein gave us an early form of his General Theory of Relativity (GTR), which explained gravity from a new perspective. In particular, Einstein suggested that gravity is not really a force like other forces, but results from the curvature of space-time caused by masses in space-time. Although we now know that GTR does a better job than Newtonian mechanics at describing motions in strong gravitational fields (such as the orbit of Mercury about the Sun), Newtonian mechanics is still useful for describing the world of ordinary experience.