The Science Book

by Clifford A. Pickover


  If a bulb is operated at low voltages, it can be surprisingly long-lasting. For example, the “Centennial Light” in a California fire station has been burning almost continuously since 1901. Generally, incandescent lights are inefficient in the sense that about 90% of the power consumed is converted to heat rather than visible light. Although more efficient forms of lighting (e.g., compact fluorescent lamps) are now starting to replace incandescent bulbs, the simple incandescent bulb once replaced soot-producing and more dangerous lamps and candles, changing the world forever.

  SEE ALSO Wave Nature of Light (1801), Fiber Optics (1841), Electromagnetic Spectrum (1864).

  Edison light bulb with a looping carbon filament.

  1878

  Power Grid • Marshall Brain

  In 1878, at the Paris World’s Fair, visitors marveled at the Yablochkov arc lamps (patented by Pavel Yablochkov in 1876) powered by Zénobe Gramme dynamos. This was an example of an early commercial system of high-voltage power—the kind of power grid that exists invisibly behind the scenes around the world today.

  It is possible to imagine a society where there is no power grid—where every home and business generates its own power on-site. But this approach has efficiency problems. A big power plant can realize economies of scale when purchasing fuel and can apply significant resources to emission controls. Advanced technologies like nuclear power are not possible without a big power plant. And site-specific power sources like hydropower, solar power, and wind turbines only really make sense if there is a grid. A power grid can also improve reliability. When a big power plant needs to go offline for maintenance, other power plants in the area use the grid to make up the load.

  It is amazing to realize that the power grid has only two key components: wire and transformers. Transformers step voltages up and down. For long-distance transmission, transformers boost the voltage to 700,000 volts or more. Once the power arrives at its destination, transformers step the voltage back down. It might travel at 40,000 volts in a community, and then 3,000 volts in a neighborhood. At your house, a final transformer brings it to 240 and 120 volts for use in your wall outlets and light switches.
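
  The reason for the step-up is that, for a fixed power P, the current in the line is I = P/V, and the heat wasted in conductors of resistance R is I²R, so multiplying the voltage by ten divides the loss by one hundred. The short Python sketch below works through this arithmetic; the delivered power and line resistance are hypothetical values chosen only for illustration, not figures from the text.

```python
# Illustrative sketch: why transmission voltage is stepped up.
# Assumed (hypothetical) values: 100 MW delivered over a line with
# 10 ohms of total resistance. Conductor loss is I^2 * R, with I = P / V.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss in the line for a given delivered power and voltage."""
    current_a = power_w / voltage_v          # I = P / V
    return current_a ** 2 * resistance_ohm   # I^2 * R

POWER_W = 100e6        # 100 MW (assumed)
RESISTANCE_OHM = 10.0  # total line resistance (assumed)

for voltage in (40_000, 240_000, 700_000):
    loss = line_loss_watts(POWER_W, voltage, RESISTANCE_OHM)
    print(f"{voltage:>8,} V -> loss {loss / 1e6:6.2f} MW "
          f"({100 * loss / POWER_W:.2f}% of the power sent)")
```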

  The grid is not perfect, and occasionally we see widespread blackouts. On a sweltering summer day with the whole grid running at peak load, a failure in a key transmission line can cause an irresolvable problem. Other lines try to pick up the load from the failed line, but they overload and fail in turn. A ripple effect can leave several states in the dark. Engineers are working on new grid architectures to prevent this kind of cascading failure. Once perfected, the grid will be even more invisible.
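
  The ripple effect can be illustrated with a toy calculation: give each transmission line a capacity, share the total load among the lines still in service, and trip any line pushed past its limit. The Python sketch below uses made-up capacities and loads purely for illustration; it is not a model of any real grid.

```python
# Toy cascading-failure sketch: surviving lines share a fixed total load equally.
# Capacities and load are arbitrary illustrative numbers.

def cascade(total_load: float, capacities: list[float]) -> list[float]:
    """Trip overloaded lines round after round; return the lines still in service."""
    live = list(capacities)
    while live:
        share = total_load / len(live)            # load carried per surviving line
        if all(c >= share for c in live):
            return live                           # the grid settles down
        live = [c for c in live if c >= share]    # overloaded lines trip offline
    return []                                     # total blackout

lines = [120.0, 100.0, 90.0, 80.0]    # line capacities (arbitrary units)
print(cascade(300.0, lines))          # all four lines survive
print(cascade(300.0, lines[1:]))      # lose the biggest line: cascade to blackout
```

  In the second call, dropping the strongest line pushes the survivors past their limits one round after another, which is the pattern behind real cascading blackouts.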

  SEE ALSO Von Guericke’s Electrostatic Generator (1660), Coulomb’s Law of Electrostatics (1785), Battery (1800), Electron (1897).

  A transmission tower often makes use of a steel lattice that supports an overhead power line. Without a power grid, energy would have to be generated on site.

  1887

  Michelson-Morley Experiment • Clifford A. Pickover

  Albert Abraham Michelson (1852–1931) and Edward Williams Morley (1838–1923)

  “It is hard to imagine nothing,” physicist James Trefil writes. “The human mind seems to want to fill empty space with some kind of material, and for most of history that material was called the aether. The idea was that the emptiness between celestial objects was filled with a kind of tenuous Jell-O.”

  In 1887, physicists Albert Michelson and Edward Morley conducted pioneering experiments in order to detect the luminiferous aether thought to pervade space. The aether idea was not too crazy—after all, water waves travel through water and sound travels through air. Didn’t light also require a medium through which to propagate, even in an apparent vacuum? In order to detect the aether, the researchers split a light beam into two beams that traveled at right angles to each other. Both beams were reflected back and recombined to produce a striped interference pattern that depended on the time spent traveling in both directions. If the Earth moved through an aether, this should be detectable as a change in the interference pattern produced when one of the light beams (which had to travel into the aether “wind”) was slowed relative to the other beam. Michelson explained the idea to his daughter: “Two beams of light race against each other, like two swimmers, one struggling upstream and back, while the other, covering the same distance just crosses and returns. The second swimmer will always win if there is any current in the river.”
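
  The swimmer analogy can be made quantitative. In a rough sketch (the symbols L for arm length, v for the speed of the supposed aether wind, and c for the speed of light are introduced here only for illustration), classical reasoning predicts different round-trip times for the beam traveling along the wind and the beam traveling across it:

```latex
% Classical (aether-based) round-trip times for the two interferometer arms
t_{\parallel} = \frac{L}{c - v} + \frac{L}{c + v} = \frac{2L/c}{1 - v^{2}/c^{2}},
\qquad
t_{\perp} = \frac{2L}{\sqrt{c^{2} - v^{2}}} = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}}
```

  Because the along-the-wind time is slightly longer, rotating the apparatus by 90 degrees should shift the interference fringes by an amount proportional to v²/c². For the Earth’s orbital speed of about 30 km/s, that ratio is roughly one part in 100 million, tiny but large enough to have produced a measurable shift over the instrument’s long, folded light path. No such shift was found.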

  In order to make such fine measurements, vibrations were minimized by floating the apparatus on a pool of mercury, and the apparatus could be rotated relative to the motion of the Earth. No significant change in the interference patterns was found, suggesting that the Earth did not move through an “aether wind”—making the experiment the most famous “failed” experiment in physics. This finding helped to persuade other physicists to accept Einstein’s Special Theory of Relativity.

  SEE ALSO Wave Nature of Light (1801), Electromagnetic Spectrum (1864), Special Theory of Relativity (1905).

  The Michelson-Morley Experiment demonstrated that the Earth did not move through an aether wind. In the late 1800s, a luminiferous aether (the light-bearing substance artistically depicted here) was thought to be a medium for the propagation of light.

  1888

  Tesseract • Clifford A. Pickover

  Charles Howard Hinton (1853–1907)

  I know of no subject in mathematics that has intrigued both children and adults as much as the idea of a fourth dimension, a spatial direction different from all the directions of our everyday three-dimensional space. Theologians have speculated that the afterlife, heaven, hell, angels, and our souls could reside in a fourth dimension. Mathematicians and physicists frequently use the fourth dimension in their calculations. It’s part of important theories that describe the very fabric of our universe.

  The tesseract is the four-dimensional analog of the ordinary cube. The term hypercube is used more generally when referring to cube analogues in other dimensions. Just as a cube can be visualized by dragging a square into the third dimension and watching the shape that the square traces through space, a tesseract is produced by the trail of a cube moving into the fourth dimension. Although it is difficult to visualize a cube being shifted a distance in a direction perpendicular to all three of its axes, computer graphics often help mathematicians develop a better intuition for higher-dimensional objects. Note that a cube is bounded by square faces and a tesseract by cubical faces. We can count the corners, edges, faces, and solids for these objects: a square has 4 corners, 4 edges, and 1 face; a cube has 8 corners, 12 edges, 6 faces, and 1 solid; a tesseract has 16 corners, 32 edges, 24 faces, and 8 solids.
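
  These counts follow a simple rule: an n-dimensional cube has C(n, k) · 2^(n-k) faces of dimension k. The short Python sketch below (added here for illustration; it is not from the original text) tabulates the values just listed:

```python
from math import comb

def k_faces(n: int, k: int) -> int:
    """Number of k-dimensional faces of an n-dimensional cube: C(n, k) * 2**(n - k)."""
    return comb(n, k) * 2 ** (n - k)

names = ["corners", "edges", "faces", "solids"]
for n, shape in [(2, "square"), (3, "cube"), (4, "tesseract")]:
    counts = ", ".join(f"{k_faces(n, k)} {names[k]}" for k in range(n))
    print(f"{shape}: {counts}")
# square: 4 corners, 4 edges
# cube: 8 corners, 12 edges, 6 faces
# tesseract: 16 corners, 32 edges, 24 faces, 8 solids
```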

  The word tesseract was coined and first used in 1888 by British mathematician Charles Howard Hinton in his book A New Era of Thought. Hinton, a bigamist, was also famous for his set of colored cubes that he claimed could be used to help people visualize the fourth dimension. When used at séances, the Hinton cubes were thought to help people glimpse ghosts of dead family members.

  SEE ALSO Euclid’s Elements (c. 300 BCE), Projective Geometry (1639), Möbius Strip (1858).

  Rendering of a tesseract by Robert Webb using Stella4D software. The tesseract is the four-dimensional analog of the ordinary cube.

  1890

  Steam Turbine • Marshall Brain

  Sir Charles Parsons (1854–1931)

  If you go to any large power plant today, one of the landmarks will be a huge steam turbine bigger than a bus. You find steam turbines on aircraft carriers and nuclear submarines, too. With the steam turbine, engineers were able to reconceptualize the extraction of power from steam and thus abandon pistons.

  Let’s get in our time machine and go back to the engine room of the Titanic in 1912. Here they are using steam drawn from twenty-nine massive coal-fired boilers, and it flows into three steam engines driving three propellers. Two of these steam engines are gigantic piston machines that produce 30,000 hp (22 million watts) each, and the third is a steam turbine producing about half that. What we witness here is a period of transition. Steam turbines, first invented by Sir Charles Parsons in 1890, had not yet been perfected, but they would soon replace pistons as the means of extracting rotational energy from steam.

  The basic idea behind a steam turbine is extremely simple. The expanding steam turns a series of vanes attached to a shaft. The vanes get progressively larger, so that the steam’s energy can be captured as it expands. Compare that process to the Titanic’s piston engines, which use three cylinders of increasing size. The steam first expands in the smallest cylinder. Then it flows to the next cylinder, somewhat larger in size, to extract more power from the less dense exhaust of the first. Then it moves on to the third, even larger cylinder. This worked, but it made for a large and heavy piece of equipment: one steam piston engine on the Titanic weighed 1,000 tons.

  A steam turbine does the same job, but is much smaller, lighter, and more efficient than an equivalent steam piston engine. Modern steam turbines appear in almost every major coal-fired and nuclear power plant today because of these advantages. Instead of just three expansion chambers, the steam turbine can have many stages of vanes of increasing size to extract as much power as possible. This shows how engineers switch to completely new concepts to get better results.

  SEE ALSO Gears (c. 50), High-Pressure Steam Engine (1800), Carnot Engine (1824), Internal Combustion Engine (1908).

  Installation of a turbine blade on a steam turbine rotor being assembled in a factory. Contemporary turbines are so precisely made that they can only be constructed with computers.

  1890

  The Principles of Psychology • Wade E. Pickren

  William James (1842–1910)

  Artistic by temperament, William James bowed to his father’s wishes and was educated as a physician. He never practiced, however, and after a period of existential struggle he accepted an appointment as a lecturer at Harvard. There he pioneered the new field of psychology in America and wrote what proved to be the most influential text of his era, The Principles of Psychology. It took him twelve years to write, and after it was published in 1890, he wrote a friend, “Psychology is a damnable subject.”

  In Principles, James described psychology as the science of mental life. He wrote that the point of scientific psychology was to help us understand that consciousness and our minds evolved to help us adapt and survive in the world. Thus what consciousness does is more important than what it is or what it contains.

  How might the mind best be studied? In Germany, the first laboratory psychologists were using refined mechanical instruments such as the Hipp chronoscope to measure mental reactions. James rejected this approach, as he believed that one could never understand the complexity of human mental life by adding up its contents or by measuring the speed of reactions. James offered an alternative view of consciousness. In a beautiful and enduring metaphor, he said consciousness is like a stream, dynamic and ever changing. A person, he wrote, could never step into the same river twice. Thus no instrument could ever capture this experience.

  James also wrote about habit, calling it the “flywheel of society.” He proposed a theory of emotions, the idea that feelings follow behavior, which is now known as the James-Lange theory of emotions. (Carl Lange was a Danish physician who independently made the same suggestion at about the same time.) James also argued for a pragmatic, pluralistic view of truth; those things are true, he argued, that help us in life.

  James and his book have been the greatest influence on the development of American psychology to date. To indicate his breadth, it is worth noting his obituary headline in the New York Times: “William James Dies; Great Psychologist, brother of novelist, and foremost American philosopher was 68 years old. Long Harvard professor, virtual founder of modern American psychology, and exponent of pragmatism, dabbled in spooks.”

  SEE ALSO Psychoanalysis (1899), Classical Conditioning (1903), Placebo Effect (1955), Cognitive Behavior Therapy (1963), Theory of Mind (1978).

  LEFT: William James, c. 1890s. RIGHT: Tiber River at Rome, Italy, 2009. James used the phrase “stream of consciousness” as a metaphor for constantly changing mental processes.

  1891

  Neuron Doctrine • Clifford A. Pickover

  Heinrich Wilhelm Gottfried von Waldeyer-Hartz (1836–1921), Camillo Golgi (1843–1926), Santiago Ramón y Cajal (1852–1934)

  According to neurobiologist Gordon Shepherd, the Neuron Doctrine is “one of the great ideas of modern thought [comparable to] quantum theory and relativity in physics [and] the periodic table and the chemical bond in chemistry.” Having its birth in microscopy studies in the late 1800s, the Neuron Doctrine posits that distinct cells called neurons serve as the functional signaling units of the nervous system and that neurons connect to one another in several precise ways. The doctrine was formally stated in 1891 by German anatomist Wilhelm von Waldeyer-Hartz, based on the observations of Spanish neuroscientist Santiago Ramón y Cajal, Italian pathologist Camillo Golgi, and other scientists. Cajal improved upon Golgi’s special silver stains, which allowed researchers to visualize under the microscope the incredible detail of a cell’s branching processes.

  Although modern scientists have found exceptions to the original doctrine, most neurons contain dendrites, a soma (cell body), and an axon (which can be as long as three feet, or one meter!). In many cases, signals may propagate from one neuron to another by neurotransmitter chemicals that leave the axon of one neuron, travel through a tiny junction space called a chemical synapse, and then enter the dendrite of an adjacent neuron. If the net excitation caused by signals impinging on a neuron is sufficiently large, the neuron generates a brief electrical pulse called an action potential that travels along the axon. Electrical synapses, called gap junctions, also exist and create a direct connection between neurons.
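
  The threshold behavior just described is often illustrated with a leaky integrate-and-fire model, a deliberately simplified caricature of a neuron. The Python sketch below uses made-up parameters purely for illustration: incoming excitation accumulates on the cell, leaks away over time, and triggers a spike whenever it crosses a threshold.

```python
# Toy leaky integrate-and-fire neuron: the membrane "voltage" accumulates input,
# leaks toward rest each step, and emits a spike when it crosses a threshold.
# All parameters are arbitrary illustrative values.

THRESHOLD = 1.0   # firing threshold
LEAK = 0.9        # fraction of voltage retained each time step
RESET = 0.0       # voltage after a spike

def simulate(inputs):
    """Return the time steps at which the model neuron fires."""
    voltage, spikes = 0.0, []
    for t, excitation in enumerate(inputs):
        voltage = voltage * LEAK + excitation   # integrate input, with leak
        if voltage >= THRESHOLD:                # net excitation is large enough...
            spikes.append(t)                    # ...so fire an action potential
            voltage = RESET
    return spikes

print(simulate([0.1] * 20))   # weak input: [] (never reaches threshold)
print(simulate([0.4] * 20))   # stronger input: [2, 5, 8, 11, 14, 17]
```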

  Sensory neurons carry signals from sensory receptor cells in the body to the brain. Motor neurons transmit signals from the brain to muscles. Glial cells provide structural and metabolic support to the neurons. While neurons in adults tend not to reproduce, new connections between neurons can form throughout life. Each of the roughly 100 billion neurons can have more than 1,000 synaptic connections.

  Multiple sclerosis results from insufficient myelin (chemical insulation) around axons. Parkinson’s disease is associated with a lack of the neurotransmitter dopamine, normally produced by certain neurons in the midbrain.

  SEE ALSO Cerebral Localization (1861), Antidepressant Medications (1957), Brain Lateralization (1964).

  LEFT: Cajal’s complex drawing of a Purkinje neuron in a cat’s cerebellar cortex. RIGHT: Neuron with multiple dendrites, a cell body, and one long axon. The capsulelike bulges on the axon are myelin-sheath cells.

  1892

  Discovery of Viruses • Clifford A. Pickover

  Martinus Willem Beijerinck (1851–1931), Dimitri Iosifovich Ivanovsky (1864–1920)

  Science journalist Robert Adler writes, “Rabies, smallpox, yellow fever, dengue fever, poliomyelitis, influenza, AIDS . . . The list [of diseases caused by viruses] reads like a catalog of human misery. . . . The scientists who deciphered the secrets of viruses were literally groping in the dark, trying to understand something they could not see . . . and, for many years, could not even imagine.”

  Viruses exist in a strange realm between the living and nonliving. They do not possess all of the required molecular machinery to reproduce themselves on their own, but once they infect animals, plants, fungi, or bacteria, they can hijack their hosts in order to generate numerous viral copies. Some viruses may coax their host cells into uncontrolled multiplication, leading to cancer. Today, we know that most viruses are too small to be seen by an ordinary light microscope, given that an average virus is about one one-hundredth the size of the average bacterium. The virions (virus particles) consist of genetic material in the form of DNA or RNA and an outer protein coating. Some viruses also have an envelope of lipids (small organic molecules) when the virus is outside a host cell.

  In 1892, Russian biologist Dimitri Ivanovsky took one of the early steps toward understanding viruses when he investigated the cause of tobacco mosaic disease, which destroys tobacco leaves. He filtered an extract of crushed diseased leaves by using a fine porcelain filter designed to trap all bacteria. To his surprise, the fluid that had flowed through the filter was still infectious. However, Ivanovsky never understood the true viral cause, believing that toxins or bacterial spores might be the causative agent. In 1898, Dutch microbiologist Martinus Beijerinck performed a similar experiment and believed that this new kind of infectious agent was liquid in nature, referring to it as “soluble living germs.” Later researchers were able to grow viruses in media containing guinea pig cornea tissue, minced hen kidneys, and fertilized chicken eggs. It was not until the 1930s that viruses could finally be seen via the electron microscope.

  SEE ALSO Germ Theory of Disease (1862), HeLa Cells (1951), Structure of Antibodies (1959).

  Most animal viruses are symmetrical (e.g., icosahedral) and nearly spherical in shape, as depicted in this artistic representation. Viruses are generally much smaller than bacteria.

 
