The Science Book
1935
EPR Paradox • Clifford A. Pickover
Albert Einstein (1879–1955), Boris Podolsky (1896–1966), Nathan Rosen (1909–1995), Alain Aspect (b. 1947)
Quantum entanglement (QE) refers to an intimate connection between quantum particles, such as between two electrons or two photons. Once the pair of particles is entangled, a particular kind of change to one of them is reflected instantly in the other, and it doesn’t matter if the pair is separated by inches or by interplanetary distances. This entanglement is so counterintuitive that Albert Einstein referred to it as being “spooky” and thought that it demonstrated a flaw in quantum theory, particularly the Copenhagen interpretation, which suggested that quantum systems, in a number of contexts, exist in a probabilistic limbo until observed and only then take on a definite state.
In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen published a paper on their famous EPR paradox. Imagine two particles that are emitted by a source so that their spins are in a quantum superposition of opposite states, labeled + and −. Neither particle has a definite spin before measurement. The particles fly apart, one going to Florida and the other to California. According to QE, if scientists in Florida measure the spin and find a +, the California particle instantly assumes the − state, even though the speed of light prohibits faster-than-light (FTL) communication. Note, however, that no FTL communication of information has actually occurred: Florida cannot use entanglement to send messages to California, because the Florida scientists cannot choose the outcome of their measurement; their particle has a 50-50 chance of being found in the + or − state.
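A minimal sketch of the bookkeeping behind this correlation (the notation below is added here for illustration and is not part of the original entry): in Dirac notation, one such entangled pair can be written as
\[ |\psi\rangle = \frac{1}{\sqrt{2}}\Big(|+\rangle_{\mathrm{FL}}\,|-\rangle_{\mathrm{CA}} \;-\; |-\rangle_{\mathrm{FL}}\,|+\rangle_{\mathrm{CA}}\Big). \]
Each laboratory finds + or − with probability 1/2, yet the two outcomes are always opposite; that unavoidable 50-50 randomness on each side is exactly what keeps the correlation from carrying a message.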
In 1982, physicist Alain Aspect performed experiments on oppositely directed photons that were emitted in a single event from the same atom, thus ensuring the photon pairs were correlated. He showed that the instantaneous connection in the EPR paradox actually did take place, even when the particle pair was separated by an arbitrarily large distance.
Today, quantum entanglement is being studied in the field of quantum cryptography to send messages that cannot be spied upon without leaving some kind of trace. Simple quantum computers, which perform certain calculations in parallel and more quickly than traditional computers, are now being developed.
SEE ALSO Complementarity Principle (1927), Schrödinger’s Cat (1935), Quantum Computers (1981).
Artist’s rendition of “spooky action at a distance.” Once a pair of particles is entangled, a particular kind of change to one of them is reflected instantly in the other, even if the pair is separated by interplanetary distances.
1935
Schrödinger’s Cat • Clifford A. Pickover
Erwin Rudolf Josef Alexander Schrödinger (1887–1961)
Schrödinger’s cat reminds me of a ghost, or maybe a creepy zombie—a creature that appears to be alive and dead at the same time. In 1935, Austrian physicist Erwin Schrödinger published an article on this extraordinary paradox with consequences that are so striking that it continues to mystify and concern scientists to this day.
Schrödinger had been upset about the recently proposed Copenhagen interpretation of quantum mechanics, which stated, in essence, that a quantum system (e.g., an electron) exists as a cloud of probability until an observation is made. At a higher level, it seemed to suggest that it is meaningless to ask precisely what atoms and particles are doing when unobserved; in some sense, reality is created by the observer. Before being observed, the system takes on all possibilities. What could this mean for our everyday lives?
Imagine that a live cat is placed in a box with a radioactive source, a Geiger counter, and a sealed glass flask containing deadly poison. When a radioactive decay event occurs, the Geiger counter detects it, triggering a mechanism that releases a hammer, which shatters the flask and releases the poison that kills the cat. Imagine that quantum theory predicts a 50 percent probability that one decay particle is emitted each hour. After an hour, there is an equal probability that the cat is alive or dead. According to some flavors of the Copenhagen interpretation, the cat seems to be both alive and dead—a mixture of two states called a superposition of states. Some theorists suggested that if you open the box, the very act of observation “collapses the superposition,” making the cat either alive or dead.
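As a minimal sketch (the notation is added here for illustration and is not part of the original entry), the unopened box can be written as the entangled superposition
\[ |\Psi\rangle = \frac{1}{\sqrt{2}}\Big(|\text{no decay}\rangle\,|\text{alive}\rangle + |\text{decay}\rangle\,|\text{dead}\rangle\Big), \]
in which neither term is singled out until an observation assigns the system to one branch or the other.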
Schrödinger said that his experiment demonstrated the invalidity of the Copenhagen interpretation, and Albert Einstein agreed. Many questions spun from this thought experiment: What is considered to be a valid observer? The Geiger counter? A fly? Could the cat observe itself and so collapse its own state? What does the experiment really say about the nature of reality?
SEE ALSO Radioactivity (1896), Geiger Counter (1908), Complementarity Principle (1927), EPR Paradox (1935), Parallel Universes (1956).
When the box is opened, the very act of observation may collapse the superposition, making Schrödinger’s cat either alive or dead. Here, Schrödinger’s cat thankfully emerges alive.
1936
Turing Machines • Clifford A. Pickover
Alan Turing (1912–1954)
Alan Turing was a brilliant mathematician and computer theorist who was forced to become a human guinea pig and subjected to drug experiments to “reverse” his homosexuality. This persecution occurred despite the fact that his code-breaking work helped shorten World War II and led to his award of the Order of the British Empire.
When Turing called the police to investigate a burglary at his home in England, a homophobic police officer suspected that Turing was homosexual. Turing was forced to either go to jail for a year or take experimental drug therapy. To avoid imprisonment, he agreed to be injected with the hormone estrogen for a year. His death at age 42, two years after his arrest, was a shock to his friends and family. Turing was found in bed. The autopsy indicated cyanide poisoning. Perhaps he had committed suicide, but to this day we are not certain.
Many historians consider Turing to be the “father of modern computer science.” In his landmark paper, “On Computable Numbers, with an Application to the Entscheidungsproblem” (written in 1936), he proved that Turing machines (abstract symbol-manipulating devices) would be capable of solving any conceivable mathematical problem that can be represented as an algorithm. Turing machines help scientists better understand the limits of computation.
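To make the idea concrete, here is a minimal sketch of a Turing machine simulator (written in Python; neither the code nor the particular machine is from the original entry). The transition table is a small hypothetical example that adds 1 to a binary number written on the tape.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    # rules maps (state, symbol) -> (new_symbol, move, new_state); "_" is the blank symbol.
    cells = dict(enumerate(tape))                  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical example machine: increment a binary number by one.
rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end of the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # carry past the leftmost digit
}
print(run_turing_machine(rules, "1011"))   # prints "1100" (11 + 1 = 12)

Any procedure that can be phrased as such a table of states and symbols can, in principle, be carried out by a machine of this kind, which is what makes the model useful for reasoning about the limits of computation.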
Turing is also the originator of the Turing test, which caused scientists to think more clearly about what it means to call a machine “intelligent” and whether machines may one day “think.” Turing believed that machines would eventually be able to pass his test by demonstrating they could converse with people in such a natural way that people could not tell if they were talking to a machine or a human.
In 1939, Turing invented an electromechanical machine that could help break the Nazi codes produced by their Enigma code machine. Turing’s machine, called the “Bombe,” was enhanced by mathematician Gordon Welchman, and it became the main tool for deciphering Enigma communications.
SEE ALSO ENIAC (1946), Information Theory (1948), Public-Key Cryptography (1977).
A replica of a Bombe machine. Alan Turing invented this electromechanical device to help break the Nazi codes produced by their Enigma code machine.
1937
Cellular Respiration • Derek B. Lowe
Otto Fritz Meyerhof (1884–1951), Albert Szent-Györgyi (1893–1986), Karl Lohmann (1898–1978), Fritz Albert Lipmann (1899–1986), Hans Adolf Krebs (1900–1981), Paul Delos Boyer (1918–2018), Peter Mitchell (1920–1992), John Ernest Walker (b. 1941)
Every living creature needs energy, and they all use the same molecule to carry it: adenosine triphosphate (ATP), discovered in 1929 by German chemist Karl Lohmann, working with Otto Fritz Meyerhof. ATP has a phosphate bond that requires a good deal of energy to form, energy that it gives back when that bond is cleaved. The molecule serves as a storable form of power that is ready on demand, an idea proposed by German-American biochemist Fritz Albert Lipmann in 1941. Throughout your body, untold billions of adenosine diphosphates and triphosphates are shuttling back and forth, providing chemical energy like so many battery packs to all sorts of proteins. The ATP-binding pockets built into them are a standardized motif that shows up over and over.
An enzyme called ATP synthase—discovered by British biochemist Peter Mitchell, and further explained by the American and British biochemists Paul Boyer and John Walker—allows cells to produce ATP at all times in specialized structures called mitochondria. These resemble bacteria, and it’s no accident—mitochondria appear to have been bacteria at some point in the distant evolutionary past, moving into early cells and making a home. They’re now specialized ATP factories. The first part of their chemistry was worked out in 1937 by German-born British biochemist Hans Adolf Krebs, building on the work of Hungarian physiologist Albert Szent-Györgyi (of vitamin C fame). It’s a cycle of reactions starting with citric acid, which is generated again in the last step and sent around for more. Along the way, the two-carbon building block acetate (produced by breaking down carbohydrates and lipids) gets consumed, and carbon dioxide is released. The products of the Krebs cycle go straight into another series of enzyme reactions (called oxidative phosphorylation), which produce ATP while using up oxygen. This is where the food you eat and the oxygen you breathe end up, and where the carbon dioxide you exhale is made: in the nonstop furnaces of the mitochondria.
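In summary form (the equations below are added here for illustration and are not part of the original entry), the overall chemistry amounts to
\[ \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}, \qquad \mathrm{ADP} + \mathrm{P_i} \longrightarrow \mathrm{ATP}, \]
with the energy released by the first reaction driving the second; textbook estimates put the yield at roughly 30–38 molecules of ATP per molecule of glucose, depending on how the accounting is done.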
SEE ALSO Internal Combustion Engine (1908), Photosynthesis (1947), Endosymbiont Theory (1967).
The mitochondria are the green ovals in this computer-generated model of the main structures in a typical cell. In muscle cells, mitochondria are more numerous.
1937
Superfluids • Clifford A. Pickover
Pyotr Leonidovich Kapitsa (1894–1984), Fritz Wolfgang London (1900–1954), John “Jack” Frank Allen (1908–2001), Donald Misener (1911–1996)
Like some living, creeping liquid out of a science-fiction movie, the eerie behavior of superfluids has intrigued physicists for decades. When liquid helium in a superfluid state is placed in a container, it climbs up the walls and leaves the container. Additionally, the superfluid remains motionless when its container is spun. It seems to seek out and penetrate microscopic cracks and pores, making traditionally adequate containers leaky for superfluids. Place your cup of coffee—with the liquid spinning in the cup—on the table, and a few minutes later the coffee is still. If you did this with superfluid helium, and your ancestors came back in a thousand years to view the cup, the superfluid might still be spinning.
Superfluidity is seen in several substances, but it is most often studied in helium-4—the common, naturally occurring isotope of helium containing two protons, two neutrons, and two electrons. Below an extremely cold critical temperature called the lambda temperature (−455.76 ºF, 2.17 K), liquid helium-4 suddenly attains the ability to flow without apparent friction and achieves a thermal conductivity that is millions of times the conductivity of normal liquid helium and much greater than that of the best metal conductors. The term helium I refers to the liquid above 2.17 K, and helium II refers to the liquid below this temperature.
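As a quick check of the unit conversion (added here, not part of the original entry),
\[ T_\lambda = 2.17\ \mathrm{K} \;\Rightarrow\; T_\lambda = 2.17 \times \tfrac{9}{5} - 459.67 \approx -455.8\ ^{\circ}\mathrm{F}. \]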
Superfluidity was discovered by physicists Pyotr Kapitsa, John F. Allen, and Don Misener in 1937. In 1938, Fritz London suggested that liquid helium below the lambda-point temperature is composed of two parts, a normal fluid with the characteristics of helium I and a superfluid (with a viscosity essentially equal to zero). The transition from ordinary fluid to superfluid occurs when the constituent atoms begin to occupy the same quantum state and their quantum wave functions overlap. As in the Bose-Einstein Condensate, the atoms lose their individual identities and behave as one large smeared entity. Because the superfluid has no internal viscosity, a vortex formed within the fluid continues to spin essentially forever.
SEE ALSO Bernoulli’s Law of Fluid Dynamics (1738), Superconductivity (1911), Heisenberg Uncertainty Principle (1927).
Frame from Alfred Leitner’s 1963 movie Liquid Helium, Superfluid. The liquid helium is in the superfluid phase as a thin film creeps up the inside wall of the suspended cup and down on the outside to form a droplet at the bottom.
1938
Nuclear Magnetic Resonance • Clifford A. Pickover
Isidor Isaac Rabi (1898–1988), Felix Bloch (1905–1983), Edward Mills Purcell (1912–1997), Richard Robert Ernst (b. 1933), Raymond Vahan Damadian (b. 1936)
“Scientific research requires powerful tools for elucidating the secrets of nature,” writes Nobel Laureate Richard Ernst. “Nuclear Magnetic Resonance (NMR) has proved to be one of the most informative tools of science with applications to nearly all fields from solid state physics, to material science, . . . and even to psychology, trying to understand the functioning of the human brain.”
If an atomic nucleus has at least one unpaired neutron or proton, the nucleus may act like a tiny magnet. When an external magnetic field is applied, it exerts a force that can be visualized as causing the nuclei to precess, or wobble, like spinning tops. The potential energy difference between nuclear spin states can be made larger by increasing the external magnetic field. Once this static external magnetic field is turned on, a radio-frequency (RF) signal of the proper frequency is introduced to induce transitions between spin states, placing some of the spins in their higher-energy states. When the RF signal is turned off, the spins relax back to the lower state and produce an RF signal at the resonant frequency associated with the spin flip. These NMR signals yield information beyond the specific nuclei present in a sample, because the signals are modified by the immediate chemical environment. Thus, NMR studies can yield a wealth of molecular information.
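For spin-1/2 nuclei, the resonance condition can be sketched as follows (the formula and numbers here are added for illustration and are not part of the original entry):
\[ \Delta E = \gamma \hbar B_0 = h\nu \quad\Longrightarrow\quad \nu = \frac{\gamma B_0}{2\pi}, \]
where γ is the gyromagnetic ratio of the nucleus and B₀ is the static field. For hydrogen nuclei (protons), γ/2π ≈ 42.58 MHz per tesla, so a 1.5-tesla clinical magnet corresponds to a resonance near 64 MHz.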
NMR was first described in 1937 by physicist Isidor Rabi. In 1945, physicists Felix Bloch, Edward Purcell, and colleagues refined the technique. In 1966, Richard Ernst developed Fourier transform (FT) spectroscopy further and showed how RF pulses could be used to create a spectrum of NMR signals as a function of frequency. In 1971, physician Raymond Damadian showed that the hydrogen relaxation rates of water could differ between normal and malignant cells, opening the possibility of using NMR for medical diagnosis. In the early 1980s, NMR methods started to be used in magnetic resonance imaging (MRI) to characterize the nuclear magnetic moments of the ordinary hydrogen nuclei in the soft tissues of the body.
SEE ALSO X-rays (1895), Atomic Nucleus (1911), Superconductivity (1911).
An actual MRI/MRA (magnetic resonance angiogram) of the brain vasculature (arteries). This kind of MRI study is often used to reveal brain aneurysms.
1941
Doped Silicon • Marshall Brain
John Robert Woodyard (1904–1981)
If we had to pick the substance that engineers have used to the greatest effect on humanity, what might that substance be? Maybe it is gunpowder, which engineers use in guns, cannons, and bombs. By killing untold millions of people, gunpowder has certainly had an effect, although not a particularly happy one. Maybe it is uranium, which engineers use in both nuclear bombs and nuclear power plants. Or asphalt, which billions of people use every day for transportation, or the concrete used in so many structures. What about gasoline, powering most of our vehicles?
The award for the most influential material might best go to . . . drumroll please . . . doped silicon. Doped silicon was developed by physicist John Robert Woodyard while in the service of the Sperry Gyroscope Company in 1941. This material, the foundation of the transistor, has transformed our society in a thousand different ways. Look around you and count how many objects use computers in one form or another. Think about how much time you spend using a laptop, tablet, or smart phone. Think about the billions of computers connected to the Internet.
And think about where we are headed. The “Internet of things” is the next big thing. It is predicted that, in just a decade or two, there will be 100 trillion objects connected to the Internet. They will be everywhere: home appliances, cameras, sensors, cars, tracking devices, drones, our homes and their security systems. Doped silicon has made computers so inexpensive, so power-efficient, and so intelligent that computers are being embedded in everything and connected to one another over the Internet. And then there are robots, which will be arriving in massive numbers in the not-too-distant future.
The doping process is conceptually simple. Start with a pure silicon crystal. Add various dopants, like boron to create an area of holes, or phosphorus to create an area with free electrons. By combining these doped areas properly, an engineer can create diodes and transistors. With transistors, engineers can create amplifiers, receivers, and computers. Our computer and electronics industries are built on top of doped silicon.
SEE ALSO Transistor (1947), Integrated Circuit (1958), ARPANET (1969).
A silicon wafer, pictured here, is a thin slice of semiconductor material.
1942
Energy from the Nucleus • Clifford A. Pickover
Lise Meitner (1878–1968), Albert Einstein (1879–1955), Leó Szilárd (1898–1964), Enrico Fermi (1901–1954), Otto Robert Frisch (1904–1979)
Nuclear fission is a process in which the nucleus of an atom, such as uranium, splits into smaller parts, often producing free neutrons, lighter nuclei, and a large release of energy. A chain reaction takes place when the neutrons fly off and split other uranium atoms, and the process continues. A nuclear reactor, used to produce energy, moderates the process so that it releases energy at a controlled rate. A nuclear weapon lets the process run at a rapid, uncontrolled rate. The products of nuclear fission are themselves often radioactive and thus contribute to the nuclear waste problems associated with nuclear reactors.
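One representative fission reaction (added here for illustration; many different product pairs occur) is
\[ ^{235}\mathrm{U} + n \longrightarrow\; ^{141}\mathrm{Ba} + {}^{92}\mathrm{Kr} + 3\,n + \text{about } 200\ \mathrm{MeV}, \]
where the two or three neutrons released per fission are what allow the chain reaction to sustain itself.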