The Science Book
Page 28
Physicists Michael Riordan and Lillian Hoddeson write, “It is hard to imagine any device more crucial to modern life than the microchip and the transistor from which it sprang. Every waking hour, people of the world take their vast benefits for granted—in cellular phones, ATMs, wrist watches, calculators, computers, automobiles, radios, televisions, fax machines, copiers, stoplights, and thousands of electronic devices. Without a doubt, the transistor is the most important artifact of the 20th century and the ‘nerve cell’ of our electronic age.” In the future, fast transistors made from graphene (sheets of carbon atoms) and carbon nanotubes may become practical. Note that, in 1925, physicist Julius Lilienfeld was the first to actually file a patent for an early version of the transistor.
SEE ALSO ENIAC (1946), Integrated Circuit (1958), Quantum Computers (1981).
The Regency TR-1 radio, announced in October 1954, was the first practical transistor radio made in bulk quantities. Shown here is a figure from Richard Koch’s transistor-radio patent. Koch was employed by the company that made the TR-1.
1948
Information Theory • Clifford A. Pickover
Claude Elwood Shannon (1916–2001)
Teenagers watch TV, cruise the Internet, spin their DVDs, and chat endlessly on the phone, usually without ever realizing that the foundations for this Information Age were laid by American mathematician Claude Shannon, who in 1948 published “A Mathematical Theory of Communication.” Information theory is a discipline of applied mathematics involving the quantification of data, and it helps scientists understand the capacity of various systems to store, transmit, and process information. Information theory is also concerned with data compression and with methods for reducing noise and error rates to enable as much data as possible to be reliably stored and communicated over a channel. The measure of information, known as information entropy, is usually expressed by the average number of bits needed for storage or communication. Much of the mathematics behind information theory was established by Ludwig Boltzmann and J. Willard Gibbs for the field of thermodynamics. Alan Turing also used similar ideas when breaking the German Enigma ciphers during World War II.
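To make the idea of information entropy concrete, here is a minimal sketch in Python (an illustration added here, not part of Shannon’s paper) that computes the average number of bits per symbol for an assumed probability distribution:

```python
import math

def shannon_entropy(probabilities):
    """Average number of bits per symbol for a given probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Assumed example: a fair coin needs 1 bit per toss; a heavily biased coin needs far less.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # about 0.08
```

The second result hints at why compression works: highly predictable data carries little information and can be stored in far fewer bits.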
Information theory affects a diverse array of fields, ranging from mathematics and computer science to neurobiology, linguistics, and black holes. Information theory has practical applications such as breaking codes and recovering from errors due to scratches in movie DVDs. According to a 1953 issue of Fortune: “It may be no exaggeration to say that man’s progress in peace, and security in war, depend more on fruitful applications of Information Theory than on physical demonstrations, either in bombs or in power plants, that Einstein’s famous equation works.”
Claude Shannon died in 2001, at the age of 84, after a long struggle with Alzheimer’s disease. At one point in his life, he had been an excellent juggler, unicyclist, and chess player. Sadly, due to his affliction, he was unable to observe the Information Age that he helped create.
SEE ALSO Telegraph System (1837), Fiber Optics (1841), Turing Machines (1936), ENIAC (1946).
Information theory helps technologists understand the capacity of various systems to store, transmit, and process information. Information theory has applications in fields ranging from computer science to neurobiology.
1948
Quantum Electrodynamics • Clifford A. Pickover
Paul Adrien Maurice Dirac (1902–1984), Sin-Itiro Tomonaga (1906–1979), Richard Phillips Feynman (1918–1988), Julian Seymour Schwinger (1918–1994)
“Quantum electrodynamics (QED) is arguably the most precise theory of natural phenomena ever advanced,” writes physicist Brian Greene. “Through quantum electrodynamics, physicists have been able to solidify the role of photons as the ‘smallest possible bundles of light’ and to reveal their interactions with electrically charged particles such as electrons, in a mathematically complete, predictable and convincing framework.” QED mathematically describes interactions of light with matter and also the interactions of charged particles with one another.
In 1928, the English physicist Paul Dirac established the foundations for QED, and the theory was refined and developed in the late 1940s by physicists Richard P. Feynman, Julian S. Schwinger, and Sin-Itiro Tomonaga. QED relies on the idea that charged particles (such as electrons) interact by emitting and absorbing photons, which are the particles that transmit electromagnetic forces. Interestingly, these photons are “virtual” and cannot be detected, yet they provide the “force” of the interaction as the interacting particles change their speed and direction of travel when absorbing or releasing the energy of a photon. The interactions can be graphically represented and understood through the use of squiggly Feynman diagrams. These drawings also help physicists to calculate the probability that particular interactions take place.
According to QED theory, the greater the number of virtual photons exchanged in an interaction (i.e., a more complex interaction), the less likely the process is to occur. The accuracy of predictions made by QED is astonishing. For example, the predicted strength of the magnetic field carried by an electron is so close to the experimental value that if you could measure the distance from New York to Los Angeles with this accuracy, you would be accurate to within the thickness of a human hair.
QED has served as the launchpad for subsequent theories, such as quantum chromodynamics, which began in the early 1960s and involves the strong forces that hold quarks together through the exchange of particles called gluons. Quarks are particles that combine to form other subatomic particles such as protons and neutrons.
SEE ALSO Electron (1897), Photoelectric Effect (1905), Standard Model (1961), Quarks (1964), Theory of Everything (1984).
Modified Feynman diagram depicting the annihilation of an electron and a positron, creating a photon that decays into a new electron-positron pair.
1948
Randomized Controlled Trials • Clifford A. Pickover
Austin Bradford Hill (1897–1991)
The design of tests for determining the efficacy of a medical treatment can be surprisingly difficult for many reasons. For example, physicians and test subjects may view results in a biased and nonobjective fashion. Treatment effects may be subtle, and patients may respond favorably simply due to the placebo effect, in which a patient thinks her condition is improving after taking a fake “treatment” (such as an inert sugar pill) that she believes should be effective.
Today, one of the most reliable approaches for testing possible medical treatments is the randomized controlled trial (RCT). The treatment each participant receives is chosen at random, so that each patient has the same chance of getting each of the treatments under study. For example, each participant in the trial may randomly be assigned to one of two groups, with one group scheduled to receive medicine X and the other scheduled to receive medicine Y. RCTs may be double-blind, which means that neither the primary researchers nor the patients know which patients are in the treated group (receiving a new drug) or the control group (receiving a standard treatment). For reasons of ethics, RCTs are usually performed when the researchers and physicians are genuinely uncertain about the preferred treatment.
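As a rough illustration of the random-assignment step (a simplified sketch with hypothetical participant IDs, not a description of any actual trial protocol), one could shuffle the participant list and split it into the two treatment arms:

```python
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]  # hypothetical IDs
random.shuffle(participants)                  # put participants in random order
half = len(participants) // 2
groups = {"medicine X": participants[:half],  # one treatment arm
          "medicine Y": participants[half:]}  # the other treatment arm
print(groups)
```

Real trials add safeguards such as allocation concealment and pre-specified randomization schemes, but the core idea is this simple: chance, not the physician, decides who gets which treatment.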
The most famous early clinical study involving RCT is English statistician Bradford Hill’s “Streptomycin Treatment of Pulmonary Tuberculosis,” published in 1948 in the British Medical Journal. In this study, patients randomly received a sealed envelope containing a card marked S for streptomycin (an antibiotic) and bed rest, or C for control (bed rest only). Streptomycin was clearly shown to be effective.
Clinical epidemiologist Murray Enkin writes that this trial is “rightly regarded as a landmark that ushered in a new era of medicine. [Hundreds of thousands] of such trials have become the underlying basis for what is currently called ‘evidence-based medicine.’ The [RCT] concept has rightly been hailed as a paradigm shift in our approach to clinical decision making.”
SEE ALSO Aristotle’s Organon (c. 350 BCE), Scientific Method (1620), Placebo Effect (1955).
Public health campaign poster, trying to halt the spread of tuberculosis. In 1948, Bradford Hill published a study using RCTs to demonstrate the effectiveness of streptomycin to treat tuberculosis.
1949
Radiocarbon Dating • Clifford A. Pickover
Willard Frank Libby (1908–1980)
“If you were interested in finding out the age of things, the University of Chicago in the 1940s was the place to be,” writes author Bill Bryson. “Willard Libby was in the process of inventing radiocarbon dating, allowing scientists to get an accurate reading of the age of bones and other organic remains, something they had never been able to do before. . . .”
Radiocarbon dating involves measuring the abundance of the radioactive isotope carbon-14 (14C) in a carbon-containing sample. The method relies on the fact that 14C is created in the atmosphere when cosmic rays strike nitrogen atoms. The 14C is then incorporated into plants, which animals subsequently eat. While an animal is alive, the abundance of 14C in its body roughly matches the atmospheric abundance. 14C continually decays at a known exponential rate, converting to nitrogen-14, and once the animal dies and no longer replenishes its 14C supply from the environment, the animal’s remains slowly lose 14C. By measuring the amount of 14C in a sample, scientists can estimate its age if the sample is not older than about 60,000 years. Older samples generally contain too little 14C to measure accurately. 14C has a half-life of about 5,730 years, meaning that every 5,730 years the amount of 14C in a sample drops by half. Because the amount of atmospheric 14C undergoes slight variations through time, small calibrations are made to improve the accuracy of the dating. Also, atmospheric 14C increased during the 1950s due to atomic bomb tests. Accelerator mass spectrometry can be used to detect 14C abundances in milligram samples.
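The decay arithmetic can be illustrated with a small sketch (an idealized calculation that assumes a constant atmospheric baseline and ignores the calibration corrections just mentioned):

```python
import math

HALF_LIFE_YEARS = 5730  # approximate half-life of carbon-14

def estimate_age(fraction_of_14c_remaining):
    """Years elapsed, assuming simple exponential decay from the atmospheric level."""
    return -HALF_LIFE_YEARS * math.log2(fraction_of_14c_remaining)

# A sample retaining 25% of its original 14C is about two half-lives old.
print(round(estimate_age(0.25)))  # 11460 years
```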
Before radiocarbon dating, it was very difficult to obtain reliable dates for anything before the First Dynasty in Egypt, around 3000 BCE. This was quite frustrating for archeologists who were eager to know, for example, when Cro-Magnon people painted the caves of Lascaux in France or when the last Ice Age finally ended.
SEE ALSO Olmec Compass (c. 1000 BCE), Radioactivity (1896), Atomic Clocks (1955).
Because carbon is very common, numerous kinds of materials are potentially usable for radiocarbon investigations, including ancient skeletons found during archeological digs, charcoal, leather, wood, pollen, antlers, and much more.
1949
Time Travel • Clifford A. Pickover
Albert Einstein (1879–1955), Kurt Gödel (1906–1978), Kip Stephen Thorne (b. 1940)
What is time? Is time travel possible? For centuries, these questions have intrigued philosophers and scientists. Today, we know for certain that time travel is possible. For example, scientists have demonstrated that objects traveling at high speeds age more slowly than a stationary object sitting in a laboratory frame of reference. If you could travel on a near light-speed rocket into outer space and return, you could travel thousands of years into the Earth’s future. Scientists have verified this time slowing or “dilation” effect in a number of ways. For example, in the 1970s, scientists flew atomic clocks on airplanes and showed that these clocks ran slightly slower than clocks on the Earth. Time is also significantly slowed near regions of very large masses.
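The slowing described above follows the standard time-dilation formula of special relativity (a textbook result, stated here for concreteness rather than taken from this entry):

\[
\Delta t_{\text{traveler}} \;=\; \Delta t_{\text{Earth}}\,\sqrt{1 - \frac{v^{2}}{c^{2}}}
\]

At a speed of 0.99c, for instance, roughly seven years pass on Earth for every year experienced aboard the rocket.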
Although seemingly more difficult, numerous ways exist in which time machines for travel to the past could theoretically be built that do not appear to violate any known laws of physics. Most of these methods rely on high gravities or on wormholes (hypothetical “shortcuts” through space and time). To Isaac Newton, time was like a river flowing straight; nothing could deflect the river. Einstein showed that the river could curve, although it could never circle back on itself, which would be a metaphor for backward time travel. In 1949, mathematician Kurt Gödel went even further and showed that the river could circle back on itself. In particular, he found a disturbing solution to Einstein’s equations that allows backward time travel in a rotating universe. For the first time in history, backward time travel had been given a mathematical foundation!
Throughout history, physicists have found that if phenomena are not expressly forbidden, they are often eventually found to occur. Today, designs for time travel machines are proliferating in top science labs and include such wild concepts as Thorne wormhole time machines, Gott loops that involve cosmic strings, Gott shells, Tipler and van Stockum cylinders, and Kerr rings. In the next few hundred years, perhaps our heirs will explore space and time to degrees we cannot currently fathom.
SEE ALSO Special Theory of Relativity (1905), General Theory of Relativity (1915), Atomic Clocks (1955).
If time is like space, might the past, in some sense, still exist “back there” as surely as your home still exists even after you have left it? If you could travel back in time, which genius of the past would you visit?
1950
Chess Computer • Marshall Brain
Alan Turing (1912–1954), Claude Elwood Shannon (1916–2001)
In 1950, American mathematician Claude Elwood Shannon wrote a paper about how to program a computer to play chess. In 1951, British mathematician and computer scientist Alan Turing was the first to produce a program that could complete a full game. Since then, software engineers have improved the software and hardware engineers have improved the hardware. In 1997, a custom computer called Deep Blue, developed by IBM, beat the best human player for the first time. Since then, humans have not stood a chance, because computer chess hardware and software keep improving year after year.
How do engineers create a computer that can play chess? They do it by employing machine intelligence, which in the case of chess is very different from human intelligence. It is a brute force way to solve the chess problem.
Think of a board with a set of chess pieces on it. Engineers create a way to “score” that arrangement of pieces. The score might include the number of pieces on each side, the positions of the pieces, whether the king is well protected or not, etc. Now imagine a very simple chess program. You are playing black, the computer is playing white, and you have just made a move. The program could try moving every white piece to every possible valid position, scoring the board on each move. Then it would pick the move with the best score. This program would not play very well, but it could play chess.
What if the computer went a step further? It moves every white piece to every possible position. Then on each possible white move, it tries every black move, and scores all of those boards. The number of possible moves that the computer has to score has grown significantly, but now the computer can play better.
What if the computer looks multiple levels ahead? The number of boards the computer has to score explodes with each new level, but the computer gets better. When Deep Blue won in 1997, it was able to score 200 million boards per second. It had memorized all common openings and gambits, and it could prune out vast numbers of moves by realizing that certain paths were unproductive. In 2017, the program AlphaZero beat world-champion chess-playing computer programs, having taught itself how to play in less than a day!
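The brute-force look-ahead described above can be sketched in a few dozen lines of Python. The following is only an illustration (not Deep Blue’s actual program); it assumes the third-party python-chess package and a crude material-only scoring function:

```python
import chess  # pip install python-chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def score_board(board):
    """Material count: positive favors White, negative favors Black."""
    total = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        total += value if piece.color == chess.WHITE else -value
    return total

def minimax(board, depth, white_to_move):
    """Try every move, then every reply, down to `depth` levels, and score the result."""
    if depth == 0 or board.is_game_over():
        return score_board(board)
    values = []
    for move in list(board.legal_moves):
        board.push(move)
        values.append(minimax(board, depth - 1, not white_to_move))
        board.pop()
    return max(values) if white_to_move else min(values)

def best_white_move(board, depth=3):
    """Pick the White move whose resulting position scores best after looking ahead."""
    best_move, best_value = None, float("-inf")
    for move in list(board.legal_moves):
        board.push(move)
        value = minimax(board, depth - 1, False)
        board.pop()
        if value > best_value:
            best_move, best_value = move, value
    return best_move

# With this material-only score, all opening moves tie, so the first legal move is returned.
print(best_white_move(chess.Board(), depth=2))
```

Real engines add far better evaluation functions, opening books, and aggressive pruning, but the underlying search idea is the same.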
SEE ALSO Slide Rule (1621), Babbage Mechanical Computer (1822), ENIAC (1946), Transistor (1947).
Pictured: IBM’s supercomputer, Deep Blue.
1950
Fermi Paradox • Clifford A. Pickover
Enrico Fermi (1901–1954), Frank Drake (b. 1930)
During our Renaissance, rediscovered ancient texts and new knowledge flooded medieval Europe with the light of intellectual transformation, wonder, creativity, exploration, and experimentation. Imagine the consequences of making contact with an alien race. Another, far more profound Renaissance would be fueled by the wealth of alien scientific, technical, and sociological information. Given that our universe is both ancient and vast—there are an estimated 250 billion stars in our Milky Way galaxy alone—the physicist Enrico Fermi asked in 1950, “Why have we not yet been contacted by an extraterrestrial civilization?” Of course, many answers are possible. Advanced alien life could exist, but we are unaware of their presence. Alternatively, intelligent aliens may be so rare in the universe that we may never make contact with them. The Fermi Paradox, as it is known today, has given rise to scholarly works attempting to address the question in fields ranging from physics and astronomy to biology.
In 1960, astronomer Frank Drake suggested a formula to estimate the number of extraterrestrial civilizations in our galaxy with whom we might come into contact:
N = R* × fp × ne × fℓ × fi × fc × L
Here, N is the number of alien civilizations in the Milky Way with which communication might be possible; for example, alien technologies may produce detectable radio waves. R* is the average rate of star formation per year in our galaxy. fp is the fraction of those stars that have planets (hundreds of extrasolar planets have been detected). ne is the average number of “Earth-like” planets that can potentially support life per star that has planets. fℓ is the fraction of these ne planets that actually yield life forms. fi is the fraction of those life-bearing planets that actually produce intelligent life. The variable fc represents the fraction of civilizations that develop a technology that releases detectable signs of their existence into outer space. L is the length of time such civilizations release signals into space that we can detect. Because many of the parameters are very difficult to determine, the equation serves more to focus attention on the intricacies of the paradox than to resolve it.
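For a sense of how the formula behaves, here is a small worked example with purely illustrative parameter values (assumptions chosen for demonstration, not estimates drawn from this entry):

```python
# Purely illustrative values plugged into Drake's formula N = R* x fp x ne x fl x fi x fc x L.
R_star = 1.5    # new stars formed per year in the Milky Way (assumed)
f_p    = 0.9    # fraction of stars with planets (assumed)
n_e    = 0.5    # potentially habitable planets per planetary system (assumed)
f_l    = 0.1    # fraction of those planets that develop life (assumed)
f_i    = 0.01   # fraction of life-bearing planets that develop intelligence (assumed)
f_c    = 0.1    # fraction of intelligent species that release detectable signals (assumed)
L      = 10000  # years such signals remain detectable (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # about 0.7 communicating civilizations with these assumptions
```

Changing any single factor by a factor of ten changes N by a factor of ten, which is precisely why published estimates of N range from far less than one to many millions.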