The Science Book


by Clifford A. Pickover


  Since then, many more examples of gravitational lensing have been found, and the effect seems to occur in three ways: strong lensing, in which distinct multiple or partial (usually arc-like) images are formed; weak lensing, detected through small, subtle shifts in star or galaxy positions over large regions; and microlensing, in which a random distant star (or even planet) has its brightness temporarily amplified by the gravitational lensing effect of a large foreground mass, such as another star or galaxy.

  Gravitational lenses were initially discovered and studied as accidental, serendipitous events. Recently, however, a number of astronomical surveys have been conducted to intentionally search for gravitational lensing events, in order to obtain unique measurements of the properties of distant galaxies that would not be visible without the amplification from the lens, as well as the properties (such as mass) of the lensing galaxies and clusters themselves.

  SEE ALSO Newton’s Laws of Motion and Gravitation (1687), Black Holes (1783), General Theory of Relativity (1915).

  The thin arcs seen here are gravitationally lensed galaxies in the galaxy cluster Abell 2218, photographed by the Hubble Space Telescope in 1999. These so-called Einstein rings are the smeared-out light from distant galaxies being bent by a massive foreground galaxy.

  1980

  Cosmic Inflation • Clifford A. Pickover

  Alan Harvey Guth (b. 1947)

  The Big Bang theory states that our universe was in an extremely dense and hot state 13.7 billion years ago, and that space has been expanding ever since. However, the theory is incomplete because it does not explain several observed features of the universe. In 1980, physicist Alan Guth proposed that 10⁻³⁵ seconds (a hundred-billion-trillion-trillionth of a second) after the Big Bang, the universe expanded (or inflated) in a mere 10⁻³² seconds from a size smaller than a proton to the size of a grapefruit—an increase in size of 50 orders of magnitude. Today, the observed temperature of the background radiation of the universe seems relatively constant, even though the distant parts of our visible universe are so far apart that they appear never to have been connected. Inflation resolves this puzzle by explaining how these regions were originally in close proximity (and had reached the same temperature) and then separated faster than the speed of light.

  Additionally, inflation explains why the universe appears to be, on the whole, quite “flat”—in essence, why parallel light rays remain parallel, except for deviations near bodies with high gravitation. Any curvature in the early universe would have been smoothed away, like stretching the surface of a ball until it is flat. Inflation ended 10⁻³⁰ seconds after the Big Bang, allowing the universe to continue its expansion at a more leisurely rate.

  Quantum fluctuations in the microscopic inflationary realm, magnified to cosmic size, become the seeds for larger structures in the universe. Science journalist George Musser writes, “The process of inflation never ceases to amaze cosmologists. It implies that giant bodies such as galaxies originated in teensy-weensy random fluctuations. Telescopes become microscopes, letting physicists see down to the roots of nature by looking up into the heavens.” Alan Guth writes that inflationary theory allows us to “consider such fascinating questions as whether other big bangs are continuing to happen far away, and whether it is possible in principle for a super-advanced civilization to recreate the big bang.”

  SEE ALSO Cosmic Microwave Background (1965), Hubble’s Law of Cosmic Expansion (1929), Parallel Universes (1956), Dark Energy (1998).

  A map produced by the Wilkinson Microwave Anisotropy Probe (WMAP) showing a relatively uniform distribution of cosmic background radiation, produced by an early universe more than 13 billion years ago. Inflation theory suggests that the irregularities seen here are the seeds that became galaxies.

  1981

  Quantum Computers • Clifford A. Pickover

  Richard Phillips Feynman (1918–1988), David Elieser Deutsch (b. 1953)

  One of the first scientists to consider the possibility of a quantum computer was physicist Richard Feynman, who, in 1981, wondered just how small computers could become. He knew that when computers finally reached the size of sets of atoms, they would be making use of the strange laws of quantum mechanics. In 1985, physicist David Deutsch envisioned how such a computer would actually work, and he realized that calculations that would take a virtually infinite amount of time on a traditional computer could be performed quickly on a quantum computer.

  Instead of using the usual binary code, which represents information as either a 0 or 1, a quantum computer uses qubits, which essentially are simultaneously both 0 and 1. Qubits are formed by the quantum states of particles, for example, the spin state of individual electrons. This superposition of states allows a quantum computer to effectively test every possible combination of qubits at the same time. A thousand-qubit system could test 2¹⁰⁰⁰ potential solutions in the blink of an eye, thus vastly outperforming a conventional computer. To get a sense for the magnitude of 2¹⁰⁰⁰ (which is approximately 10³⁰¹), note that there are only about 10⁸⁰ atoms in the visible universe.
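
  To make the comparison above concrete, here is a minimal back-of-the-envelope sketch in Python (an illustration, not part of the original text); the 1,000-qubit register size and the rough 10⁸⁰-atom figure are simply the numbers quoted in the paragraph above.

      import math

      # Figures quoted in the paragraph above; the atom count is a commonly
      # cited rough estimate, not a precise value.
      n_qubits = 1000
      basis_states = 2 ** n_qubits            # 2^1000 distinct bit patterns
      atoms_in_visible_universe = 10 ** 80

      print(f"2^{n_qubits} is roughly 10^{math.log10(basis_states):.0f}")  # ~10^301
      print(basis_states > atoms_in_visible_universe)                      # True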

  Physicists Michael Nielsen and Isaac Chuang write, “It is tempting to dismiss quantum computation as yet another technological fad in the evolution of the computer that will pass in time. . . . This is a mistake, since quantum computation is an abstract paradigm for information processing that may have many different implementations in technology.”

  Of course, many challenges still exist for creating a practical quantum computer. The slightest interaction or impurity from the surroundings of the computer could disrupt its operation. “These quantum engineers . . . will have to get information into the system in the first place,” writes author Brian Clegg, “then trigger the operation of the computer, and, finally, get the result out. None of these stages is trivial. . . . It’s as if you were trying to do a complex jigsaw puzzle in the dark with your hands tied behind your back.”

  SEE ALSO Complementarity Principle (1927), EPR Paradox (1935), Parallel Universes (1956), Integrated Circuit (1958).

  In 2009, physicists at the National Institute of Standards and Technology demonstrated reliable quantum information processing in the ion trap at the left center of this photograph. The ions are trapped inside the dark slit. By altering the voltages applied to each of the gold electrodes, scientists can move the ions between the six zones of the trap.

  1982

  Artificial Heart • Marshall Brain

  Robert Jarvik (b. 1946)

  In healthy people, the heart beats without pause for an entire lifetime, pumping 5 million gallons (19 million liters) of blood or more during 70 or 80 years of operation. But when something goes wrong and a heart needs to be replaced, the scarcity of natural replacement hearts creates a big problem. Thus engineers set about trying to design and build artificial mechanical hearts. However, nature’s pump is very difficult to duplicate.

  There were four problems that doctors and engineers had to solve to make an artificial heart work: 1) finding materials with the right chemistry and properties so that they did not cause an immune reaction or internal clotting in the patient; 2) finding a pumping mechanism that did not damage blood cells; 3) finding a way to power the device; and 4) making the device small enough to fit inside the chest cavity.

  The Jarvik-7 heart, designed by American scientist Robert Jarvik and his team in 1982, was the first device to meet these requirements reliably. It features two ventricles, like a natural heart. Its materials avoided rejection and were smooth and seamless enough to prevent clotting. The pumping mechanism used a balloon-like diaphragm in each ventricle that, as it inflated, pushed blood through a one-way valve without damaging blood cells. The only compromise was the air compressor, which remained outside the body and transmitted air pulses to the heart via hoses running through the abdominal wall. The basic design was successful and has since been improved as the SynCardia heart, used in over 1,000 patients. One patient lived almost four years with the heart before receiving a transplant. The AbioCor heart uses a different approach that allows batteries and an inductive charging system to be implanted completely inside the body. It also has a diaphragm arrangement, but fluid rather than air fills the diaphragms. The fluid flow comes from a small electric motor embedded inside the heart.

  These represent two different engineering approaches. One opts for complete embedding, but if something goes wrong, it probably means death. In the other, much of the system remains outside the body for easy access and repair, but tubes must pass into the body from outside.

  SEE ALSO Circulatory System (1628), Blood Transfusion (1829), Heart Transplant (1967).

  The SynCardia temporary Total Artificial Heart (TAH-t), pictured, is the first and only temporary TAH approved by the FDA (U.S. Food and Drug Administration) and Health Canada. It is also approved for use in Europe through the CE (Conformité Européenne) mark.

  1983

  Epigenetics • Clifford A. Pickover

  Bert Vogelstein (b. 1949)

  Just as a pianist interprets the notes in a musical score, controlling the volume and tempo, epigenetics affects the interpretation of DNA genetic sequences in cells. Epigenetics usually refers to the study of heritable traits that do not involve changes to the underlying DNA sequences of cells.

  One way in which DNA expression can be controlled is by the addition of a methyl group (a carbon atom with three attached hydrogen atoms) to one of the DNA bases, making this “marked” area of the DNA less active and potentially suppressing the production of a particular protein. Gene expression can also be modified by the histone proteins that bind to the DNA molecule.

  In the 1980s, Swedish researcher Lars Olov Bygren discovered that boys in Norrbotten, Sweden, who had gone from normal eating to vast overeating in a single season produced sons and grandsons who lived much shorter lives. One hypothesis is that inherited epigenetic factors played a role. Other studies suggest that environmental factors such as stress, diet, smoking, and prenatal nutrition make imprints on our genes that are passed through generations. According to this argument, the air your grandparents breathed and the food they ate can affect your health decades later.

  In 1983, American medical researchers Bert Vogelstein and Andrew P. Feinberg documented the first example of a human disease with an epigenetic mechanism. In particular, they observed widespread loss of DNA methylation in colorectal cancers. Because methylated genes are typically turned off, this loss of methylation can lead to abnormal activation of genes in cancer. Additionally, too much methylation can undo the work of protective tumor-suppressor genes. Drugs are currently being developed that affect epigenetic markers that silence bad genes and activate good ones.

  The general concept of epigenetics is not new. After all, a brain cell and liver cell have the same DNA sequence, but different genes are activated through epigenetics. Epigenetics may explain why one identical twin develops asthma or bipolar disorder while the other remains healthy.

  SEE ALSO Causes of Cancer (1761), Chromosomal Theory of Inheritance (1902), DNA Structure (1953), Human Genome Project (2003), Gene Therapy (2016).

  Loss of DNA methylation (marking) can occur in colorectal cancer, such as in the polyps shown here. Because methylated genes are typically turned off, this loss of methylation can lead to abnormal activation of genes involved in cancer.

  1983

  Polymerase Chain Reaction • Clifford A. Pickover

  Kary Banks Mullis (b. 1944)

  In 1983, while driving along a California highway, biochemist Kary Mullis had an idea for how to copy a microscopic strand of genetic material billions of times within hours—a process that has since had countless applications in medicine. Although his idea for the polymerase chain reaction (PCR) turned into a billion-dollar industry, he received only a $10,000 bonus from his employer. Perhaps his Nobel Prize ten years later can be viewed as a sufficiently awesome consolation prize.

  Scientists usually require a significant amount of a particular DNA genetic sequence in order to study it. The groundbreaking PCR technique, however, can start with as little as a single molecule of the DNA in a solution, with the aid of Taq polymerase, an enzyme that copies DNA and that stays intact when the solution is heated. (Taq polymerase was originally isolated from a bacterium that thrived in a hot spring of America’s Yellowstone National Park.) Also added to the brew are primers—short segments of DNA that bind to the sample DNA at a position before and after the sequence under study. During repeated cycles of heating and cooling, the polymerase begins to rapidly make more and more copies of the sample DNA between the primers. The thermal cycling allows the DNA strands to repeatedly pull apart and come together as needed for the copying process. PCR can be used for detecting food-borne pathogens, diagnosing genetic diseases, assessing the level of HIV viruses in AIDS patients, determining paternity of babies, finding criminals based on traces of DNA left at a crime scene, and studying DNA in fossils. PCR was important in advancing the Human Genome Project.
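
  The power of the technique lies in simple doubling arithmetic, as the short Python sketch below illustrates (this is an illustration, not part of the original text; the single starting molecule, ideal doubling on every cycle, and 30-cycle run are assumptions).

      # Idealized PCR arithmetic: each thermal cycle roughly doubles the number
      # of copies of the target sequence between the primers.
      starting_molecules = 1   # PCR can begin with a single DNA molecule
      cycles = 30              # a typical run is a few dozen cycles (assumed)

      copies = starting_molecules * 2 ** cycles
      print(f"After {cycles} cycles: about {copies:,} copies")
      # -> After 30 cycles: about 1,073,741,824 copies ("billions ... within hours")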

  Medical journalist Tabitha Powlege writes, “PCR is doing for genetic material what the invention of the printing press did for written material—making copying easy, inexpensive, and accessible.” The New York Times referred to Mullis’s invention as “dividing biology into the two epochs of before PCR and after PCR.”

  SEE ALSO Chromosomal Theory of Inheritance (1902), DNA Structure (1953).

  PCR has been used to amplify DNA from 14,000-year-old fossil bones of saber-toothed cats preserved in asphalt. Such studies help scientists compare these extinct animals to various living cat species in order to better understand cat evolution.

  1984

  Telomerase • Clifford A. Pickover

  Elizabeth Helen Blackburn (b. 1948), Carolyn Widney “Carol” Greider (b. 1961)

  The chromosomes in our cells are each made of a long coiled DNA molecule wrapped around a protein scaffold. The ends of each chromosome have a special protective cap called a telomere that contains a sequence of bases represented as TTAGGG. Although the enzyme that copies DNA for cell division cannot quite copy to the very end of each chromosome, the telomere compensates for this potential flaw because the endpieces are simply TTAGGG, potentially repeated more than 1,000 times. However, with each cell division, a portion of the telomere is lost through this erosion process, and when the telomere becomes too short, the chromosome can no longer be replicated in these “old” cells. Many body cells enter a state of senescence (inability to divide) after about 50 cell divisions in a culture dish.
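
  The roughly 50-division limit can be illustrated with a toy calculation in Python (not from the book); the starting repeat count comes from the paragraph above, while the per-division loss and the critical length are illustrative assumptions chosen so that the quoted limit falls out of the arithmetic.

      # Toy telomere-erosion model: a fixed number of TTAGGG repeats is lost at
      # each cell division; once the telomere is too short, the cell is treated
      # as senescent. The loss rate and threshold below are assumptions.
      repeats = 1000          # initial TTAGGG repeats ("more than 1,000" in the text)
      loss_per_division = 18  # repeats lost per division (illustrative)
      critical_length = 100   # below this, assume replication can no longer occur

      divisions = 0
      while repeats > critical_length:
          repeats -= loss_per_division
          divisions += 1
      print(divisions)        # -> 50 divisions before senescence in this toy model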

  In 1984, while studying the microscopic protozoan Tetrahymena, biologists Carol Greider and Elizabeth Blackburn discovered telomerase, an enzyme with an RNA component that can counteract the chromosome erosion and elongate the telomeres by returning TTAGGG to the chromosome ends. Telomerase activity is very low in most somatic (nonreproductive) cells, but it is active in fetal cells, adult germ cells (which produce sperm and egg), immune system cells, and tumor cells—all of which may divide regularly. These discoveries suggest a connection between telomerase activity and both aging and cancer. Thus, various experiments are underway to test the idea of triggering telomerase activation or inactivation in order to either increase lifespan (by making cells immortal) or inhibit cancers (by changing immortal, continuously dividing cells to mortal ones). Several premature-aging diseases in humans are associated with short telomeres, and telomerase activation has been discovered in a majority of human tumors. Note that because the single-celled, freshwater Tetrahymena organism has active telomerase, it can divide indefinitely—and is, essentially, immortal.

  Greider and Blackburn write, “In the early 1980s, scientists would not have set out to identify potential anticancer therapies by studying chromosome maintenance in Tetrahymena. . . . In studies of nature, one can never predict when and where fundamental processes will be uncovered.”

  SEE ALSO Causes of Cancer (1761), Cell Division (1855), Chromosomal Theory of Inheritance (1902), HeLa cells (1951), DNA Structure (1953).

  Mice that are engineered to lack telomerase become prematurely old but return to health when the enzyme is replaced. Researchers can use certain stains to study development and degeneration of bone and cartilage in mice.

  1984

  Theory of Everything • Clifford A. Pickover

  Michael Boris Green (b. 1946), John Henry Schwarz (b. 1941)

  “My ambition is to live to see all of physics reduced to a formula so elegant and simple that it will fit easily on the front of a T-shirt,” wrote physicist Leon Lederman. “For the first time in the history of physics,” writes physicist Brian Greene, we “have a framework with the capacity to explain every fundamental feature upon which the universe is constructed [and that may] explain the properties of the fundamental particles and the properties of the forces by which they interact and influence one another.”

  The theory of everything (TOE) would conceptually unite the four fundamental forces of nature, which are, in decreasing order of strength: 1) the strong nuclear force—which holds the nucleus of the atom together, binds quarks into elementary particles, and makes the stars shine; 2) the electromagnetic force—between electric charges and between magnets; 3) the weak nuclear force—which governs the radioactive decay of elements; and 4) the gravitational force—which holds the Earth to the Sun. Around 1967, physicists showed how electromagnetism and the weak force could be unified as the electroweak force.

  Although not without controversy, one candidate for a possible TOE is M-theory, which postulates that the universe has ten dimensions of space and one of time. The notion of extra dimensions may also help resolve the hierarchy problem concerning why gravity is so much weaker than the other forces. One solution is that gravity leaks away into dimensions beyond our ordinary three spatial dimensions. If humanity did find the TOE, summarizing the four forces in a short equation, this would help physicists determine whether time machines are possible and what happens at the center of black holes, and, as astrophysicist Stephen Hawking said, it would give us the ability to “read the mind of God.”

 
