
Life's Ratchet: How Molecular Machines Extract Order from Chaos


by Peter M. Hoffmann


  Cardano makes an interesting hero for our story. As a physician, he understood the chance occurrences that play a role in people’s lives. His life was a jumble of random events. He founded probability theory, invented a method for writing secret messages, and even had a connection to my current hometown of Detroit: he was the inventor of the Cardan shaft, or universal joint, still used in automobiles (and initially designed for a water-pumping system). Sadly, randomness got the better of him in the end, when a string of bad luck landed him first in jail and finally in the poorhouse.

  After Cardano, the mathematical treatment of games of chance became commonplace in the seventeenth and eighteenth centuries. Rich (and apparently quite bored) aristocrats sponsored mathematicians to figure out the odds of various games. In one such case, the mathematicians Blaise Pascal (1623–1662) and Pierre de Fermat (1601–1665) were commissioned by the Chevalier de Méré to solve the problem of points: How should wagers be divided if a game is interrupted too early? Here is the scene: The Chevalier de Méré challenges the Comte de Dubois (a fictitious scenario) to a simple game of dice, in which the winner is whoever throws 6 sixes first. After ten minutes of play, the count is suddenly summoned to meet the king in Versailles. So far, he has thrown 3 sixes, and the knight 4. Neither of them trusts the other. How can the wager of sixteen pieces of gold be fairly divided? Clearly, the wager had to be distributed according to the probability that either player could still win the game. Pascal decided that this problem was too difficult to solve by himself, so he contacted the renowned amateur mathematician Fermat to help him out. Fermat and Pascal corresponded about this problem for a while, until Fermat found a rather tedious way to solve it using Cardano’s simple rule. This inspired Pascal to improve on Fermat’s result by devising a general formula, based on a triangle of numbers now called Pascal’s triangle.

  FIGURE 2.1. Pascal’s triangle. Each number is the sum of the two numbers directly above it. For example, the number 6 in the fourth row is the sum of the two 3’s directly above it (arrows). The numbers represent how many ways you can choose k (column) items out of a total of n (row) possibilities. This triangle was used by Pascal to solve the problem of points. The fourth row, which is discussed in the text, is highlighted.

  The numbers in Pascal’s triangle provide the number of ways you can choose a certain number of items (let’s say k items) out of n available items. An example is the lottery. In a lottery, we pick, say, 5 numbers out of a possible 56, as in the Mega Millions game. In how many ways could we do that? A lot! Let’s first try a simpler example: How many ways are there to select 2 items out of 4 available items A, B, C, and D? To find out, we go to the fourth row in Pascal’s triangle (Figure 2.1). This row contains the numbers 1, 4, 6, 4, and 1. These numbers tell us how many ways there are to pick k items out of 4 available items (if we had 5 available items, we would need to look at row 5 of the triangle, and so on). The numbers 1, 4, 6, 4, and 1 correspond, from left to right, to picking 0, 1, 2, 3, or 4 items out of 4 available items.

  Let’s go step by step, moving from left to right: The first number in the fourth row is a 1. This number tells us in how many ways we can select zero items out of 4. Not selecting an item can only be done in exactly one way (mathematics can be strange). The next number in the row is a 4; it tells us in how many ways we can select 1 item out of the 4. Since there are 4 possible items, we have 4 ways to pick 1 of them (i.e., we can pick either A, B, C, or D). It gets interesting (and less obvious) when we pick more than 1 item out of 4. How many ways are there to pick 2 items out of 4? Looking at the next number in the fourth row, there should be 6 different ways to do this. And indeed there are: (A, B), (A, C), (A, D), (B, C), (B, D), and (C, D) (we are not allowing picking the same letter twice).
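
  For readers who like to check such counts by computer, here is a minimal Python sketch (the labels A through D and the use of the standard itertools library are just one way to do it) that enumerates the selections:

    from itertools import combinations

    items = ["A", "B", "C", "D"]

    # For k = 0 through 4, enumerate every way to choose k of the 4 items.
    for k in range(5):
        picks = list(combinations(items, k))
        print(f"choose {k} of 4: {len(picks)} ways")

    # Prints 1, 4, 6, 4, 1 ways -- the fourth row of Pascal's triangle.
    # For k = 2, the six selections are (A,B), (A,C), (A,D), (B,C),
    # (B,D), (C,D), exactly as listed above.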

  Now, to find out how many ways there are to pick 5 numbers out of 56 would require us to continue the triangle all the way down to the 56th row. You are invited to try, but you will quickly realize that it is a difficult task. The numbers grow quite large. Fortunately, there is a formula to calculate any entry in Pascal’s triangle without having to draw the triangle. It is called the binomial coefficient, given by n!/((n − k)! k!), where n is the number of available items, k is the number of items we pick, and n! = 1 · 2 · . . . · n. The expression n! is called the factorial of n. For example, the factorial of 3 is 3! = 1 · 2 · 3 = 6. For our lottery problem, we find that there are 3,819,816 ways to select 5 numbers out of 56. No wonder I haven’t won the lottery yet.
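
  If you would rather not multiply fifty-six factorials by hand, the formula is a one-liner in Python; a minimal sketch (math.comb needs Python 3.8 or newer):

    from math import comb, factorial

    def binomial(n, k):
        # The binomial coefficient n!/((n - k)! k!) from the text.
        return factorial(n) // (factorial(n - k) * factorial(k))

    print(binomial(4, 2))   # 6 ways to pick 2 items out of 4
    print(binomial(56, 5))  # 3819816 ways to pick 5 lottery numbers out of 56
    print(comb(56, 5))      # same result straight from the standard library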

  How do the binomial coefficients help to solve the problem of points? Pascal and Fermat realized that you need to figure out in how many ways each player could still win the game. If the Chevalier de Méré needs only two more wins, and the Comte de Dubois three, what is the maximum number of rounds they need to play until one of the men is the winner? The answer is four rounds (2 + 3 − 1 = 4). Why? If the knight wins none or only one of the four rounds, then the count must have won at least three and is the winner. If the knight wins two or more rounds, the count must have won fewer than the three he needs, and the knight is the winner. Either way, one of them will win.

  Now that we know they need to play at most four more rounds to have a winner, we only have to calculate in how many ways the knight can pick his two wins out of the four rounds. And that is the same problem we just solved: He has six different ways to win (think of our items A, B, C, and D as labels for the four rounds they need to play). If he wins more than two rounds, he also wins the game, so we have to consider those possibilities as well. If he wins three rounds, he has four ways of doing so (see Pascal’s triangle), and if he wins all four, there is only one way to achieve this feat. In total, he therefore has 6 + 4 + 1 = 11 ways of winning the game. The count, by contrast, has only 4 + 1 = 5 different ways of winning the game (picking three or four wins out of four rounds). Therefore, if they stop the game before the last four rounds, the payout should be divided in the ratio of 11 to 5 in favor of the knight (who has more ways of winning the game than the count). If there are 16 pieces of gold left, the knight should get 11 and the count 5.
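
  The entire division of the stakes can be checked in a few lines; a sketch that simply adds up the relevant entries of Pascal’s triangle for the scenario above:

    from math import comb

    def ways_to_win(needed, rounds):
        # Count the win/loss sequences of the remaining rounds in which
        # a player collects at least `needed` wins.
        return sum(comb(rounds, k) for k in range(needed, rounds + 1))

    knight_needs, count_needs = 2, 3
    rounds = knight_needs + count_needs - 1      # 2 + 3 - 1 = 4 rounds

    knight = ways_to_win(knight_needs, rounds)   # 6 + 4 + 1 = 11
    count = ways_to_win(count_needs, rounds)     # 4 + 1 = 5

    stake = 16
    print(stake * knight // (knight + count))    # 11 gold pieces for the knight
    print(stake * count // (knight + count))     # 5 gold pieces for the count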

  The Science of Ignorance

  Around the time Pascal and Fermat solved the problem of points, a salesman of buttons and ribbons named John Graunt (1620–1674) noticed an interesting pattern in the mortality rolls of London. It seemed that the number of deaths was about the same every year, even though there were many causes of death and the exact circumstances of each death were unique. When he looked at a large-enough sample—provided by the city of London—Graunt found that individual differences became irrelevant and general patterns emerged. The science of statistics was born.

  Statistics has been called the theory of ignorance. It’s an apt description. Statistics is what we do when we face complex situations with too many influencing factors, when we are ignorant of the underlying causes of events, and when we cannot calculate a priori probabilities. In many situations, from the motion of atoms to the value of stocks, patterns emerge when we average over a large number of events—patterns not obvious from looking at individual events. Statistics provides the clues to understanding the underlying regularities or the emergence of new phenomena arising from the interaction of many parts.

  The work of Graunt led to the first life tables, which gave the probability that a newborn would end up living to a certain age. This was the kind of information life insurance companies needed to make money: If you insured enough people and knew your life tables, you could charge people enough money to make sure you ended up in the black, even if occasionally someone died before his or her time. Life insurance became well-informed gambling, with probabilities taken from real life. In his book The Drunkard’s Walk, Leonard Mlodinow reproduces Graunt’s life table for London in 1662. In the late 1600s, 60 percent of all Londoners died before their sixteenth birthday. Such an awful statistic makes modern-day Afghanistan look like paradise: there, the age by which 60 percent of people have died is close to sixty. In Japan, by comparison, that age is around ninety.

  Although statistics emerged from the need to quantify economic and sociological data, it was soon recognized that this new science could benefit the hard sciences as well. Repeated measurements of the same phenomenon, especially in astronomy, were observed in the eighteenth century to follow a law of errors: Errors seemed to obey a universal distribution. However, it was difficult to find the correct mathematical function that would fit the error distributions. After all, every set of measurements fit the distribution only approximately, and the approximation became good enough to allow guessing the right function only after a huge number of measurements. After several false guesses by various eminent mathematicians, the German mathematician Carl Friedrich Gauss (1777–1855), using some of his astronomical data, recognized that the so-called normal distribution seemed to fit the bill.

  The normal distribution had been under mathematicians’ noses all along. The French mathematician and gambling theorist Abraham de Moivre (1667–1754), in his 1733 book, The Doctrine of Chances, had published a formula that extended Pascal’s triangle to very large numbers of trials, much larger than could be practically computed by Pascal’s method. In the limit of large numbers, Pascal’s triangle could be approximated by a formula describing a curve that looked like a bell. This bell curve, or normal distribution, is what Gauss found in the errors of astronomical data. The French mathematician Pierre-Simon Laplace picked up where Gauss left off and proved that any measurement that depends on a number of random influences tends to have errors that follow the normal distribution. Today, Laplace’s central limit theorem is a key part of statistics, able to predict distributions as varied as people’s heights or the masses of stars.
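
  De Moivre’s limit is easy to see numerically. The sketch below (the choice of 100 coin flips is an assumption made purely for illustration) compares a deep row of Pascal’s triangle, scaled to probabilities, with the bell curve:

    from math import comb, exp, pi, sqrt

    n = 100                            # row 100 of Pascal's triangle: 100 coin flips
    mean, sigma = n / 2, sqrt(n) / 2   # center and spread for a fair coin

    for k in range(40, 61, 5):
        binomial = comb(n, k) / 2**n   # exact probability of k heads
        bell = exp(-(k - mean)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
        print(f"k = {k}: binomial {binomial:.5f}, bell curve {bell:.5f}")

    # The two columns agree closely -- Pascal's triangle, in the limit of
    # large numbers, traces out the normal distribution.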

  The use of the normal distribution in statistics was perfected by the Belgian scientist Adolphe Quetelet (1796–1874), who subjected everything he could get his hands on to statistical analysis: chest sizes of sailors, heights of men and women, murders committed with various weapons, drunkenness, and marriages. Wherever he looked, he found the normal distribution. And when he did not find it, he knew that something had gone awry. When, for example, he found the distribution of heights of French military conscripts strangely distorted at the low end of the scale, he realized that many short men of military age had lied about their height to get out of serving. The minimum height for military service was 157 centimeters (5 feet 2 inches). If you were 157.5 centimeters, why not buckle your knees a little bit during measurement and escape the dreaded military service?

  Quetelet’s work illustrated how the error law, the normal distribution, and the central limit theorem governed almost everything. His books were eagerly read not only by future sociologists, but also by future physicists and biologists. If statistics was useful in economics, medicine, and astronomy, why not in other areas as well? As we will see in Chapter 3, nineteenth-century physicists like James Clerk Maxwell and Ludwig Boltzmann used Quetelet’s work to develop the statistics of atoms and molecules.

  Mathematician Francis Galton, Charles Darwin’s cousin, was one of the first to apply Quetelet’s ideas to a wide range of biological phenomena. He found that the normal distribution governed almost every measurement of an organism: heights, masses of organs, circumferences of limbs. One of his most important discoveries was regression toward the mean: Galton found that the offspring of a parent at the outer reaches of a distribution, for example, a very short man or a very tall woman, generally tended to “regress” toward the mean of the distribution. In other words, the son of an exceptionally short man tended to be taller than his father, and the daughter of an extremely tall mother tended to be shorter than her mother. Mozart’s children were not geniuses like their father; neither were Einstein’s. And parents of below-average intelligence often have smarter children. In the long run, we all tend toward the average. In some sense, this is a good thing. Genius is unpredictable—which makes it all the more puzzling that Galton became one of the founders of eugenics, the idea that selective breeding of humans could improve humanity. Beyond the obvious human rights issues with this awful idea, Galton’s own regression toward the mean suggested that the prospect of success would have been highly questionable. “Breeding” two highly intelligent humans would never guarantee that their offspring would be more intelligent than, or even as intelligent as, their parents. According to Galton’s own principle, the best bet may be on “less intelligent.”
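
  Regression toward the mean falls out of even a toy model. In the sketch below, a child’s height inherits half of the parent’s deviation from the mean plus fresh random noise; the numbers (a 175-centimeter mean, the 0.5 inheritance weight) are assumptions for illustration, not Galton’s data:

    import random

    random.seed(1)
    MEAN = 175.0      # assumed population mean height in cm
    WEIGHT = 0.5      # assumed inherited fraction of the parent's deviation

    def child_height(parent):
        # Toy model: inherit part of the deviation, add random noise.
        return MEAN + WEIGHT * (parent - MEAN) + random.gauss(0, 5.0)

    tall_parent = MEAN + 14.0   # an exceptionally tall parent: 189 cm
    children = [child_height(tall_parent) for _ in range(10_000)]

    print(sum(children) / len(children))   # about 182 cm: still tall,
                                           # but closer to the mean than 189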

  Despite his tragically misguided ideas, Galton made other important contributions to statistics, such as the coefficient of correlation. This statistic measured how two different variables were statistically linked, or correlated. One of the main statistical tools of modern biology and medicine, the coefficient of correlation is mathematically sound but has to be used with care. Just because stork numbers and baby births may be correlated over a few years does not mean that storks bring babies. A correlation is a hint of a possible connection, not a proof.
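
  Galton’s coefficient survives today as Pearson’s r, and it is simple enough to compute from scratch; a sketch with invented stork and birth counts, to make the correlation-is-not-causation point concrete:

    from math import sqrt

    def pearson_r(xs, ys):
        # Coefficient of correlation: covariance scaled by both spreads,
        # so the result always lies between -1 and +1.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    storks = [120, 135, 150, 160, 175]       # invented yearly stork counts
    births = [1000, 1100, 1180, 1250, 1330]  # invented yearly birth counts
    print(pearson_r(storks, births))         # close to 1.0 -- correlated,
                                             # but storks still don't bring babies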

  With Galton’s and Quetelet’s ideas, statistics flourished in the nineteenth century. It became an integral part of biology, and through his use of statistics, Mendel discovered the laws of heredity. The scene was now set to recognize randomness as an important player in the story of life.

  Randomness and Life: Three Views

  The question of chance versus necessity occupied the human mind for thousands of years. For much of this time, philosophers, theologians, and scientists denied randomness any meaningful role in nature, with atomism being the most obvious victim of randomness phobia. However, with scientific advances in the late nineteenth and early twentieth centuries, this stance became more and more difficult to maintain. Randomness reared its ugly head first in biology, through Quetelet and Galton’s work, and finally in physics. The final battles over randomness played out in debates over the existence of atoms, the emergence of statistical and quantum mechanics, and new insights into molecular evolution and the role of mutations.

  The randomness debate involved several famous protagonists. Nineteenth-century Austrian physicist Ludwig Boltzmann, cofounder of statistical physics, fought for the existence of atoms, which was still doubted more than two thousand years after Democritus and Epicurus. Twenty years later, Albert Einstein proved the existence of atoms in his famous papers on Brownian motion, but was unhappy with the implication that the new science of quantum mechanics was at heart built on randomness. These debates also continued in biology. Ever since Quetelet and Galton had shown aspects of physiology to be governed by statistics, and ever since the theory of evolution had introduced randomness as a driver of novelty in the development of life, pitched battles were fought over the role of chance in the history of life. Three of the most prominent combatants were D’Arcy Wentworth Thompson (1860–1948), Pierre Teilhard de Chardin (1881–1955), and Jacques Monod (1910–1976).

  A Scottish mathematician and biologist, Thompson was best known for his 1917 masterpiece On Growth and Form, a book filled with astonishing insights (and many fascinating diagrams) into how mathematical principles guide the shapes and forces of living organisms. Thompson did not believe in teleological life forces. It seemed frivolous to him to invoke nebulous reasons when a mathematical or physical explanation would do. He saw a continuity of complexity acting throughout all of nature, and therefore no unbridgeable chasm between the living and the dead. “The search for differences . . . between the phenomena of organic and inorganic, of animate and inanimate things, has occupied many men’s minds, while the search for community of principles or essential similitudes has been pursued by few. . . . Cell and tissue, shell and bone, leaf and flower, are so many portions of matter, and it is in obedience to the laws of physics that their particles have been moved, molded and conformed.”

  In all of this, Thompson was humble—he allowed the possibility that not all phenomena of life could be explained through physical laws alone. But he felt that too often, biologists of his time surrendered too early, and that many phenomena of life could be explained, if scientists were given sufficient time to gain insight into the complexities involved. In the battle between purpose and mechanism, he understood the usefulness of invoking final causes, but realized that one cannot stop there, but must find the physical reasons for how structures arise. “In Aristotle’s parable, the house is there that men can live in it; but it is also there because the builders have laid one stone upon another.” For Thompson, a full explanation needed to address both why a structure was there and how it was constructed.

  On the other hand, Thompson, while fond of mechanistic explanations, did not support the theory of evolution. For him, explanations should be explanations of necessity—chance was to play no role. He also did not favor theological hand-waving: “How easy it is, and how vain, to survey the operations of Nature and idly refer her wondrous works to chance or accident, or to the immediate interposition of God.” Invoking chance, God, or any extraneous life principle when met with ignorance was a cheap trick, according to Thompson, designed to keep us from doing the hard work of finding the true causes.

  Where Thompson had polite disdain for final causes, Chardin, a French Jesuit priest, paleontologist, and anthropologist, celebrated them—envisioning even atoms and molecules as bound to a higher purpose. Chardin made the reconciliation of science and religion his life’s work. As a scientist, he knew that the theory of evolution was the best explanation for the development of the living world, and he became an enthusiastic champion of evolution, although with a twist: In his masterpiece The Phenomenon of Man (written in the 1930s, but published in 1955), Chardin envisioned evolution as an upward motion toward ever more complex and sophisticated forms of life, culminating ultimately in a single, universal mind, which he equated with God. According to Chardin, evolution was guided by a mysterious psychic energy, an energy not yet measured or discovered, but nevertheless evident from the progress seen in the evolution of our universe. Mind was primary, pulling matter along in its wake. Chardin’s philosophy was to give “primacy to the psychic and to thought in the stuff of the universe.” Voltaire had made fun of such ideas two hundred years earlier, but now a better understanding of the awe-inspiring history of evolution, and the need to define humanity’s place in it, made such animistic philosophies fashionable again.

 
