The number of stable molecules rises so explosively that beyond ten atoms it quickly becomes impossible to count them all. A mere few dozen atoms can have an energy landscape with billions or trillions of valleys. A few of these valleys are deep, corresponding to the most stable molecules. Their atoms often show highly regular arrangements—cubes, tetrahedra, octahedra (two pyramids glued together at their base), or the truncated icosahedra of bucky-balls. But most valleys are shallow and harbor the least stable molecules, often an irregular jumble of atoms where a small nudge can push all atoms into entirely new configurations.9
Potential-energy landscapes of molecules can be rugged, just like the adaptive landscapes where biological evolution unfolds. And just like adaptive landscapes, in which locations correspond to different DNA sequences or genotypes, energy landscapes also exist in an abstract realm with more dimensions than our three-dimensional minds can visualize.
These similarities are momentous. Just as fitness landscapes can teach us how biological evolution creates new kinds of life, so too can energy landscapes teach us how the inorganic world creates the new and beautiful. They can teach us how a jumble of atoms self-assembles into molecules like bucky-balls that are not only complex and easy on the eye, but also so stable that their radiation signature can reach us from other galaxies.
But you may have noticed a difference between the two kinds of landscapes. The high peaks of evolution’s landscapes—occupied by well-adapted organisms—are the best places to be, whereas the peaks in an energy landscape are the worst places. They correspond to the most unstable molecules, whose atoms immediately shift position to release their potential energy, rearranging themselves until they settle into the stable configuration of a valley.
That difference is less profound than it seems. Consider a raised relief map, the kind of landscape model you find in visitor centers of some national parks like Acadia or Grand Canyon National Park. It is a three-dimensional scale model of the park’s landscape that highlights its peaks and valleys. Underneath the surface, such maps are often hollow, and by flipping a hollow relief map upside down, you turn its peaks into valleys and its valleys into peaks. This simple change in perspective is all you need to convert an adaptive landscape to the energy landscape of a molecule. Where evolution seeks out the highest peaks in an adaptive landscape, atoms and molecules seek out the deepest valleys of their energy landscapes—those corresponding to the most stable molecules.
These landscapes do not just harbor molecules with beautiful architectures; they also contain glittering objects that delight chemical engineers and nanotechnologists. Take the catalytic converter cleansing a car’s exhaust. It contains very expensive metals like platinum and gold whose atomic surface texture can accelerate chemical reactions. These reactions break down toxic molecules such as carbon monoxide, which is how a catalytic converter helps detoxify exhaust gas.
Because a catalyst’s surface is so important, equipping a catalytic converter with a solid chunk of gold would be a very bad idea. Most of the gold would be buried inside that chunk. Even in a particle of a mere hundred thousand gold atoms only 10 percent of the atoms would be on the surface. Much better to scatter that chunk into myriad tiny gold particles—gold clusters, as chemists like to call them—so that most of the atoms are on the surface. How big a difference that can make was shown in 2012 by Spanish chemists who increased a gold catalyst’s efficiency one-hundred-thousand-fold by equipping it with clusters of no more than ten gold atoms.10
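A crude back-of-the-envelope model shows why scattering the gold matters. Treat the particle as a cube of atoms (a simplifying assumption; real clusters are not cubic) and count the atoms that sit on its faces:

```python
# Rough estimate of the surface-atom fraction of a metal cluster,
# modeled (as a simplifying assumption) as an n x n x n cube of atoms.
# The interior atoms form an (n-2)^3 cube; everything else is surface.

def surface_fraction(n: int) -> float:
    total = n ** 3
    interior = max(n - 2, 0) ** 3
    return (total - interior) / total

# A cube of 46 atoms per side holds ~97,000 atoms -- close to the
# hundred-thousand-atom particle in the text.
print(f"{46**3} atoms, {surface_fraction(46):.0%} on the surface")  # ~12 percent
print(f"{10**3} atoms, {surface_fraction(10):.0%} on the surface")  # ~49 percent
```

For a hundred-thousand-atom particle the model gives roughly one atom in eight on the surface, the same order as the text’s estimate; shrink the cluster to a thousand atoms and nearly half are exposed.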
Atomic clusters of less precious metals like iron, nickel, or cobalt, together with nonmetals like sulfur or carbon, are important far beyond catalytic converters. They catalyze countless chemical reactions, some of them crucial to the chemical industry, like those creating synthetic lubricants from coal, or fuels from biological waste. Other atomic clusters keep our body going by helping to extract energy from nutrients. Such clusters can self-assemble into various shapes. Their atoms can be spread out like a sheet of dough, bunched up into a ball, or arranged in a crystalline lattice. This shape can make the difference between an efficient catalyst and a sluggish one. To find out whether a catalytic cluster can self-assemble into a shape that is perfect for catalysis, chemists study the deepest valleys in its energy landscape and scrutinize this landscape for obstacles that can prevent this shape from emerging.11 And when chemists do that, they find a formidable obstacle that is already familiar—the same one that evolution faces when searching the highest peak in an adaptive landscape.
Just as natural selection can only push uphill, gravity can only pull downhill. When atoms of gold, carbon, or iron associate haphazardly, the result is a jumble, an atom cluster with no organization, corresponding to a marble dropped at a random place in a multidimensional energy landscape. Perhaps that marble lands in a valley or on a peak, but more likely than not it will come down somewhere on the slope between a peak and a valley. From there, it will slide downslope to the nearest valley bottom, where the cluster will find a stable pattern of atoms that requires the least rearranging. There are many more shallow valleys than deep ones, so this resting place will almost certainly be shallow—a not-very-stable cluster with its atoms arranged in a messy jumble. And there the cluster will be stuck forever.
But nature does create bucky-balls, so something must be missing from this picture. And that something is easy to understand: good vibrations.
The technical term is heat, the incessant tremors of atoms and molecules all around us. The hotter it is, the more strongly the atoms and molecules will vibrate. When it gets too hot, these vibrations eventually get so violent that the chemical bonds tying a molecule or cluster together rupture, and its atoms scatter every which way. Conversely, at ever colder temperatures, these vibrations get weaker and weaker until they cease completely at absolute zero, a temperature of −273.15 degrees Celsius. In between, a molecule’s bonds—those figurative springs—hold, but they are incessantly pulled and pushed as atoms are shaken and jostled. (It’s the same kind of jostling that is responsible for the vibrations of proteins, which allow enzymes to perform useful tasks.)
A metaphorical marble exploring an energy landscape would jitter unceasingly from these vibrations, as if the landscape itself were constantly trembling, like an adaptive landscape trembling from genetic drift. The hotter it gets, the stronger the tremors become. If that marble started in a shallow valley, even small jitters might help it climb the saddle connecting to a nearby valley, which may be deeper. If the marble started in a deep valley, that saddle would be more like a mountain pass, and the marble could not traverse it unless the tremors were very strong. At the highest temperature where the atoms don’t yet fly apart, the tremors would be so strong that the marble would jump around erratically and visit all parts of the landscape, although it would spend most of its time in the deepest valleys—those that are hardest to jump out of.
These jumps help the marble explore the landscape, but they also create a new problem: the marble will never settle in a valley for good—the molecule’s atoms will incessantly shift and reconfigure. Fortunately for nature’s creative powers, this problem can be avoided by cooling the atoms. Cooling weakens the marble’s jitters, so the marble is less likely to leave any deep valley that it is visiting. It will continue to explore that valley, which can itself contain many clefts, cracks, and crevices, and as the temperature drops further, the marble will burrow deeper and deeper, exploring the valleys within a valley, and the even shallower valleys within. If the atoms are cooling slowly enough, the marble will eventually come to rest at the very bottom of the deepest valley, which corresponds to the most stable molecule.
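This slow-cooling strategy is, in essence, what computer scientists call simulated annealing. A minimal sketch of the idea, on a made-up one-dimensional landscape (the function below is illustrative, not a real molecular potential):

```python
import math
import random

# Simulated annealing on a rugged 1-D "energy landscape" (an invented
# toy function, not a real molecular potential): many shallow valleys,
# with the deepest ones near the center.
def energy(x: float) -> float:
    return 0.1 * x * x + math.sin(5 * x)

def anneal(steps: int = 20000, t_start: float = 5.0, t_end: float = 0.01) -> float:
    random.seed(0)
    x = random.uniform(-10, 10)  # the marble is dropped at random
    for i in range(steps):
        # Slow exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        x_new = x + random.gauss(0, 0.5)  # a thermal jitter
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with a probability that shrinks as the temperature drops.
        if energy(x_new) < energy(x) or \
                random.random() < math.exp((energy(x) - energy(x_new)) / t):
            x = x_new
    return x
```

Early on, the strong jitters let the marble hop over passes between valleys; as the temperature falls, uphill hops become rare and the marble burrows into one of the deepest valleys it has found.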
That’s at least the world according to theory, the theory of statistical physics—a branch of physics dealing with large numbers of particles like atoms. But this theory works. Ask an amateur chemist laboring to grow large crystals from everyday materials like sugar, salt, or borax in their kitchen, and they will tell you that slow is the way to go—the slower you cool, the larger and more regular a crystal will grow.12 To be sure, many crystals are not quite like bucky-balls. Their atoms are not bound by strong covalent bonds, but rather by weaker bonds that rupture at more modest temperatures or that dissolve in water, like salt crystals do.13 What is more, the building blocks of crystals need not be atoms like carbon. They may themselves be molecules like table sugar. But no matter whether it’s a sugar crystal, a bucky-ball, or a gold cluster, the principle is the same: particles like atoms and molecules can self-assemble into a stable architecture when each molecular part is free to vibrate and jitter—just enough and not too much—and free to probe myriad configurations of an enormous puzzle. Whenever one of the puzzle pieces latches on in the right place, the marble has climbed down into a deeper valley—the particles have found a more stable configuration. Innumerable such descents later, nature has created one of those wonders where trillions of atoms or molecules are arranged in a perfect geometric pattern—all without a guiding hand.
Molecules that explore vast energy landscapes are behind much of the inanimate world’s beauty, from the atmospheric wonder of snowflakes to crystalline rocks like granite and gemstones like diamonds, rubies, and emeralds. And just as their materials are diverse, so too are the amounts of heat and the rates of cooling needed to build them. To get a bucky-ball to self-assemble, carbon must be heated to thousands of degrees, far above the temperatures at which a snowflake grows, but then it takes mere milliseconds of cooling to create that perfect ball.14 In contrast, to solve the complex puzzle needed to assemble a large diamond, nature often needs more than a billion years.15
Most crystals in nature do not have the perfect shape dictated by the arrangement of their atoms with the lowest potential energy.16 The bewildering diversity of snowflakes makes that plain. Their shapes are often very different from the ideal shape of crystalline water ice—a hexagonal prism. To be sure, the smallest snow crystals often do display this perfection, but larger ones don’t. They grow from such a prism, a tiny crystalline seed inside a swirling cloud of ice-cold water vapor, and as they do, fewer water molecules tend to attach along the prism’s flat surfaces than attach near the prism’s protruding edges—they get caught there as they drift through the air. In other words, a snowflake grows more slowly in some places and faster in others. Where it grows faster, branches can sprout in a process that physicists call a branching instability. These branches can father new branches, and so on. This is how the familiar arborescent filigree of a full-grown snowflake builds itself.17
When, on a cold winter day, we watch millions of these creations quietly rain from the sky as far as the eye can see, we can begin to grasp the vastness of nature’s creative landscapes. Each snowflake is far from an amorphous jumble of water molecules. It is a good but not perfect solution to the problem of minimizing potential energy. Each one occupies a different deep—but not the deepest—valley in the energy landscape of water ice. And that’s why each one is unique.
Snowflakes and other crystals not only teach us that imperfection can harbor great beauty, but they also make bucky-balls all the more remarkable. That’s because the landscape of bucky-balls also has valleys beyond measure. Some are shallow, with an amorphous jumble of carbon atoms. Others are deep, but not quite as deep as the deepest valley of that perfect carbon soccer ball—they correspond to imperfect ovoids with various distortions of the perfect soccer ball.18 Yet, when the conditions of the experiment are just right, the majority of all carbon atoms aggregate into bucky-balls. Every single one of them has found the deepest valley.19 The lesson: the right amount of vibration can be so powerful that it conquers even the most complex landscape.
It is no coincidence that we encountered similar jostling in the inverted landscapes of life’s evolution. Many of these landscapes cannot be conquered by selection’s steady uphill moves, because their topography is no less varied than the energy landscapes of the inorganic world. The jostling in evolution’s adaptive landscapes does not come from heat, of course, but from heat’s analog, the tremors of genetic drift, which allow small populations to escape shallow adaptive peaks. When these tremors are very strong—in the smallest populations—they can drive a population across the landscape no matter how rough the terrain. The smaller a population is, the stronger genetic drift becomes, the more violent these tremors are, and the faster the population explores its adaptive landscape. Drift prevents a population’s capture by a shallow peak, just like heat helps a molecule, crystal, or atom cluster avoid entrapment in a shallow valley of an energy landscape.
Heat is as important to creating the beauty of the inorganic world as genetic drift is to the beauty of the living world. Superficially, heat and drift are completely unrelated, but deep down, they are means to the same end—conquering the landscapes of creation. Both of them permit imperfection to find perfection.
Incredibly, the beauty of diamonds and butterflies comes from the same source, even though they are so different. Perhaps, then, this source is important far beyond nature. Some scientists and engineers have had this suspicion for decades. More than that, they found proof, which helps them not only find perfect solutions to the hard problems they face, but also allows them to do something even more important: delegate the task of solving such problems. Not to people but to computers—creative computers.
Chapter 6
Creative Machines
When we say that trucking is tough business, we usually think of the drivers and their lonely hours on the road, extended separations from family, exercise-deprived work days, and artery-clogging food eaten at highway rest stops. But for the companies employing them, life is no bed of roses either. Elbowing their way through a highly competitive market with razor-thin profit margins,1 the name of their game is efficiency—in fuel use, downtime, and, most of all, routing. The difference between success and failure can hinge on a few percent of excessive mileage.
Routing a truck seems easy enough. Load goods for delivery at the depot, mark customers on a map, connect the dots, and off you go. But don’t be fooled. A truck has limited capacity, needs to deliver within a certain time window, may have more than one depot to visit, and must visit multiple drop-off points. Even the problem of finding the shortest route to these drop-off points would stump any driver, rare geniuses aside. Just consider the numbers. There are six possible routes for visiting three customers—okay, your average driver might still be able to figure that one out. But with twenty-four different routes for four customers, and 120 routes for five, the problem gets harder. Ten customers? Three million routes. Fifteen? More than a trillion. The numbers rise so fast it is scary, and they become large beyond comprehension for realistic numbers of hundreds of customers.2 And that’s just for one truck. Companies like Federal Express have more than forty thousand trucks, not to speak of their six hundred planes that fly to more than two hundred countries. Figuring out the best route takes more than a genius.
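The counts above follow the factorial function: a truck leaving a single depot can visit n customers in n! different orders. A quick check:

```python
from math import factorial

# With n customers, a single truck can visit them in n! different
# orders: 3 -> 6, 4 -> 24, 5 -> 120, and the numbers explode from there.
for n in (3, 4, 5, 10, 15):
    print(f"{n:2d} customers: {factorial(n):,} routes")
```

Ten customers already yield 3,628,800 routes, and fifteen push past a trillion—exactly the explosion the text describes.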
The problem—to deliver the largest amount of cargo at the lowest possible cost—is a mathematical one. Because it is complex, and because companies need to solve it every day, it’s a problem ready-made for computers. But the answers don’t come ready-made—not unless you have an algorithm, a prescribed sequence of simpler computations that a computer can follow to solve the problem day after day after day.
Finding algorithms is another tough business—not that of truckers, but of computer scientists. The problem of finding the shortest delivery route is so famous (and hard) that computer scientists have created their own acronym for it: the VRP, or vehicle routing problem. It is a close cousin of an even more famous and older problem that goes back to the nineteenth century, the traveling salesman problem, or TSP. This problem arises when a salesman needs to visit multiple customers and tries to keep his route as short as possible. It was important not only for the up to three hundred and fifty thousand traveling salesmen who peddled their wares throughout the United States in the late nineteenth and early twentieth centuries. It also came up in other professions, such as traveling preachers and traveling judges. Judges, for example, journeyed through their districts on a fixed route of towns—a circuit—holding court in each town on specific days of the year. The name of circuit courts for US regional courts is a relic of this long-extinct practice.3
Thousands of computer scientists have grappled with difficult problems like the TSP and VRP, and not because they care so deeply about truckers and salespeople, but because these problems pop up in many other areas. Take chemistry. To map the atoms in a complex molecule, a chemist can crystallize the molecule, shine an X-ray beam onto the crystal, and measure how its atoms diffract the beam. The problem is that she needs to measure the beam’s diffraction not just from one angle, but from hundreds of different angles. In other words, she needs to rotate the crystal through hundreds of positions. The faster she can do that—the shorter the path between these positions—the shorter her experiment will be.
An astronomer who wants to observe hundreds of stars or galaxies in the night sky faces a similar problem. For each observation, a telescope must be rotated into a precise position, a job that computer-driven motors handle for large telescopes. Modern telescopes are hugely expensive and are shared by researchers all around the world, who must line up to obtain precious observation time. The faster a telescope can visit all desired positions—the shorter the path through them—the more time is available for observations, and the more efficiently the telescope is used.
Likewise, when computer-chip designers lay out millions of transistors on a new chip, they need to keep the wiring between transistors short. Otherwise, their design would waste precious square millimeters of the chip’s real estate. Plus, a chip whose electrons have to travel long distances between transistors would squander costly energy.
Life Finds a Way Page 10