
This Explains Everything


Edited by John Brockman


  I hardly remember the details (being no scientist, of course), but I remember reading about Paul Dirac’s theory of the sea of negative energy out of which popped—by a hole, an absence—the positron, which built the world we know. I hope I have this right and haven’t made a fool of myself by misrepresenting it—but, in a sense, that wouldn’t matter. Because this image, fueled by this explanation, energized my exploration into a new kind of theater in which (evoking a kind of negative theology) I tried, and still try, to pull an audience into the void rather than feeding it what it already feels about the “real” world and wants confirmed.

  Shortly thereafter (this was all in the 1950s), I encountered the unjustly neglected philosopher Ortega y Gasset and was sent spinning by his explanation that a human being is not a “whole persona” (in a world where the mantra was to become “well rounded”) but, as he famously put it, “I am myself and my circumstances”—that is, a split creature.

  And what Ortegean circumstantial setting set me up to be seduced by the Dirac-ian explanation? It had something to do with growing up in privileged Scarsdale, hating it but hiding the fact that I felt out of place and awkward by becoming a successful high-school achiever. Dirac’s (for me) powerful poetic metaphor let me imagine that the unreachable source (the sea of negative energy) was the real ground on which we all secretly stood—and to take courage from the fact that the world surrounding me was blind to the deeper reality of things and that my alienation was in some sense justified.

  THE ORIGINS OF BIOLOGICAL ELECTRICITY

  JARED DIAMOND

  Professor of geography, University of California–Los Angeles; author, Collapse: How Societies Choose to Fail or Succeed

  My favorite deep, elegant, and beautiful explanation is the solution to the problem of the biological generation of electricity by animals and plants, provided by the British physiologists Alan Hodgkin and Andrew Huxley in 1952, for which they received the Nobel Prize in physiology or medicine in 1963.

  It had been known for over a century that nerves, muscles, and some other organs of animals and a few plants generate electricity. Most of that electricity is at low voltages of just a fraction of a volt. However, electric eels arrange 6,000 muscle membranes in series and thereby generate 600 volts, enough to kill their prey, shock horses wading rivers, and shock me when I was studying eel electricity generation as a graduate student and got so focused on thinking about physiological mechanisms that I forgot their consequences.

  Electricity involves the movement of charged particles. In our light bulbs and electric grids, those charged particles are negatively charged electrons. What are they in biological systems? Already over a century ago, the German physiologist Julius Bernstein speculated that the charged particles whose motion was responsible for biological electricity were not electrons but positively charged ions.

  Hodgkin and Huxley started the decisive experiments in the late 1930s. They expected to find that the voltage across a resting nerve membrane went transiently to zero during an electrical impulse, due to a loss of selective permeability to the positively charged potassium ion. To their surprise, they found that the nerve voltage didn’t just go to zero, and the nerve membrane didn’t just become indiscriminately permeable: The voltage actually reversed in sign, requiring something special. But then Hitler invaded Poland, and Hodgkin and Huxley spent the next six years using their understanding of electricity to build radar sets for the British military.

  In 1945, they resumed their experiments, using giant nerves that had been discovered in the backs of squid and that were big enough to insert an electrode into for measuring the voltage across the nerve membrane. They confirmed their tantalizing prewar discovery that the nerve voltage really did reverse in sign and that that reversal got transmitted along a nerve to constitute an electrical impulse. In a series of experiments that define the word “elegance,” they then artificially clamped the voltage across the nerve membrane at various levels, measured the electric currents going in and out of the membrane as a function of time at each level after the voltage clamp, translated those current measurements into changes of permeability to the positively charged potassium ion and then to the positively charged sodium ion as a function of voltage and time, and finally reconstructed the whole course of a nerve impulse from those time-dependent and voltage-dependent permeability changes. Today, physiology students do the necessary calculations to reconstruct an action potential in an afternoon on their desk computers. In 1952, before the era of modern computers, Andrew Huxley had to do the calculations much more laboriously, with a desk calculator: It took him about a month to calculate one nerve impulse.
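The afternoon exercise Diamond mentions can be sketched in a few dozen lines. Below is a minimal forward-Euler integration of the Hodgkin–Huxley equations, using the classic squid-axon parameters from the 1952 papers (in the modern voltage convention, with a resting potential near -65 mV); the stimulus current, step size, and run length are illustrative choices, not anything from the essay.

```python
import math

# Classic Hodgkin–Huxley squid-axon parameters (modern sign convention):
C = 1.0                               # membrane capacitance, µF/cm²
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm²
E_Na, E_K, E_L = 50.0, -77.0, -54.387 # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_stim=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane-voltage trace in mV."""
    V = -65.0
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    # Start the gates at their resting steady-state values:
    m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        I = I_stim if t >= 5.0 else 0.0   # current step begins at 5 ms
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        # Ionic currents from the time- and voltage-dependent permeabilities:
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I - I_Na - I_K - I_L) / C
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(V)
    return trace

trace = simulate()
print(f"peak voltage: {max(trace):.1f} mV")
```

Running it shows the sign reversal that startled Hodgkin and Huxley: the voltage does not merely collapse to zero but overshoots it at the peak of each impulse.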

  The four papers that Hodgkin and Huxley published in the British Journal of Physiology in 1952 were so overwhelming in their detailed unraveling of sodium and potassium movements, and their reconstruction of nerve impulses, that the scientific world became convinced almost immediately. Those permeability changes to positive ions (not to negative electrons) make it possible for nerves to convey electrical impulses, for muscles to convey impulses that activate contraction, for nerve/muscle junctions to convey impulses by which nerves activate muscles, for nerve/nerve junctions (so-called synapses) to convey impulses by which one nerve activates another nerve, for sense organs to produce impulses that translate light and sound and touch into electricity, and for nerves and our brains to function. That is, the operation of animal electricity unraveled by Hodgkin and Huxley is what makes it possible for us to read this page, to think about this page, to pick up this page, to call out in surprise, to reflect about Edge Questions, and to do everything else that involves motion and sensation and thought. The underlying principle—movement of positively charged particles—was simple, but God resided in the complex details and the elegant reconstruction.

  WHY THE GREEKS PAINTED RED PEOPLE ON BLACK POTS

  TIMOTHY TAYLOR

  Archaeologist, University of Bradford, UK; author, The Artificial Ape

  An explanation of something that seems not to need explaining is good. If it leads to further explanations of things that didn’t seem to need explaining, that’s better. If it makes a massive stink, as academic vested interests attempt to preserve the status quo in the face of far-reaching implications, it is one of the best. I have chosen Michael Vickers’s simple and immensely influential explanation of why the ancient Greeks painted little red figures on their pots.

  The “red-figure vase” is an icon of antiquity. The phrase is frequently seen on museum labels, and the question of why the figures were not white, yellow, purple, or black—other colors the Greeks could and did produce in pottery slips and glazes—does not seem important. Practically speaking, Greek pottery buyers could mix and match without fear of clashing styles, and the basic scheme allowed the potters to focus on their real passion: narrative storytelling. The black background and red silhouettes make complex scenes—mythological, martial, industrial, domestic, sporting, and ambitiously sexual—graphically crisp. Anyone can understand what is going on (for which reason museums often keep their straight, gay, lesbian, group, bestial, and olisbos [dildo-themed] stuff out of public view, in study collections).

  Vickers’s brilliance was to take an idea well known to the scholar Vitruvius in the first century B.C. and apply it in a fresh context. Vitruvius noted that many features of Greek temples that seemed merely decorative were a hangover from earlier practical considerations: Little rows of carefully masoned cubes and gaps just under the roof line were in fact a skeuomorph, or formal echo, of the beam ends and rafters that had projected at that point when the structures were made of wood. Michael argued that Greek pottery was skeuomorphic too, being the cheap substitute for aristocratic precious metal. He argued that the red figures on black imitated gilded figures on silver, while the shapes of the pots, with their sharp carinations and thin, straplike handles, so easily broken in clay, were direct translations of the silversmith’s craft.

  This still seems implausible to many. But to those of us, like myself, working in the wilds of Eastern European Iron Age archaeology, with its ostentatious barbarian grave mounds packed with precious-metal luxuries, it makes perfect sense. Ancient silver appears black on discovery, and the golden figuration is a strongly contrasting reddish-gold. Museums typically used to “conserve” such vessels, not realizing that (as we now know) the sulfidized burnish to the gold was deliberate and that no Greek would have been seen dead with shiny silver (a style choice of the hated Persians, who flaunted their access to the exotic lemons with which they cleaned it).

  For me, an enthusiast from the start, the killer moment was when Vickers photographed a set of lekythoi, elegant little cylindrical oil or perfume jars, laid down end to end in decreasing order in an elegant curve. He demonstrated thereby that no lekythos (the only type of major pottery with a white background, and black only for base and lid) had a diameter larger than the largest lamellar cylinder that could be obtained from an elephant tusk. These vessels, he explained, were skeuomorphs of silver-mounted ivory originals.

  The implications are not yet settled, but the reputation of ancient Greece as a philosophically oriented, art-for-art’s sake culture can now be contrasted with an image of a world where everyone wanted desperately to emulate the wealthy owners of slave-powered silver mines, with their fleets of trade galleys. In my view, the scale of the ancient economy in every dimension—slavery, trade, population levels, social stratification—has been systematically underestimated, and with it the impact of colonialism and emergent social complexity in prehistoric Eurasia.

  The irony for the modern art world is that the red-figure vases that change hands for vast sums today are not what the Greeks themselves actually valued. Indeed, it is now clear that the illusion that these intrinsically cheap antiquities were the real McCoy was deliberately fostered, through highly selective use of Greek texts by 19th-century auction houses intent on creating a market.

  LANGUAGE AS AN ADAPTIVE SYSTEM

  ANDY CLARK

  Philosopher, professor of logic and metaphysics, University of Edinburgh; author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension

  The iterated-learning explanation of structured language is one of those beautiful explanations that turns things on their head, exposing their workings and origins in a brand-new way. It suggests powerful alternatives to views that depict human brains as heavily adapted to the learning of humanlike languages. Instead, it depicts humanlike languages as heavily adapted to the shape of the learning devices housed in human brains.

  The core idea is that language is itself a kind of adaptive system that alters its forms and structures so as to become increasingly easily learnable by the host agents (us). This general idea first appears (as far as I am aware) in Terry Deacon’s 1997 book, The Symbolic Species. It has been pursued in depth by computationally minded linguists such as Simon Kirby, Morten Christiansen, and others. Much of that work involved computer simulations, but in 2008 Kirby et al. published a paper in Proceedings of the National Academy of Sciences augmenting those proofs in principle with a laboratory demonstration using human subjects.

  In these experiments, subjects were taught a simple artificial language made up of string/meaning pairs, and then tested on that language. Some of the test items were meanings that had featured in training, while others were new meanings. Then comes the trick. A “new generation” of subjects is then trained, using not the original items but the data from the previous generation. The language is thus forced through a kind of generational bottleneck, such that one generation’s choices (including errors and alterations) provide the next generation’s data. What the experimenters robustly found (echoing the earlier simulation results) was that languages subject to this kind of cumulative cultural evolution became increasingly easy to learn, exhibiting growing regularities of construction and inflection. This is because the languages alter and morph in ways that are a better and better fit with the basic biases of the subjects (the hosts). In other words, the languages adapt to become easier to learn by the kinds of agent that are there to learn them. They do this because learners’ expectations and biases affect both how well they recall actual training items and how they behave when presented with novel ones.

  Language thus behaves a bit like an organism adapting to an environmental niche.

  We are that niche.
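The generational bottleneck described above is easy to mimic in a toy simulation. The sketch below is not Kirby et al.’s actual design: it replaces strings with a single inflectional marker per meaning and gives each learner one crude bias, regularizing unseen meanings to the most frequently observed marker. The meaning-space size, marker set, and bottleneck width are arbitrary illustrative choices.

```python
import random

random.seed(0)

MEANINGS = list(range(12))      # a small meaning space
MARKERS = ["-a", "-b", "-c"]    # competing inflectional markers
BOTTLENECK = 5                  # pairs each new learner gets to observe

def learn(observed):
    """A learner memorizes what it saw; for unseen meanings it applies the
    marker it observed most often (a simple regularizing bias)."""
    counts = {}
    for _, marker in observed:
        counts[marker] = counts.get(marker, 0) + 1
    default = max(counts, key=counts.get)
    memory = dict(observed)
    return {m: memory.get(m, default) for m in MEANINGS}

def iterate(generations=30):
    # Generation 0: markers assigned at random (an irregular language).
    language = {m: random.choice(MARKERS) for m in MEANINGS}
    history = [language]
    for _ in range(generations):
        sample = random.sample(MEANINGS, BOTTLENECK)        # the bottleneck
        observed = [(m, language[m]) for m in sample]
        language = learn(observed)    # next generation's whole language
        history.append(language)
    return history

history = iterate()
first = len(set(history[0].values()))
last = len(set(history[-1].values()))
print(f"distinct markers: generation 0 = {first}, final = {last}")
```

Because a learner can only reproduce markers it actually observed, irregularity is filtered out generation by generation, and the language drifts toward a form the learners’ bias finds easy, the toy analogue of growing learnability.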

  THE MECHANISM OF MEDIOCRITY

  NICHOLAS G. CARR

  Journalist; author, The Shallows: What the Internet Is Doing to Our Brains

  In 1969, a Canadian-born educator named Laurence J. Peter pricked the maidenhead of American capitalism. “In a hierarchy,” he stated, “every employee tends to rise to his level of incompetence.” He called it the Peter Principle, and it appeared in a book of the same name. The little volume, not even 180 pages long, went on to become the year’s top seller, with some 200,000 copies going out bookstore doors. It’s not hard to see why. Not only did the Peter Principle confirm what everyone suspected—bosses are dolts—but it explained why this had to be so. When a person excels at a job, he gets promoted. And he keeps getting promoted until he attains a job that he’s not very good at. Then the promotions stop. He has found his level of incompetence. And there he stays, interminably.
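Peter’s mechanism is simple enough to simulate. The sketch below assumes, as the principle implicitly does, that competence at each rung is drawn independently, so skill at one job predicts nothing about the next; the number of levels, the competence threshold, and the head count are all arbitrary illustrative choices.

```python
import random

random.seed(1)

LEVELS = 6          # rungs in the hierarchy
THRESHOLD = 0.5     # competence needed to keep earning promotions

def career():
    """Follow one employee: promote while competent, stop at the first level
    where competence (drawn fresh per level) falls below the threshold."""
    for level in range(LEVELS):
        competence = random.random()   # competence at this level's job
        if competence < THRESHOLD:
            return level, competence   # stuck: the level of incompetence
    return LEVELS - 1, competence      # reached the top while still competent

employees = [career() for _ in range(10_000)]
stuck_incompetent = sum(1 for _, c in employees if c < THRESHOLD)
share = stuck_incompetent / len(employees)
print(f"share who end their career at a job they do badly: {share:.0%}")
```

With six rungs and even odds of competence at each, only one career in sixty-four survives every promotion, so roughly 98 percent of employees end up, as Peter predicted, in a job they do badly.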

  The Peter Principle was a hook with many barbs. It didn’t just expose the dunderhead in the corner office. It took the centerpiece of the American dream—the desire to climb the ladder of success—and revealed it to be a recipe for mass mediocrity. Enterprise was an elaborate ruse, a vector through which the incompetent made their affliction universal. But there was more. The principle had, as a New York Times reviewer put it, “cosmic implications.” It wasn’t long before scientists developed the “Generalized Peter Principle,” which went thus: “In evolution, systems tend to develop up to the limit of their adaptive competence.” Everything progresses to the point at which it founders. The shape of existence is the shape of failure.

  The most memorable explanations strike us as alarmingly obvious. They take commonplace observations—things we’ve all experienced—and tease the hidden truth out of them. Most of us go through life bumping into trees. It takes a great explainer, like Laurence J. Peter, to tell us we’re in a forest.

  THE PRINCIPLE OF EMPIRICISM, OR SEE FOR YOURSELF

  MICHAEL SHERMER

  Publisher, Skeptic magazine; monthly columnist, Scientific American; author, The Believing Brain

  Empiricism is the deepest and broadest principle for explaining the widest range of phenomena in both the natural and social worlds. Empiricism is the principle that says we should see for ourselves instead of trusting the authority of others. Empiricism is the foundation of science, as the motto of the Royal Society of London—the first scientific institution—notes: Nullius in Verba—take nobody’s word for it.

  Galileo took nobody’s word for it. According to Aristotelian cosmology, the Catholic Church’s final and indisputable authority of Truth on matters heavenly, all objects in space must be perfectly round and perfectly smooth, and revolve around Earth in perfectly circular orbits. Yet when Galileo looked for himself through his tiny tube with a refracting lens on one end and an enlarging eyepiece on the other, he saw mountains on the moon, spots on the sun, phases of Venus, moons orbiting Jupiter, and a strange object around Saturn. Galileo’s eminent colleague at the University of Padua, the philosopher Cesare Cremonini, was so committed to Aristotelian cosmology that he refused even to look through the tube, proclaiming: “I don’t believe that anyone but he saw them, and besides, looking through glasses would make me dizzy.” Those who did look through Galileo’s tube could not believe their eyes—literally. One of Galileo’s colleagues reported that the instrument worked for terrestrial viewing but not celestial, because “I tested this instrument of Galileo’s in a thousand ways, both on things here below and on those above. Below, it works wonderfully; in the sky it deceives one.”* A professor of mathematics at the Collegio Romano was convinced that Galileo had put the four moons of Jupiter inside the tube. Galileo was apoplectic: “As I wished to show the satellites of Jupiter to the professors in Florence, they would neither see them nor the telescope. These people believe there is no truth to seek in nature, but only in the comparison of texts.”*

  By looking for themselves, Galileo, Kepler, Newton, and others launched the Scientific Revolution, which in the Enlightenment led scholars to apply the principle of empiricism to the social as well as the natural world. The great political philosopher Thomas Hobbes, for example, fancied himself as the Galileo and William Harvey of society: “Galileus . . . was the first that opened to us the gate of natural philosophy universal, which is the knowledge of the nature of motion. . . . The science of man’s body, the most profitable part of natural science, was first discovered with admirable sagacity by our countryman, Doctor Harvey. . . . Natural philosophy is therefore but young; but civil philosophy is yet much younger, as being no older . . . than my own de Cive.”*

  From the Scientific Revolution through the Enlightenment, the principle of empiricism slowly but ineluctably replaced superstition, dogmatism, and religious authority. Instead of divining truth through the authority of an ancient holy book or philosophical treatise, people began to explore the book of nature for themselves.

  Instead of looking at illustrations in illuminated botanical books, scholars went out into nature to see what was actually growing out of the ground.

  Instead of relying on the woodcuts of dissected bodies in old medical texts, physicians opened bodies themselves to see with their own eyes what was there.

  Instead of burning witches after considering the spectral evidence as outlined in the Malleus Maleficarum—the authoritative book of witch-hunting—jurists began to consider other forms of more reliable evidence before convicting someone of a crime.

 
