Electric Universe

by David Bodanis


  None of it worked: the RAF planes were now unopposed. High explosives were dropped first, to pierce the water mains (a later count showed they had been broken in over two thousand places) and to break open the houses below. Bricks were shattered and fragments blew apart. Then the main bomb doors opened, releasing the chemical incendiaries.

  Much of Hamburg was built of wood, and wood is made when the miniature photovoltaic units we call leaves take ordinary, separate carbon atoms and hook them together in long chains. It takes months or even years’ worth of light energy pouring down from the sun to hook up carbon atoms that way.

  When the RAF bombs shattered the clusters in the wood, each carbon atom was on its own. By itself, that would have meant a great deal of rubble and dust, and many people hurt from the collapsing wood, but the damage would then have been over. Yet it didn’t end there, because the explosives the RAF had deposited sent out immense amounts of heat.

  The heat hurtled along the Hamburg streets, transforming everything in its path. It soaked into dust flecks in the air till they exploded, and it heated the carbon in Hamburg’s wooden homes so much that they reacted with oxygen and exploded into flames as well. The energy that the sun had poured into that wood over the long years when it had grown in forests now reappeared, in a sudden horrifying burst.

  We can’t see the electric waves of radar, but in the fury of the burning buildings, the electric waves coming out were shorter, more intense. When they struck human eyes, retinal cells sent signals to the brain.

  In the Hamburg firestorm, Faraday’s invisible waves were turned into light.

  Fires began, and the flames joined together, and then the whole city ignited. People tried to escape, but how? One fifteen-year-old girl remembered: “Mother wrapped me in wet sheets, kissed me, and said: ‘Run!’ I hesitated at the door…but then I ran out to the street….I never saw her again.”

  An older girl, nineteen, joined a group trying to get across the large boulevard of the Eiffestrasse, but at the last moment she realized she had to stop. The heat of the fires was making the street melt:

  “There were people on the roadway…alive but stuck on the asphalt. They must have rushed on to the roadway without thinking. Their feet had got stuck and they had put out their hands to get out again. They were on their hands and knees, screaming.”

  Up above, a captain in one of the Pathfinder squadrons—at age twenty-seven, older than almost every other pilot—looked down at the storm that had been a living city. “Those poor bastards,” he muttered over the radio. He pushed his hands on the steering control, and the great plane began to turn. Insulated wires in his cockpit guided electrons along copper strands as indications of the wing’s dipping appeared on his cockpit displays; Faraday’s waves flooded in through his thick glass windshield, some broadcast invisibly from the thousands of fluttering aluminum strips in the air; others broadcast visibly, painfully so, from the glaring flames. A final glance, and his Lancaster bomber swung away. The single night’s attack was over—but the bombing would continue on and off for another two years.

  The attack was shattering, yet all that horror and struggle—the Chain Homes defenses, as well as the Hamburg-destroying war machines—only skims the surface of what electrical effects can produce. For there’s yet another level that goes beyond the image of powerful waiting electric charges, beyond even the invisible, space-crossing waves that can force those charges to move. The vision that Maxwell had of atoms was incomplete.

  In the 1910s and 1920s—even before Watson Watt found himself in Slough—a small number of theorists had begun exploring this new, submicroscopic world. If they were right, then the world beneath our own is composed of electrons that travel in abrupt teleporting jumps—known as “quantum” jumps—and also in sudden stops and starts.

  This would change everything, for electrons are central to electricity, and whenever we discover something new about them, the groundwork is laid for a fresh technology. In late Victorian times, the vision of electrons as hard little balls led to the technology of telephones, lightbulbs, and electric motors. Faraday’s and Hertz’s understanding of waves had led to radio and radar, which were so central to World War II. Now the realization that electrons could dematerialize—that they could in effect pop through space and be made to start and stop in fresh places—would open the way for yet another device, a thinking machine, which would shape our era as much as electric lighting and telephones had shaped the nineteenth century.

  In the 1920s the English word computer (and its cognates in other languages) still meant a person, usually female, who spent laborious hours at a desk, using a mechanical calculator, or even old-fashioned pencil and paper, to compute whatever dull arithmetical task was assigned. It seemed impossible to go further, for if any genuine thinking machine were to match the quick flicks of human thought, it would need to shift its internal circuitry far faster than anyone could then imagine. No solid, mechanical object could do that.

  But perhaps the wildly teleporting flights of tiny electrons could.

  PART IV

  A COMPUTER BUILT OF ROCK

  The metals that were mined for the planet’s war machines carried electrons that could leap instantly from one adjacent atom to another, never appearing in the space in between. But in other substances those leaps did not always so easily take place.

  When those clusters of atoms came together, in common rocks and crystals scattered across the planet’s surface, the electrons powerfully blocked one another’s flights. Newly arrived electrons might try to move faster—to increase their energy levels—but an exclusion zone seemed to operate, keeping them at bay. The electrons in these common rocks and clays were slowed; they were almost stopped.

  Humans had spent a century transforming their civilization by making fast electrons serve their purposes. Now the twentieth century’s bloodiest war had ended. The power of slow electrons was still to be unleashed.

  9

  Turing

  CAMBRIDGE, 1936, AND BLETCHLEY PARK, 1942

  There had been some efforts to build a computer in 1820s England, but the prevailing technology of steam engines and ball bearings and metal cogs was too crude ever to make it work. The failure was not just in technology but in imagination. Even a full century later, in the 1920s, there were many ingenious machines in the world—there were locomotives, and assembly lines, and telephones, and airplanes. But each did only one thing. Everyone accepted the idea that to get a different task done, you needed to build a different machine.

  Everyone was wrong. Alan Turing was the man who first showed in persuasive detail how it would be possible to change that. His life ended in tragedy, for although he conceived a perfect, clearly describable computer, and although the new insights about how electrons can leap or seemingly stop might have allowed him to construct it, the technology remained elusive. New ideas in science don’t automatically produce new machines. He would be lauded in death—but not while he lived.

  As a boy, in the 1910s and early 1920s, Alan Turing loved the way he could think his way out of problems. He had trouble distinguishing right and left, so he dabbed a red dot on his left thumb, and then was proud that he could get around as well as other children his age. Soon he could outnavigate both children and adults. At a picnic in Scotland, to get his father’s approval for being suitably brave and adventurous, he found wild honey for the family by drawing the vector lines along which nearby honeybees were flying, and charting their intersection to find the hive.

  But as an adolescent and then a young adult, he found it harder and harder to blend in. By the time he was sixteen he realized that he was physically attracted to men, which was bad enough, but he also realized he was without question an intellectual, and in 1920s England, especially at its private schools, that was even worse.

  His father was far enough away, serving in India with the Civil Service, not to have to pay much attention, but his mother, who was from a proper upper-middle-class background, would have none of it. Alan was a normal boy, she insisted, who would one day learn to control his strange musings on beauty, consciousness, and, above all, on science. She was sure he would also—as he seems to have dutifully suggested in his letters from prep school—quite soon bring back for a visit one of those pretty girls he hoped to meet at nice London parties.

  Instead, by age seventeen, he’d fallen in love with an older boy at his school, Christopher Morcom. They built telescopes and peered out of their dormitory windows late at night. They read physics books together, and talked about stars, mortality, quantum mechanics, and free will. In their discussions, they “usually didn’t agree,” Alan happily wrote, “which made things much more interesting.”

  But then, just a few months after they met, Morcom died of tuberculosis. Turing had been reserved with his mother until then, but now opened his heart: He and Morcom had always felt there was “some work for us to do together,” he wrote, “…[Now] I am left to do it alone.” But what was that work? Many people question their faith after someone they love dies, but adolescent deaths are raw, immensely so: the survivor experiences the intensity of adult emotions, yet can’t place what happened in familiar cycles of life. A hole is ripped in the universe.

  Turing seems to have lost whatever religious faith he once had. He angrily dropped the usual Edwardian belief that only the body is lost in death, and that an immortal soul, not made of any earthly substance, lives on. Morcom was gone. People who tried to comfort him by saying his friend somehow survived were liars.

  That anger, that belief in cold materialism, was indispensable for the great electrical device that Turing imagined just a few years later. It’s hard to conceive of creating an artificial device that duplicates human thinking, if you believe in an immortal soul. The perishable stuff that the computer has to be made of—the wires or electrons or whatever—will lack all semblance of that soul. But if you’re sure, with all the anger of adolescence, that nothing but dead earth is what remains when we die, then cold wires will do just as well as any living being.

  For several years Turing acted the part of a contented Cambridge undergraduate, but he often turned to the Morcom family when he was stressed, either through visits or heartfelt letters to Christopher’s mother. Eventually, to his Cambridge friends’ puzzlement, he began repeatedly quoting a line from Disney’s new Snow White movie, about the poisoned apple and how quickly biting it could produce eternal rest.

  Early in the summer of 1935, when Turing was twenty-two, he came across the problem that triggered his major work. It dated back to the turn of that century, when, at a lecture hall in Paris, on a hot August day, the great German mathematician David Hilbert had read out loud what he considered the most important mathematical problems for the twentieth century. A follow-up of one of the hardest—still unsolved when Turing heard of it—dealt with a deep problem in logic, asking how certain long chains of reasoning could be carried out. Most researchers assumed the answer would come with an abstract mathematical proof. Turing, however, had always liked to tinker: he was excellent at building radios, fixing bicycles, and putting together metal contraptions of almost any sort. Now, as he lay in a meadow in a small town near Cambridge after one of his long, solitary afternoon runs, he imagined an actual machine that could crank through the steps of Hilbert’s logical problem.

  In the next few months, Turing showed that this imaginary machine could resolve the questions Hilbert posed about how to prove the truth or falsity of any abstract statement. The machine would need electricity, of course, perhaps in a form not yet imagined, but that didn’t preoccupy Turing now. Instead, he wondered what else such a perfect machine could do. This took longer, for he realized that, in theory, a machine that clicked through these strings of logic could quite likely do almost anything.

  All that the machine’s operator would have to do was write down, very clearly, the instructions he wanted it to follow. The machine wouldn’t have to understand what those instructions meant; it would simply have to execute them. Turing proved that almost any action he could imagine—adding numbers or drawing a picture—could be translated into simple logical steps that a machine could follow.

  If a critic protested that the machine wasn’t as powerful as Turing proposed, and named some other tasks that it couldn’t perform, Turing would simply have the critic break the tasks down into discrete steps, and describe the steps using this same clear, logical language. Then Turing could give those instructions to the machine and it would chug along, faithfully carrying them out—thus showing that the critic had been wrong. We today are so used to machines carrying out sequences of instructions—we automatically assume that a computer or cell phone will follow our tapped-in commands—that it’s hard to remember a time before it was accepted. But when Turing was a student hardly anyone imagined that inert machinery could accomplish such intelligent work.
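The instruction-following machine described above can be sketched in a few lines of modern code. This is a minimal illustration, not anything from Turing’s 1937 paper: the rule table, state names, and the bit-flipping example program are all invented here for demonstration. The point it makes is the one in the text — the machine never understands the task; it only reads a symbol, consults its table of discrete steps, and carries out whatever the table says.

```python
def run(program, tape, state="start", head=0, max_steps=1000):
    """Follow a table of simple instructions until no rule applies.

    program maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L". The machine blindly executes each step.
    """
    cells = dict(enumerate(tape))          # sparse, effectively unbounded tape
    for _ in range(max_steps):
        symbol = cells.get(head, "_")      # blank cells read as "_"
        if (state, symbol) not in program:
            break                          # no matching rule: the machine halts
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # read back whatever was written, left to right
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A hypothetical example program: flip every bit of a binary string,
# one cell at a time, moving right until a blank is reached.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run(flip, "1011"))  # -> 0100
```

Any critic’s task, once broken into discrete steps of this form, becomes just another rule table handed to the same unchanged machine — which is exactly the universality the text describes.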

  It was a stunning intellectual achievement, but also a lonesome one. This “Universal Machine,” which Turing described in his 1937 paper for the Proceedings of the London Mathematical Society, was self-contained and entirely without emotion. If it was fed the right instructions, then from that point on, it could operate on its own—forever.

  The machine wouldn’t even need an operator to reach inside and change it when the tasks it faced changed. For Turing had also begun developing the concept of software. He realized his machine wouldn’t be useful if it had to be rebuilt each time it was given a new problem to work on. Instead he imagined that the inner parts of the machine could simply be rearranged as needed. This software might seem to be a part of the solid substance of the computer, but it would really be constantly shifting around, configuring itself in one restless way, then another.

  This is where electricity comes in. Turing’s imagined computer couldn’t simply have a lot of wires locked into one particular configuration. For when we think, we’re comparing and combining lots of different sensations and thoughts; we perform a huge number of rearrangements with them, and we do it very quickly. If Turing’s computer were to match the human mind, it too would need lots of switches that could rearrange themselves just as quickly. The switches would have to be so small and work so fast that miniature metal cogs and gears—the stuff of conventional adding machines—wouldn’t suffice.

  Telephone companies had reached the limits of simple metal switches a long time before. Their first improved switches consisted of sturdy young men who physically took out plugs from one line that ran up through a hole in a large board and pushed them into another line. (Since that board was where all the switching took place, the term switchboard was soon born.) When supervisors found that the men swore too much (and also that they were easily roused to join unions), the males were replaced by more genteel females; when Bell System administrators in the late 1890s found that even great offices filled with such women were overburdened, the first semi-electrical switches came into use.

  Those switches used very thin metal wires that acted like miniature swiveling drawbridges. Electrons poured up to the mouth of the drawbridge, and if the bridge was in position, the electrons would hurry across the gap. But if the drawbridge wire had been raised or had swiveled to one side, the electrons skidded to a halt, or just fell aimlessly into the gap, and the signal they carried didn’t get through.

  Unfortunately, even the most advanced telephone switches of the 1930s were far too big for Turing’s purposes. The electric thinking machine Turing envisaged would be sorting and arranging so many different “thoughts” that it would need thousands, perhaps millions, of simultaneously operating switches. That many metal wires, however slender, wouldn’t fit.

  What he needed, of course, were the new insights in physics about “teleporting” electrons and the rest of what was termed quantum mechanics. The new theories suggested that electrons could do the switching without having to be guided along slowly swiveling wires. Instead, if the right quantum rules could ever be applied, electrons could be made to jump and switch position even within solid, unmoving matter.

  That was the dream, and Turing had studied enough physics to know of the new research in quantum mechanics. Many of the founders of that field were working around him in Cambridge. But, like other engineers and mathematicians, he believed that quantum effects were perpetually hidden away from us, limited to a submicroscopic realm too small ever to be of use. It seems, at this time, he never seriously considered that quantum switches could be his answer.

  He still missed Morcom, but knew he needed a new kindred spirit, someone to share his increasingly intense thoughts about consciousness and machines—perhaps even about what it meant if we were just flitting arrangements of software, yoked to Yeats’s dying animal. Because of the professional fame from his Hilbert papers, Turing was invited to spend some time at Princeton, where the equally brilliant Professor John von Neumann taught. At first von Neumann seemed to be the intellectual companion Turing sought. But von Neumann had always been a great chameleon. In Hungary, where he’d been raised, his first name had been Janos, then in Göttingen he’d happily become Johann, and now in Princeton he was good old Johnny. He’d become hyper-Americanized, and insisted on a social life that was dominated by loud cocktail parties. Turing seems to have gone to a few of these parties, but he could never reach von Neumann emotionally, and returned to England after only a few semesters.

  During World War II, Turing was hired as part of the British government’s codebreaking group at Bletchley Park, in the south of England. There were academics there, exulting at being freed from the stuffiness of the old Oxbridge colleges, and there were crossword experts and naval officers and a director of research for the John Lewis department stores (who was also a chess champion), and in this mix Turing came alive. He’d always liked using his wits for practical goals—at Cambridge he’d casually taught himself to tell the time by looking at the stars—and at Bletchley, around the impromptu cricket pitches and the red-brick huts on the manor house’s grounds, he got his chance. He was assigned to the team working on cracking the Enigma machines, which much of the German army and navy used to encode their messages. Within a few weeks Turing had helped devise new techniques for codebreaking, and within two months he was head of the unit breaking all German naval codes.
