
The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Great Discoveries)


by David Leavitt


  *Keynes may here be alluding to Hardy’s observation that it “would seem, on Russell’s theory, that if you can judge that Edward is the father of George, you should be equally capable of judging that Edward is the father of blue.”

  *In Infinity and the Mind, Rudy Rucker describes a visit to the famous Bocca della Verità in Rome, a Roman drain cover carved to resemble a face with an open mouth; according to legend, anyone who puts his hand into the mouth and tells a lie will not be able to pull it out again. Rucker put his hand into the mouth and said, “I will not be able to pull my hand back out again.”

  *In “Boolean algebra,” + and - give way to operations on propositions. A major breakthrough in computer design came with the realization that switches, which have only two states, on and off, could be made to correspond to the two values at the heart of binary arithmetic and Boolean logic: 0 and 1, true and false.

  *It is probably worth noting here that Frege, to quote the philosopher Michael Dummett, was “a virulent racist, specifically an anti-semite . . . a man of extreme right-wing opinions, bitterly opposed to the parliamentary system, democrats, liberals, Catholics, the French, and, above all, Jews, who he thought ought to be deprived of political rights and, preferably, expelled from Germany.”

  † Cardinal numbers are the nouns of mathematics (1, 2, 3, . . .), while ordinal numbers are the adjectives (1st, 2nd, 3rd, . . .).

  *John Kemeny described Principia Mathematica as “a masterpiece discussed by practically every philosopher and read by practically none.”

  *To this remark G. H. Hardy added wryly, “The worst that can happen is that we shall have to be a little more particular about our clothes.”

  *The fundamental theorem of arithmetic states that any positive integer greater than 1 can be represented in exactly one way, apart from the order of the factors, as a product of prime numbers.

  *Goldbach’s conjecture—still unproven as of this writing—was the subject not just of a 1992 novel (Uncle Petros and Goldbach’s Conjecture, by Apostolos Doxiadis) but of a marketing campaign on the part of the novel’s English and American publishers, Faber & Faber and Bloomsbury USA, respectively, that offered $1,000,000 to anyone who could do what the novel’s Uncle Petros could not, and construct a proof. It was not a particularly risky gamble for the publishers to take, and no one won.

  3

  The Universal Machine

  1.

  Although it was Hilbert who first called for a solution to the Entscheidungsproblem, the decision problem itself dates back to the thirteenth century, when the medieval thinker Raimundus Lullus (1232–1316) conceived of a general problem-solving method that he called ars magna. Leibniz expanded on the work of Lullus, both by calling for the establishment of a symbolic language (the characteristica universalis) with which to carry out the problem solving and by drawing a distinction “between two different versions of ars magna. The first version, ars inveniendi, finds all true scientific statements. The other, ars iudicandi, allows one to decide whether any given scientific statement is true or not.” The decision problem, as Hilbert expressed it, falls under the rubric of ars iudicandi and “can be sharpened to a yes/no question: Does there exist an algorithm that decides the validity of any given first-order formula?”*

  Before we continue, an aside about the word “algorithm.” It has an interesting history. The American Heritage Dictionary defines an algorithm as a “step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.” The word itself is derived from the name of the ninth-century Persian mathematician Muhammad ibn Mûsâ al-Khowârizmi, who around 825 AD wrote an important mathematics text, Kitab al-jabr wa’l-muqabala. (The word “algebra” is a derivation of al-jabr.) As an example of an algorithm, Roger Penrose cites Euclid’s algorithm, a method for finding the highest common factor of two numbers. It works like this. Pick two numbers at random—let’s say 4,782 and 1,365. What is the largest single whole number that divides into both numbers without leaving a remainder? To find this out, we first divide the larger of the two numbers by the smaller:

  4,782 ÷ 1,365 = 3 with a remainder of 687

  We now divide the smaller of the two numbers—1,365—by the remainder, 687:

  1,365 ÷ 687 = 1 with a remainder of 678

  Continuing by the same method, we get the following:

  687 ÷ 678 = 1 with a remainder of 9

  678 ÷ 9 = 75 with a remainder of 3

  9 ÷ 3 = 3 with a remainder of 0

  Therefore 3 is the highest common factor of the two numbers.
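
  The procedure just traced can be captured in a few lines of code. What follows is a minimal sketch in Python, chosen here purely for illustration; the function name is arbitrary.

```python
def highest_common_factor(a, b):
    """Euclid's algorithm: divide, keep the remainder, and repeat
    until the remainder is zero; the last nonzero divisor is the answer."""
    while b != 0:
        a, b = b, a % b   # (4782, 1365) -> (1365, 687) -> (687, 678) -> (678, 9) -> (9, 3) -> (3, 0)
    return a

print(highest_common_factor(4782, 1365))  # prints 3, matching the steps above
```

  Each pass of the loop corresponds to one of the division lines worked out above, which is also why the procedure is guaranteed to stop: the remainder shrinks at every step and must eventually reach zero.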

  Of course, some algorithms are vastly more complex than this one, just as some are much simpler. Adding up figures by hand, for instance, requires the use of a simple algorithm. So does determining whether a number is prime. For each algorithm there are infinitely many possible inputs, since any of an infinity of numbers can be fed in. What is crucial is that the algorithmic procedure is systematic: that is to say, the procedure is guaranteed to arrive at an answer in a finite period of time, and within a finite number of steps. In a sense, the Entscheidungsproblem might be described as the quest for a sort of ur-algorithm, one by means of which the validity or provability of any argument can be determined. This was a tall order, as Hilbert himself acknowledged; indeed, he called it “the main problem of mathematical logic.”
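
  To make the point about primality concrete, here is another minimal sketch, again in Python and again only illustrative: a trial-division test. The cutoff at the square root is a standard economy, not something the text specifies, but it does not change the logic.

```python
def is_prime(n):
    """Trial division: try every possible divisor up to the square root of n.
    The loop finishes after finitely many steps, which is exactly the
    'systematic' property the text insists on."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(97))   # True
print(is_prime(687))  # False, since 687 = 3 x 229
```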

  It was in his Grundzüge der theoretischen Logik, co-written with Wilhelm Ackermann and published in 1928, that Hilbert stated his own version of the Entscheidungsproblem. Here a chapter entitled “The Decision Problem” begins, “From the considerations of the preceding section, there emerges the fundamental importance of determining whether or not a given formula of the predicate calculus is universally valid.”* Take Goldbach’s Conjecture: was there an algorithm to determine whether it follows from a particular given set of axioms written in the language of first-order logic? “There is of course no such theorem,” the ever-skeptical Hardy wrote, “and this is very fortunate, since if there were we should have a mechanical set of rules for the solution of all mathematical problems, and our activities as mathematicians would come to an end.”* Certainly a positive result would have done much to counteract the dispiriting effect (for some) of Gödel’s paper, since in principle that result would amount to a fulfillment of Leibniz’s idealistic notion of a calculus ratiocinator. Such a result, moreover, was not considered unthinkable. Writing in 1931, the mathematician Jacques Herbrand (1908–1931) noted that “although at present it seems unlikely that the decision problem can be solved, it has not yet been proved that it is impossible to do so.” Such a result might even allow mathematicians to put Gödel’s results aside as a kind of logical aberration along the lines of the liar’s paradox. One senses a division that is almost political, with one group viewing as a positive achievement what another feared might bring on the collapse of mathematical endeavor itself.

  Turing was probably in neither group. His isolation (not to mention his homosexuality) disinclined him to overidentify with larger collectives. Notably he avoided, during the politically turbulent years he spent at Cambridge, acquiring a political affiliation, despite his fervent (and pragmatic) opposition to war. Along the same lines, he looked upon the Entscheidungsproblem as simply a question that required resolution. Perhaps because he did not come at the problem hoping for either a positive or a negative result, he was able to attack it in an entirely new way.

  He was first exposed to the Entscheidungsproblem in 1934, when he took Professor M. H. A. “Max” Newman’s course on the foundations of mathematics. Newman (1897–1984) was an avatar of the branch of mathematics called topology, which deals with the formalization of such concepts as connectedness, convergence, and continuity; that is, with the properties of geometric figures that are preserved when they are stretched without tearing. At the heart of topology is set theory, which led in turn to Hilbert, and to the questions that Hilbert had posed at the 1928 conference in Bologna. Although Gödel’s 1931 paper had established that the axiomatic system embodied in PM was incomplete and could not prove its own consistency, the Entscheidungsproblem, which Newman characterized as a matter of finding a “mechanical process” for testing the validity of an assertion, remained unresolved. In a memoir written after Turing’s death, Newman summed up the situation at the point when Turing elected to take on Hilbert’s final challenge:

  The Hilbert decision-programme of the 1920’s and 30’s had for its objective the discovery of a general process, applicable to any mathematical theorem expressed in fully symbolical form, for deciding the truth or falsehood of the theorem. A first blow was dealt at the prospects of finding this new philosopher’s stone by Gödel’s incompleteness theorem (1931), which made it clear that the truth or falsehood of A could not be equated to provability of A or not-A in any finitely based logic, chosen once for all; but there still remained in principle the possibility of finding a mechanical process for deciding whether A, or not-A, or neither, was formally provable in a given system. Many were convinced that no such process was possible, but Turing set out to demonstrate the impossibility rigorously.

  The summer after he was elected a fellow of King’s, Turing took to running long distances in and around Cambridge. His friend Robin Gandy later wrote, “I remember Turing telling me that the ‘main idea for the paper’ came to him when he was lying in the grass in Grantchester meadows.” Gandy speculated that by then Turing had “already conceived of some form of Turing machine, and that what he meant by ‘the main idea’ was the realisation that there could be a universal machine and that this could permit a diagonal argument.” Sometime later Turing shared this idea with his friend David Champernowne. He did not mention it to Newman, to whom he presented a finished typescript in April 1936. Just as inspiration—the rare thrill of seeing a way through—had come to him in solitude, it was in solitude that he undertook the labor of constructing and writing up the proof.

  What he produced was remarkable. Earlier, Hardy had dismissed those naïve enough to assume that mathematicians made their discoveries by turning the handle of some “miraculous machine.” Remember, however, that Turing was famously literal-minded. When Newman, in his lecture, described Hilbert’s “definite method” as a “mechanical process,” he planted an idea in Turing’s head the future repercussions of which would be immense. The word “mechanical,” in its original sense, had referred to manual occupation, to work performed by human beings. By the 1930s, however, mechanical meant gears, rotors, vacuum tubes. It meant a machine. Turing took both definitions to heart.

  In the 1930s, when he began his work on the Entscheidungsproblem, the word “computer” also had a meaning different from the one it has today: it meant simply a person who did computations—that is to say, a person engaged in the active use of algorithms. Computation in the 1930s required long hours of human labor, during which the computer might be aided by such tools as an abacus or even an adding machine but was nonetheless required to do the work herself.* No computational machines existed, and though the eccentric genius Charles Babbage had in the nineteenth century conceived of and designed one, his “analytical engine” was never built. Babbage’s invention foreshadowed Turing’s “universal machine” in that it would in principle have been capable of any mathematical calculation. It differed in that Babbage failed to make the crucial conceptual breakthrough of recognizing that the instructions could be written in the same mathematical language as the procedure to which they applied. Instead, he envisioned an essentially industrial device the basis for which was a machine designed to weave the rich patterns of brocade, with the instructions encoded on punch cards. Once again, in the case of Babbage, the milieu of computer science brushed up against that of literature, since one of his champions was Ada, the countess of Lovelace, the daughter of Lord Byron. Indeed, of Babbage’s engine, Ada Byron wrote, “We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”†

  According to Gandy, Turing was unaware of Babbage’s planned machine when he undertook his work on the Entscheidungsproblem. Nonetheless, he shared with Babbage an approach that reflected the essentially industrial ethos of the England in which he had grown up. Technology, for Turing, meant factories teeming with human labor—a milieu not unlike the one in which Sidney Stratton makes his discoveries in The Man in the White Suit. The machine of which he conceived bore a closer resemblance to a knitting or packing machine than to an iPod, though with the advent of electronics, this too would change.

  Turing presented his results in a paper modestly entitled “On Computable Numbers, with an Application to the Entscheidungsproblem.” He finished the first draft in April of 1936, and the paper was published in early 1937 in Proceedings of the London Mathematical Society. It divides into roughly three sections: the first defines the idea of the “computable number” and of the “computing machine”; the second posits the concept of a “universal machine”; and the third employs these concepts to prove that the Entscheidungsproblem is insoluble. Like most of Turing’s papers, “Computable Numbers” is marked by a curious blend of humbly phrased, somewhat philosophical speculation and highly technical mathematics. The result, for the general reader, is disconcerting, since invariably those passages the import of which is easy to grasp segue immediately into dense bogs of unfamiliar symbols, German and Greek letters, and binary numbers. Yet what is perhaps even more striking than the paper’s style is its utter lack of intellectual grandstanding. Indeed, one comes away from reading it with the distinct sense that Turing had no clue as to the importance of what he had just done.

  As is so often the case in mathematics, the central question that the paper addresses appears, on the surface at least, to be bewilderingly simple. “What,” Turing asks, “are the possible processes which can be carried out in computing a number?” Already he had defined computable numbers

  as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique.

  As Hodges observes, “it is characteristic of Turing that he refreshed Hilbert’s question by casting it in terms not of proofs, but of computing numbers. The reformulation staked a clearer claim to have found an idea central to mathematics.” At the same time, Turing wants to make sure that we remember (to quote Roger Penrose) that “the issue of computability is an important one generally in mathematics. . . . One can have Turing machines which operate directly on mathematical formulae, such as algebraic or trigonometric expressions, for example, or which carry through the formal manipulations of calculus.” Such machines are more technically complex versions of the number-oriented machine that Turing posits a few sentences later: “According to my definition, a number is computable if its decimal can be written down by a machine.”

  The significance of this statement should not be underestimated. To speak of a hypothetical computing “machine,” especially in a mathematics paper in the 1930s, was to break the rules of a fairly rigid orthodoxy. No such machines existed at the time, only calculating devices too crude to undertake any complex mathematics, and certainly not programmable. Yet Turing offers the sentence with no fanfare whatsoever, and then—just as quickly as he has brought up the important concept of the computing machine—he drops it, in order to give an outline of what the rest of the paper will contain. He returns to his machine only in the second paragraph of the next section, in which he compares “a man in the process of computing a real number to a machine which is only capable of a finite number of conditions.” These conditions he calls m-configurations.

  Turing now describes how the machine actually works. Running through it is a tape divided into squares, each of which can be marked with a symbol. At any moment, only one square can be “in the machine.” This square is the “scanned square,” while the symbol it bears is the “scanned symbol.” The scanned symbol “is the only one of which the machine is, so to speak, ‘directly aware.’ However, by altering its m-configuration, the machine can effectively remember some of the symbols which it has ‘seen’ (scanned) previously.” The machine’s behavior at any moment is determined by its m-configuration and by the scanned symbol, which, taken together, Turing defines as the machine’s configuration. Depending on its configuration, the machine will write a symbol in a blank square; erase a symbol already written there; move the tape one space to the left; or move the tape one space to the right. What determines how it will act is a “table of behavior” specifying the sequence of m-configurations according to which the machine can carry out its particular algorithm. “At any stage of the motion of the machine,” Turing continues, “the number of the scanned square, the complete sequence of all symbols on the tape, and the m-configuration will be said to describe the complete configuration at that stage. The changes of the machine and tape between successive complete configurations will be called the moves of the machine.” The distinction between the machine’s m-configuration, configuration, and complete configuration is worth taking note of, because it will become relevant as the argument progresses. Although such a machine is now commonly referred to as a “Turing machine,” Turing himself called it an “automatic machine” or “a-machine.”
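
  The description above translates readily into a simulation. The sketch below, in Python and purely illustrative (the state names, the dictionary encoding, and the decision to move the head rather than the tape are all my own choices), drives an a-machine from a table of behavior; the particular table, which prints 0 and 1 on alternate squares, is in the spirit of Turing’s own first example in the paper.

```python
# Table of behavior: maps (m-configuration, scanned symbol) to
# (symbol to write, head movement, next m-configuration).
TABLE = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}

def run(table, start, steps):
    tape = {}                 # blank tape, squares indexed by integers
    scanned_square = 0        # the one square "in the machine" at any moment
    m_config = start
    for _ in range(steps):
        scanned_symbol = tape.get(scanned_square, ' ')
        write, move, m_config = table[(m_config, scanned_symbol)]
        tape[scanned_square] = write
        scanned_square += {'R': 1, 'L': -1}[move]
    return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1))

print(run(TABLE, 'b', 20))    # prints "0 1 0 1 0 1 0 1 0 1 " (0s and 1s on alternate squares)
```

  At every step the pair of m-configuration and scanned symbol, which Turing calls the configuration, picks out one row of the table; the full record of tape, scanned square, and m-configuration is the complete configuration, and a run of the simulator is simply a sequence of Turing’s “moves.”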

  Before we look at an example of how a specific Turing machine does its job, it is worth reminding ourselves that when Turing wrote his paper, he was not, in fact, thinking of a machine that would or could ever be built. Nor did he share, at this stage, Babbage’s Rube Goldberg–like enthusiasm for cranks and gears.* The engineer in Turing would emerge later; when he wrote “Computable Numbers,” he intended his machine as a kind of literary device—the analogy, as it were, by means of which he could convey the central concept of the computable numbers most cleanly and economically. Analogy is an important tool in making mathematics comprehensible to the nonmathematician; Turing was unusual in that he built the analogy into his proof. By doing so, he distinguished himself from mathematicians who were following less elegant (or, as he might have put it, “more cumbrous”) avenues in their pursuit of the same idea.

 
