
The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Great Discoveries)


by David Leavitt


  Turing’s repudiation of scientific induction, however, is more than just a dig at the insularity and closed-mindedness of England. His purpose is actually much larger: to call attention to the infinite regress into which we are likely to fall if we attempt to use disabilities (such as, say, the inability, on the part of a man, to feel attraction to a woman) as determining factors in defining intelligence. Nor is the question of homosexuality far from Turing’s mind, as the refinement that he offers in the next paragraph attests:

  There are, however, special remarks to be made about many of the disabilities that have been mentioned. The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.

  To the brew of gender and sexuality, then, race is added, as “strawberries and cream” (earlier bookended between the ability to fall in love and the ability to make someone fall in love) becomes a code word for tastes that Turing prefers not to name. In many ways the passage recalls the rather campy bathhouse scene in the 1960 film Spartacus, in which a dialogue about other “delicious dishes” encodes a subtle erotic bargaining between Crassus (Laurence Olivier) and his slave Antoninus (Tony Curtis).

  Crassus: Do you eat oysters?

  Antoninus: When I have them, master.

  Crassus: Do you eat snails?

  Antoninus: No, master.

  Crassus: Do you consider the eating of oysters to be moral, and the eating of snails to be immoral?

  Antoninus: No, master.

  Crassus: Of course not. It’s all a matter of taste.

  Antoninus: Yes, master.

  Crassus: And taste is not the same as appetite and therefore not a question of morals, is it?

  Antoninus: It could be argued so, master.

  Crassus: Um, that’ll do. My robe, Antoninus. Ah, my taste . . . includes both oysters and snails.

  In this exchange Crassus, too, is engaging in a kind of imitation game, the purpose of which is to assess whether it would or would not be a good idea to offer Antoninus (who prefers oysters) some of his snails. Antoninus, at the same time, recognizes the advantage, at least on occasion, of giving the “wrong” answer (“No, master”)—just as a machine would have to if it were to have a chance of winning the game:

  The claim that “machines cannot make mistakes” seems a curious one. . . . I think this criticism can be explained in terms of the imitation game. It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.

  “Errors of functioning,” then, must be kept distinct from “errors of conclusion.” Nor should it be assumed that machines are not capable of deception. On the contrary, the criticism “that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity.”
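
  A reader who wants to see this tactic in action can picture it as a few lines of code. The sketch below is purely illustrative; the function name, the error rate, and the menu of plausible slips are my own assumptions rather than anything in Turing’s paper. It answers addition problems the way his game-playing machine might, usually correctly but with an occasional convincing mistake.

```python
import random

def answer_addition(a: int, b: int, error_rate: float = 0.25) -> int:
    """Answer an addition problem the way Turing's imagined player might:
    usually correctly, sometimes with a small, human-looking slip
    'calculated to confuse the interrogator'."""
    result = a + b
    if random.random() < error_rate:
        # Introduce a believable error (a dropped carry, an off-by-one),
        # not a wildly implausible one.
        result += random.choice([-100, -10, -1, 1, 10, 100])
    return result

# 34957 + 70764 is the sum posed in the specimen dialogue of Turing's 1950
# paper, where the machine pauses and then offers a wrong answer.
print([answer_addition(34957, 70764) for _ in range(5)])
```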

  Turing wraps up his catalog of possible objections to the thinking machine with four rather curious examples. The first, which he calls “Lady Lovelace’s Objection” (in reference to Byron’s daughter and Babbage’s muse), is that computers are incapable of “originating” anything. Instead (and here Turing quotes Lady Lovelace), “a computer can do whatever we know how to order it to perform.” But, as Turing points out, in actual practice, machines surprise human beings all the time. Turing then rebuts the “Argument from Continuity in the Nervous System”—although it is true that a discrete-state machine cannot mimic the behavior of the nervous system, “if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference”—and assesses the “Argument from Informality of Behavior”: “If each man had a definite set of rules of conduct by which he regulated life he would be no better than a machine. But there are no such rules, so men cannot be machines.” This objection Turing answers, first, by distinguishing “rules of conduct” from the “laws of behavior” by which machines are presumably regulated, then by pointing out that “we cannot so easily convince ourselves of the absence of complete laws of behaviour as of complete rules of conduct.” By way of example, he describes another experiment:

  I have set up on the Manchester computer a small programme using only 1000 units of storage, whereby the machine supplied with one sixteen figure number replies with another within two seconds. I would defy anyone to learn from these replies sufficient about the programme to be able to predict any replies to untried values.
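
  The flavour of that challenge is easy to reproduce today. The sketch below is not Turing’s thousand-unit Manchester routine, whose inner workings the paper does not reveal, but a stand-in of my own devising: a deterministic scrambler whose replies to sixteen-figure numbers would be just as hard to predict from a handful of examples.

```python
import hashlib

def reply(n: int) -> int:
    """Given a sixteen-figure number, return another sixteen-figure number.
    The rule is deterministic, yet an observer could not hope to infer it
    from a few question-and-answer pairs, which is the point of the challenge."""
    digest = hashlib.sha256(str(n).encode()).hexdigest()
    return int(digest, 16) % 10**16

for query in (1234567890123456, 9999999999999999, 3141592653589793):
    print(query, "->", reply(query))
```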

  The last—and most peculiar—objection that Turing takes on is the argument “from extra-sensory perception,” which he prefaces with a surprisingly credulous description of telepathy, clairvoyance, precognition, and psychokinesis. Of these he remarks, “Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in.” Without giving a source for this “overwhelming” evidence, Turing goes on to give the “strong” argument from ESP against a machine’s winning the imitation game:

  Let us play the imitation game, using as witnesses a man who is good as a telepathic receiver, and a digital computer. The interrogator can ask such questions as “What suit does the card in my right hand belong to?” The man by telepathy or clairvoyance gives the right answer 130 times out of 400 cards. The machine can only guess at random, and perhaps gets 104 right, so the interrogator makes the right identification.

  For Turing, the scenario as described opens up the “interesting possibility” of equipping the digital computer in question with a random number generator.

  Then it will be natural to use this to decide what answer to give. But then the random number generator will be subject to the psycho-kinetic powers of the interrogator. Perhaps this psycho-kinesis might cause the machine to guess right more often than would be expected on a probability calculation, so that the interrogator might still be unable to make the right identification. On the other hand, he might be able to guess right without any questioning, by clairvoyance. With E. S. P. anything may happen.

  Rather than offering a refutation of this argument, Turing says only that perhaps the best solution would be to put the competitors into a “telepathy-proof room”—whatever that means. One wonders what the editors of that august scientific publication Mind made of this bizarre appeal to a pseudoscience as baseless, if not as pernicious, as the one on the altar of which Turing would soon be laid out, as a kind of experiment. For how could they know that years before Turing had loved a boy named Christopher Morcom, with whose spirit he had been determined to remain connected even after death?*
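
  Turing’s figures, incidentally, are easy to check. With four suits a random guesser averages one hit in four, so the expected score over 400 cards is 100. Under a simple binomial model, which is my assumption rather than anything spelled out in the paper, 104 correct is entirely unremarkable, while 130 would be an astonishing run of luck.

```python
from math import comb

def chance_of_at_least(hits: int, trials: int = 400, p: float = 0.25) -> float:
    """Probability of scoring `hits` or better by guessing one suit in four."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

print(400 * 0.25)                         # expected hits by pure chance: 100.0
print(round(chance_of_at_least(104), 3))  # roughly one chance in three
print(chance_of_at_least(130))            # well under one chance in a thousand
```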

  “Computing Machinery and Intelligence” concludes with a meditation on teaching and learning that reiterates much of the technique prescribed in “Intelligent Machinery.” Here, however, Turing adds the proviso that his system of punishments and rewards does not “presuppose any feelings on the part of the machine.” Moving a bit away from the rigorously behaviorist ethos that animated “Intelligent Machinery,” he also reminds his readers that “the use of punishments and rewards can at best be a part of the teaching process. . . . By the time a child has learnt to repeat ‘Casabianca’ he would probably feel very sore indeed, if the text could only be discovered by a ‘Twenty Questions’ technique, every ‘NO’ taking the form of a blow.” Less emotional techniques need to be employed as well, especially when the objective is to teach the machine to obey orders in a symbolic language.

  Probably the biggest shift from “Intelligent Machinery,” however, is that here Turing elects to anthropomorphize his child-machine to a much greater degree than in the earlier paper, putting more emphasis on its childishness than on its machinishness. For example, near the end of the paper, he asks, “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? . . . Presumably the child-brain is something like a note-book as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets.” But because this notebook mind is contained within a machine body, a slightly different teaching process has to be applied to it than would be to the “normal” child:

  It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it. It must be given some tuition. We need not be too concerned about the legs, eyes, etc. The example of Miss Helen Keller shows that education can take place provided that communication in both directions between teacher and pupil can take place by some means or other.

  One thinks of Turing as a boy, “watching the daisies grow.” Does he feel some sense of identification with Helen Keller, provided (as Turing was not) with an education to suit her particular disabilities? Certainly

  the imperatives that can be obeyed by a machine that has no limbs are bound to be of a rather intellectual character. . . . For at each stage when one is using a logical system, there is a very large number of alternative steps, any of which one is permitted to apply, so far as obedience to the rules of the logical system is concerned. These choices make the difference between a brilliant and a footling reasoner, not the difference between a sound and a fallacious one.

  And the ability to reason is, finally, the ultimate evidence of intelligence. If it is to be attained, however, flexibility is essential, even if “the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.”

  In the end, Turing believes, the goal should be to do exactly what alarms Jefferson: to construct machines that “will eventually compete with men in all purely intellectual fields.” Perhaps the best way to start would be to teach the machine some “very abstract activity,” such as how to play chess; or perhaps it would make more sense to provide it with “the best sense organs that money can buy, and then teach it to understand and speak English.” In either case, the final note that Turing sounds in “Computing Machinery and Intelligence” combines triumph with a certain detached self-assurance. For Turing, thinking machines are inevitable, whether we like them or not. It is as if his faith in future tolerance had once again bolstered him against the very real threat of present injustice.

  4.

  The years Turing spent working with the Manchester computer were marked by an increasing isolation from other people, as he became less and less interested in the computer itself and more and more involved in the experiments he was using it for. Not that he only did experiments: he also wrote a programmer’s handbook in which he urged potential users of the Manchester machine to employ an almost literary sensibility in designing programs. Most of his time, though, he devoted to the application of the machine to such pure mathematical problems as constructing a new proof for the word problem for semigroups (a major achievement), and to working with permutation theory, which had played an important role in his code breaking at Bletchley. His colleague Christopher Strachey also taught the machine to sing “God Save the King.”

  Probably the experiment that meant the most to Turing, however, was the one with which he had the least success. For years he had remained fascinated by the Riemann hypothesis, which for some reason he had convinced himself had to be false. True, the machine he had tried to build with Donald MacPhail at Cambridge had ended up on the scrap heap. Yet he had never forgotten his ambition of beating Titchmarsh’s record for the calculation of zeros, and still hoped he might one day be able to find a zero off the critical line. Toward that end, in 1943 he had published a paper entitled “A Method for the Calculation of the Zeta-function” in the Proceedings of the London Mathematical Society. Titchmarsh, using hand methods, had shown that all the zeros up to t = 1,468 were on the critical line. Now Turing put his own method to the test. In 1953 he designed a program by means of which the Manchester computer could calculate zeta zeros using its complex base-32 code, and with it he verified that the Riemann hypothesis holds as far as t = 1,540—72 more zeros than Titchmarsh had found—before the machine broke down.

  It was, as Turing ruefully noted, “a negligible advance.”
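
  For the curious, the modern descendant of that calculation fits in a few lines. The sketch below leans on the mpmath library’s Riemann-Siegel Z-function rather than anything resembling Turing’s base-32 Manchester code; it counts the sign changes of Z(t) along the critical line up to t = 1,540, the height at which his machine gave out, each sign change marking a zero of the zeta function on the line.

```python
# Count zeros of zeta(1/2 + it) up to t = 1540 by watching the real-valued
# Riemann-Siegel function Z(t) change sign. A coarse grid like this can miss
# pairs of very close zeros; Turing's own paper included a check that none
# had been skipped.
from mpmath import siegelz

def count_zeros_on_line(t_max: float = 1540.0, step: float = 0.25) -> int:
    count = 0
    t = step
    previous = siegelz(t)
    while t < t_max:
        t += step
        current = siegelz(t)
        if previous * current < 0:  # a sign change brackets a zero
            count += 1
        previous = current
    return count

print(count_zeros_on_line())  # on the order of 1,100 zeros lie below t = 1540
```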

  * * *

  *Raphael Robinson later proved that 2^521 – 1 was, in fact, prime.

  *Turing might have been thinking of Trethowan when at the end of his 1951 Manchester lecture he remarked, “There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

  *For an interesting analysis of skin imagery—of which there is a lot—in Turing’s paper, see Jean Lassègue, “What Kind of Turing Test Did Turing Have in Mind?” http://tekhnema.free.fr/3Lasseguearticle.htm.

  *Turing used similar language during a 1952 BBC roundtable discussion, in which, as an example of the sort of question to use in the imitation game, he proposed the following: “I put it to you that you are only pretending to be a man.” In such a case, “the machine would be permitted all sorts of tricks so as to appear more manlike. . . .”

  *In Maurice the hero asks Alec, “Scudder, why do you think it’s ‘natural’ to care both for men and women? You wrote so in your letter. It isn’t natural for me. I have really got to think that ‘natural’ only means oneself.”

  *By 1952, when he was interviewed on the BBC, the estimate had gone up to at least a hundred years.

  *See Lassègue, “What Kind of Turing Test Did Turing Have in Mind?,” for an interesting discussion of the role Christopher Morcom might have played—even subliminally—in the paper.

  8

  Pryce’s Buoy

  1.

  In the spring of 1951 Alan Turing was elected a member of the Royal Society. Among the congratulatory notes he received was one from his old antagonist Sir Geoffrey Jefferson, who wrote, “I am so glad; and I sincerely trust that all your valves are glowing with satisfaction, and signalling messages that seem to you to mean pleasure and pride! (but don’t be deceived!).”

  As it happened, Turing and Jefferson were destined to tangle once more. The occasion was a roundtable discussion of machine intelligence broadcast on the BBC Third Programme on January 14, 1952, in which the other participants were Max Newman and Turing’s old Cambridge friend Richard Braithwaite, one of the two mathematicians who had long ago asked for offprints of “Computable Numbers.” Braithwaite acted as moderator, and while the conversation did little to advance the cause of the thinking machine, at the very least it gave the speakers a chance to refine and clarify some of their positions. As always, Jefferson insisted that it was the “high emotional content of mental processes in the human being that makes him quite different from a machine,” while Newman—ever the pragmatist—struggled valiantly to keep the focus on what existing machines could actually do, as opposed to what future machines might do. Much airtime was devoted to gratuitous speculation; Braithwaite wanted to know whether it would be legitimate to ask the machine what it had had for breakfast, while Jefferson insisted that in creating a model of actual thinking, “the intervention of extraneous factors, like the worries of having to make one’s living, or pay one’s taxes, or get food that one likes” could not be “missed out.” Newman, in turn, emphasized pure mathematics and quoted the sculptor Henry Moore as saying, “When the work is more than an exercise, inexplicable jumps occur. This is where the imagination comes in.”

  As for Turing, his answers to the questions put to him were tinged with weariness, perhaps at having to defend his ideas for the thousandth time against the same objections. Once again, he outlined the mathematics of machine learning. Once again, he asserted that intelligence was not the same thing as infallibility. Once again, he reminded his listeners that sometimes “a computing machine does something rather weird that we hadn’t expected.” He was not able to get much support from Braithwaite, who had a habit of punctuating the discussion with inane observations that only made Turing’s job harder; at one point, for instance, Braithwaite wondered whether it would be “necessary for the machine to be able to throw fits or tantrums”—a rather stupefying query that Turing elided by saying that he would be “more interested in curbing such displays than in encouraging them.”

  Not surprisingly, Jefferson got the last word. A remark to the effect that he would not be willing to believe a computing machine could think “until he saw it touch the leg of a lady computing machine” was cut from the broadcast; still, he brought things to a jaunty enough conclusion to diminish Turing’s seriousness, averring, “That old slow coach, man, is the one with the ideas—or so I think. It would be fun some day, Turing, to listen to a discussion, say on the Fourth Programme, between two machines on why human beings think that they think!” The transcript does not say whether or not Turing laughed in reply.

 
