If, like the Second Front, the plans had been delayed again and again, no loss of confidence was betrayed when Alan gave a talk on 20 February to the London Mathematical Society. This was when he elaborated in detail* on the imagined operation of the ACE, and spoke as if its realisation was almost a formality: before long the terminals would be humming with activity, and the programmers would be busy with the work of converting the nation’s problems into logical instructions.
His talk, however, dwelt rather more upon the dream that lay behind the practicalities of an installation, bringing out in a public form the ideas that he had long been developing in Bletchley conversations. In fact his discussion opened with the picture of ‘masters’ and ‘servants’ who would attend the ACE, very much as the high-level cryptanalysts and the ‘girls’ had worked to decipher naval Enigma. The masters would attend to its logical programming, and the servants to its physical operation. But, he said, ‘as time goes on the calculator itself will take over the functions both of masters and of servants. The servants will be replaced by mechanical and electrical limbs and sense organs. One might for instance provide curve followers to enable data to be taken direct from curves instead of having girls read off values and punch them on cards.’ This was not a new idea, for F. C. Williams had built just such a thing for the old Manchester differential analyser. But the novelty lay in suggesting that:
The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well chosen gibberish, whenever any dangerous suggestions were made. I think that a reaction of this kind is a very real danger. This topic naturally leads to the question as to how far it is in principle possible for a computing machine to simulate human activities.
This was a more controversial claim. Hartree, for instance, writing to The Times in November, had repeated his statement in Nature that ‘use of the machine is no substitute for the thought of organising the computations, only for the labour of carrying them out.’ Darwin had written more expansively that
In popular language the word ‘brain’ is associated with the higher realms of the intellect, but in fact a very great part of the brain is an unconscious automatic machine producing precise and sometimes very complicated reactions to stimuli. This is the only part of the brain we may aspire to imitate. The new machines will in no way replace thought, but rather they will increase the need for it …
To describe such careful and responsible statements as ‘gibberish’ was not the most tactful policy.
Darwin and Hartree were, in fact, echoing the comment by Ada, Countess of Lovelace, who wrote an account38 of Babbage’s planned Analytical Engine in 1842, and claimed that ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ At one level, this assertion certainly had to be urged against the very naive view that a machine doing long and elaborate sums could be called clever for so doing. As the first writer of programs for a universal machine, Lady Lovelace knew that the cleverness lay in her own head. Alan Turing would not have disputed this point, as far as it went. The manager who took all decisions from the rule book would hardly be ‘intelligent’, or really taking a decision. It would be the writer of the rule book who was determining what happened. But he held that there was no reason in principle why the machine should not take over the work of the ‘master’ who programmed it, to a point where, according to the imitation principle, it could be called intelligent or original.
What he had in mind went much further than the development of languages which would take over the detailed work of the ‘masters’ in compiling instruction tables. He mentioned this future development, which in the ACE report he had already explored a little, quite briefly:
Actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machines were given instruction tables which would enable it to interpret that logical system. This should mean that there will be much more practical scope for logical systems than there has been in the past. Some attempts will probably be made to get the machines to do actual manipulations of mathematical formulae. To do so will require the development of a special logical system for the purpose. This system should resemble normal mathematical procedure closely, but at the same time should be as unambiguous as possible.
Rather, in speaking of a computer ‘simulating human activities’, he had in mind the simulation of learning, in such a way that after a point the machine would not merely be doing ‘whatever we know how to order it to perform,’ as Lady Lovelace had claimed, for no one would know how it was working:
It has been said that computing machines can only carry out the purposes that they are instructed to do. This is certainly true in the sense that if they do something other than what they were instructed then they have just made some mistake. It is also true that the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands in principle what is going on all the time. Up till the present machines have only been used in this way. But is it necessary that they should always be used in such a manner? Let us suppose we have set up a machine with certain initial instruction tables, so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time, the instructions would have altered out of recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations.
It was in this passage that he drew first attention to the richness inherent in a stored-program universal machine. He was well aware that, strictly speaking, exploitation of the ability to modify the instructions could not enlarge the scope of the machine, later writing:39
How can the rules of a machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant…. The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.
But with that strictly logical reservation, he held the process of changing instructions to be significantly close to that of human learning, and deserving of emphasis. He imagined the progress of the machine altering its own instructions, as like that of a ‘pupil’ learning from a ‘master’. (It was a typically quick shift to the ‘states of mind’ idea of the machine from the ‘instruction note’ view.) A learning machine, he went on to explain:
might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence. As soon as one can provide a reasonably large memory capacity it should be possible to begin to experiment on these lines. The memory capacity of the human brain is of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress with a few million digits, especially if one confined one’s investigation to some rather limited field such as the game of chess.
The ACE, as planned, would have at most 200,000 digits in store, so to speak of a ‘few million’ was looking well into the future. He described the storage planned for the ACE as ‘comparable with the memory capacity of a minnow’. But even so, he perceived the development of ‘learning’ programs as something that would be feasible within a short period: not merely a hypothetical possibility, but affecting current research in a practical way. On 20 November 1946 he had replied to an enquiry from W. Ross Ashby, a neurologist eager to make progress with mechanical models of cerebral function, in the following terms:40
The ACE will be used, as you suggest, in the first instance in an entirely disciplined manner, similar to the action of the lower centres, although the reflexes will be extremely complicated. The disciplined action carries with it the disagreeable feature, which you mentioned, that it will be entirely uncritical when anything goes wrong. It will also be necessarily devoid of anything that could be called originality. There is, however, no reason why the machine should always be used in such a manner: there is nothing in its construction which obliges us to do so. It would be quite possible for the machine to try out variations of behaviour and accept or reject them in the manner you describe and I have been hoping to make the machine do this. This is possible because, without altering the design of the machine itself, it can, in theory at any rate, be used as a model of any other machine, by making it remember a suitable set of instructions. The ACE is in fact, analogous to the ‘universal machine’ described in my paper on computable numbers. This theoretical possibility is attainable in practice, in all reasonable cases, at worst at the expense of operating slightly slower than a machine specially designed for the purpose in question. Thus, although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model, within the ACE, in which this possibility was allowed for, but in which the actual construction of the ACE did not alter, but only the remembered data, describing the mode of behaviour applicable at any time. I feel that you would be well advised to take advantage of this principle, and do your experiments on the ACE, instead of building a special machine.
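Turing’s point to Ashby turns on the interpreter idea: the ACE’s fixed construction can model any other machine simply by holding a description of that machine as remembered data. A minimal sketch of the principle in a modern idiom might run as follows; the function name, the table encoding and the toy example are invented for illustration, and nothing here is drawn from Turing’s own designs:

```python
# Illustrative sketch only (not from the book): a fixed "universal"
# interpreter whose behaviour is determined entirely by remembered
# data -- a transition table -- rather than by its own construction.

def run_model(table, state, tape, steps):
    """Step a finite-state machine described purely by data.

    table maps (state, symbol) -> (new_state, new_symbol, move).
    'Changing the machine' means changing the table; the interpreter
    itself -- the analogue of the ACE's fixed hardware -- never changes.
    """
    pos = 0
    for _ in range(steps):
        symbol = tape.get(pos, 0)
        if (state, symbol) not in table:
            break  # no applicable rule: the modelled machine halts
        state, tape[pos], move = table[(state, symbol)]
        pos += move
    return state, tape

# A toy "remembered machine" that writes alternating 1s and 0s
# rightwards along the tape.
alternator = {
    ("a", 0): ("b", 1, +1),
    ("b", 0): ("a", 0, +1),
}
print(run_model(alternator, "a", {}, 6))
```

Swapping in a different table models a different machine, at the cost only of the slight slowdown of interpretation that Turing’s letter concedes.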
Enlarging in his talk upon the favourite example of chess-playing, Alan claimed that
It would probably be quite easy to find instruction tables which would enable the ACE to win against an average player. Indeed Shannon of Bell Telephone Laboratories tells me that he has won games playing by rule of thumb; the skill of his opponents is not stated.
This was probably a misunderstanding. Shannon had been thinking about mechanising chess-playing, since about 1945, by a minimax strategy requiring the ‘backing up’ of search trees – the same basic idea as Alan and Jack Good had formalised in 1941. But he had not claimed to have produced a winning program. In any case, however, Alan
would not consider such a victory very significant. What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this, but this of course does not get us very far.
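The ‘backing up’ of search trees that Shannon, Turing and Good had in mind is what is now called minimax: each position is scored by assuming both players choose the line best for themselves, with values passed back from the leaves. A minimal sketch under that reading follows; the function and the toy counter game are invented stand-ins, not anything from Shannon’s or Turing’s actual schemes:

```python
# Illustrative sketch only: minimax with values "backed up" a game
# tree. The move generator and evaluation are supplied by the caller.

def minimax(position, depth, maximising, moves, apply_move, evaluate):
    """Score a position by backing up values from the leaves.

    At each node the player to move takes the child value best for
    them: the maximum for one side, the minimum for the other.
    """
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)  # static value at the search horizon
    values = (
        minimax(apply_move(position, m), depth - 1,
                not maximising, moves, apply_move, evaluate)
        for m in legal
    )
    return max(values) if maximising else min(values)

# Toy usage: a "game" in which a move adds 1 or 2 to a counter and
# the static value is the counter itself.
best = minimax(0, 3, True,
               moves=lambda p: [1, 2],
               apply_move=lambda p, m: p + m,
               evaluate=lambda p: p)
print(best)  # 5: max adds 2, min adds 1, max adds 2
```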
Alan next turned a little aside from this central idea in order to consider the objection to the idea of machine ‘intelligence’ that was raised by the existence of problems insoluble by a mechanical process – by the discovery of Computable Numbers, in fact. In the ‘ordinal logics’ he had invested the business of seeing the truth of an unprovable assertion with the psychological significance of ‘intuition’. But this was not the view that he put forward now. Indeed, his comments verged on saying that such problems were irrelevant to the question of ‘intelligence’. He did not probe far into the significance of Gödel’s theorem and his own result, but instead cut the Gordian knot:
I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
This was very true. Gödel’s theorem and his own result were concerned with the machine as a sort of papal authority, infallible rather than intelligent. But his real point lay in the imitation principle, couched in traditional British terms of ‘fair play for machines’, when it came to ‘testing their IQ’, a point which brought him back to the idea of mechanical learning by experience:
A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge. Why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the opponent will automatically provide this contact.
At the end of this talk there was a moment of stunned incredulity, during which his audience looked round with disbelief. This was probably much to Alan’s delight. He knew perfectly well that he was upsetting the conventional armistice between science and religion, and it was all the more grist to his mill. He had thought it all out since reading Eddington while in the sixth form, and he was not now going to toe this official line that separated the ‘unconscious automatic machine’ from the ‘higher realms of the intellect’. There was no such line – that was his thesis.
At heart it was the same problem of mind and matter that Eddington had tried to rescue for the side of the angels by invoking the Heisenberg Uncertainty Principle. But there was a difference. Eddington had addressed himself to the determinism of physical law, in order to deal with the kind of Victorian scientific world-picture that Samuel Butler had parodied in Erewhon:
If it be urged that the action of the potato is chemical and mechanical only, and that it is due to the chemical and mechanical effects of light and heat, the answer would seem to lie in an enquiry whether every sensation is not chemical and mechanical in its operation? … Whether there be not a molecular action of thought, whence a dynamical theory of the passions shall be deducible? Whether strictly speaking we should not ask what kinds of levers a man is made of rather than what is his temperament? How are they balanced? How much of such and such will it take to weigh them down so as to make him do so and so?
It was a picture drawn from nineteenth-century physics, chemistry and biology. But the Turing challenge was on a different level of deterministic description, that of the abstract logical machine, as he had defined it. There was another difference. Victorians like Butler, Shaw and Carpenter had concerned themselves with identifying a soul, a spirit or life force. Alan Turing was talking about ‘intelligence’.
Alan did not define what he meant by this word, but the chess-playing paradigm, to which he constantly returned, would make it the faculty of working out how to achieve some goal, and the reference to IQ tests would indicate some measurable kind of performance of this skill. Coming from Bletchley, this kind of ‘intelligence’ was of burning and obvious significance. Intelligence had won the war. They had solved countless chess problems, and had beaten the Germans at the game. And more broadly, for his scientific generation, life had been a battle for ‘intelligence’, fought against stupid out-of-date schools, a stupid economic system, and stupid Blimps from ‘a profession for fools’ during the war – not to mention the Nazis, who had elevated stupidity into a religion. There was the influence of a Webbsian vision of socialism, in which society was going to be administered by intelligent functionaries of the state, as in the near future of Back to Methuselah, and in 1947 there was much talk about IQ tests, since the British youth was supposedly being newly divided into scientifically defined categories according to ‘intelligence’ rather than by class. Oscar Wilde had written of The Soul of Man under Socialism, but under the socialism of Attlee and Bevin words like ‘soul’ – supernatural or ‘soupy’ words as Bertrand Russell called them – could be left to bishops and pep talks about team spirit.
While many people might have reservations about the wisdom and beneficence of scientists, they were at last basking in the favour of government. The war had converted government to an interest in science, and a view once visionary, then progressive, was becoming orthodox. The scientists had emerged from the miserly corners in which they had done their despised ‘stinks’ before, and it seemed that their swords could be turned into ploughshares, or more precisely, that they would supply governments with scientific solutions to their problems. On one level Alan Turing belonged to this climate of opinion, and certainly rejected the idea that scientists, rather than generals and politicians, were to blame for the world’s current imperfections. Mermagen from Sherborne days, now a master at Radley, another public school, wrote to Alan at this time for advice about the place of mathematics and science in the post-war world, and Alan replied:41
On the subject of careers for mathematicians I am strongly inclined to think that the effect of ACE, guided projectiles, etc, etc, will be towards a considerably greater demand for mathematicians from a certain level upwards. For instance I am in need of a number who will be required to convert problems into a form which can be understood by the machine. The critical level may be described roughly as the degree of intelligence shown by the machine. We obviously do not want people who can take no responsibility at all. We just make the machine do the work which might have been given to them. At present of course this critical level is very low and I am sure you need not be afraid of encouraging boys that are keen and want to take up a mathematical career. The worst danger is probably an anti-scientific reaction (Scientists instead of goats at Bikini etc) but this is a digression.*