Actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machines were given instruction tables which would enable it to interpret that logical system. This should mean that there will be much more practical scope for logical systems than there has been in the past. Some attempts will probably be made to get the machines to do actual manipulations of mathematical formulae. To do so will require the development of a special logical system for the purpose. This system should resemble normal mathematical procedure closely, but at the same time should be as unambiguous as possible.
Rather, in speaking of a computer ‘simulating human activities’, he had in mind the simulation of learning, in such a way that after a point the machine would not merely be doing ‘whatever we know how to order it to perform,’ as Lady Lovelace had claimed, for no one would know how it was working:
It has been said that computing machines can only carry out the purposes that they are instructed to do. This is certainly true in the sense that if they do something other than what they were instructed then they have just made some mistake. It is also true that the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands in principle what is going on all the time. Up till the present machines have only been used in this way. But is it necessary that they should always be used in such a manner? Let us suppose we have set up a machine with certain initial instruction tables, so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time, the instructions would have altered out of recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations.
It was in this passage that he first drew attention to the richness inherent in a stored-program universal machine. He was well aware that, strictly speaking, exploitation of the ability to modify the instructions could not enlarge the scope of the machine, later writing:39
How can the rules of a machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. …The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.
But with that strictly logical reservation, he held the process of changing instructions to be significantly close to that of human learning, and deserving of emphasis. He imagined the progress of the machine altering its own instructions, as like that of a ‘pupil’ learning from a ‘master’. (It was a typically quick shift to the ‘states of mind’ idea of the machine from the ‘instruction note’ view.) A learning machine, he went on to explain:
might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence. As soon as one can provide a reasonably large memory capacity it should be possible to begin to experiment on these lines. The memory capacity of the human brain is of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress with a few million digits, especially if one confined one’s investigation to some rather limited field such as the game of chess.
The ACE, as planned, would have at most 200,000 digits in store, so to speak of a ‘few million’ was looking well into the future. He described the storage planned for the ACE as ‘comparable with the memory capacity of a minnow’. But even so, he perceived the development of ‘learning’ programs as something that would be feasible within a short period: not merely a hypothetical possibility, but affecting current research in a practical way. On 20 November 1946 he had replied to an enquiry from W. Ross Ashby, a neurologist eager to make progress with mechanical models of cerebral function, in the following terms:40
The ACE will be used, as you suggest, in the first instance in an entirely disciplined manner, similar to the action of the lower centres, although the reflexes will be extremely complicated. The disciplined action carries with it the disagreeable feature, which you mentioned, that it will be entirely uncritical when anything goes wrong. It will also be necessarily devoid of anything that could be called originality. There is, however, no reason why the machine should always be used in such a manner: there is nothing in its construction which obliges us to do so. It would be quite possible for the machine to try out variations of behaviour and accept or reject them in the manner you describe and I have been hoping to make the machine do this. This is possible because, without altering the design of the machine itself, it can, in theory at any rate, be used as a model of any other machine, by making it remember a suitable set of instructions. The ACE is in fact, analogous to the ‘universal machine’ described in my paper on computable numbers. This theoretical possibility is attainable in practice, in all reasonable cases, at worst at the expense of operating slightly slower than a machine specially designed for the purpose in question. Thus, although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model, within the ACE, in which this possibility was allowed for, but in which the actual construction of the ACE did not alter, but only the remembered data, describing the mode of behaviour applicable at any time. I feel that you would be well advised to take advantage of this principle, and do your experiments on the ACE, instead of building a special machine.
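The principle Turing urges on Ashby in this letter — that a fixed machine can model any other machine simply by remembering a suitable set of instructions — can be sketched as a small interpreter. The `(state, symbol)` table format below is a standard Turing-machine convention used purely for illustration; it is an invented toy, not ACE code.

```python
# A fixed interpreter whose behaviour is entirely determined by a remembered
# table of instructions: change the table and the same machine models a
# different machine, with no change to its own construction.

def run(table, tape, state, steps=1000):
    """Simulate the machine described by `table` on `tape` (a dict of cells)."""
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)
        # Each table entry maps (state, symbol) -> (write, move, next_state).
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
    return tape

# One remembered table: flip 0s and 1s until an end-marker (2) is read.
flipper = {
    ("scan", 0): (1, 1, "scan"),
    ("scan", 1): (0, 1, "scan"),
    ("scan", 2): (2, 0, "halt"),
}
print(run(flipper, {0: 1, 1: 0, 2: 1, 3: 2}, "scan"))  # {0: 0, 1: 1, 2: 0, 3: 2}
```

Only the remembered data (`flipper`) describes the mode of behaviour; `run` itself never alters — which is Turing's point about the brain model within the ACE.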
Enlarging in his talk upon the favourite example of chess-playing, Alan claimed that
It would probably be quite easy to find instruction tables which would enable the ACE to win against an average player. Indeed Shannon of Bell Telephone Laboratories tells me that he has won games playing by rule of thumb; the skill of his opponents is not stated.
This was probably a misunderstanding. Shannon had been thinking since about 1945 about mechanising chess-playing by a minimax strategy requiring the ‘backing up’ of search trees – the same basic idea as Alan and Jack Good had formalised in 1941. But he had not claimed to have produced a winning program. In any case, however, Alan
would not consider such a victory very significant. What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this, but this of course does not get us very far.
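The minimax strategy with its ‘backing up’ of search trees, which Turing and Good had formalised and Shannon was pursuing, can be illustrated in modern terms. The toy game tree and leaf scores below are invented for the purpose; they stand in for chess positions and their evaluations.

```python
# Minimax "backing up": values assigned to terminal positions are passed
# back up the tree, each player choosing the best reply available.

def minimax(node, maximising):
    """Back up a value from the leaves of a game tree.

    A node is either an int (an evaluated terminal position) or a list of
    child nodes (the positions reachable in one move).
    """
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

# A toy tree: the first player has three moves, each answered by two replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # 3 — the value the first player can guarantee
```

The ‘rule of thumb’ element in practice lay in the evaluation of the leaves, since no machine could search chess to its end; the backing-up itself is mechanical.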
Alan next turned a little aside from this central idea in order to consider the objection to the idea of machine ‘intelligence’ that was raised by the existence of problems insoluble by a mechanical process – by the discovery of Computable Numbers, in fact. In the ‘ordinal logics’ he had invested the business of seeing the truth of an unprovable assertion with the psychological significance of ‘intuition’. But this was not the view that he put forward now. Indeed, his comments verged on saying that such problems were irrelevant to the question of ‘intelligence’. He did not probe far into the significance of Gödel’s theorem and his own result, but instead cut the Gordian knot:
I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
This was very true. Gödel’s theorem and his own result were concerned with the machine as a sort of papal authority, infallible rather than intelligent. But his real point lay in the imitation principle, couched in traditional British terms of ‘fair play for machines’, when it came to ‘testing their IQ’, a point which brought him back to the idea of mechanical learning by experience:
A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge. Why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the opponent will automatically provide this contact.
At the end of this talk there was a moment of stunned incredulity, during which his audience looked round with disbelief. This was probably much to Alan’s delight. He knew perfectly well that he was upsetting the conventional armistice between science and religion, and it was all the more grist to his mill. He had thought it all out since reading Eddington while in the sixth form, and he was not now going to toe this official line that separated the ‘unconscious automatic machine’ from the ‘higher realms of the intellect’. There was no such line – that was his thesis.
At heart it was the same problem of mind and matter that Eddington had tried to rescue for the side of the angels by invoking the Heisenberg Uncertainty Principle. But there was a difference. Eddington had addressed himself to the determinism of physical law, in order to deal with the kind of Victorian scientific world-picture that Samuel Butler had parodied in Erewhon:
If it be urged that the action of the potato is chemical and mechanical only, and that it is due to the chemical and mechanical effects of light and heat, the answer would seem to lie in an enquiry whether every sensation is not chemical and mechanical in its operation? …Whether there be not a molecular action of thought, whence a dynamical theory of the passions shall be deducible? Whether strictly speaking we should not ask what kinds of levers a man is made of rather than what is his temperament? How are they balanced? How much of such and such will it take to weigh them down so as to make him do so and so?
It was a picture drawn from nineteenth century physics, chemistry and biology. But the Turing challenge was on a different level of deterministic description, that of the abstract logical machine, as he had defined it. There was another difference. Victorians like Butler, Shaw and Carpenter had concerned themselves with identifying a soul, a spirit or life force. Alan Turing was talking about ‘intelligence’.
Alan did not define what he meant by this word, but the chess-playing paradigm, to which he constantly returned, would make it the faculty of working out how to achieve some goal, and the reference to IQ tests would indicate some measurable kind of performance of this skill. Coming from Bletchley, this kind of ‘intelligence’ was of burning and obvious significance. Intelligence had won the war. They had solved countless chess problems, and had beaten the Germans at the game. And more broadly, for his scientific generation, life had been a battle for ‘intelligence’, fought against stupid out-of-date schools, a stupid economic system, and stupid Blimps from ‘a profession for fools’ during the war – not to mention the Nazis, who had elevated stupidity into a religion. There was the influence of a Webbsian vision of socialism, in which society was going to be administered by intelligent functionaries of the state, as in the near future of Back to Methuselah, and in 1947 there was much talk about IQ tests, since the British youth was supposedly being newly divided into scientifically defined categories according to ‘intelligence’ rather than by class. Oscar Wilde had written of The Soul of Man under Socialism, but under the socialism of Attlee and Bevin words like ‘soul’ – supernatural or ‘soupy’ words as Bertrand Russell called them – could be left to bishops and pep talks about team spirit.
While many people might have reservations about the wisdom and beneficence of scientists, they were at last basking in the favour of government. The war had converted government to an interest in science, and a view once visionary, then progressive, was becoming orthodox. The scientists had emerged from the miserly corners in which they had done their despised ‘stinks’ before, and it seemed that their swords could be turned into ploughshares, or more precisely, that they would supply governments with scientific solutions to their problems. On one level Alan Turing belonged to this climate of opinion, and certainly rejected the idea that scientists, rather than generals and politicians, were to blame for the world’s current imperfections. Mermagen from Sherborne days, now a master at Radley, another public school, wrote to Alan at this time for advice about the place of mathematics and science in the post-war world, and Alan replied:41
On the subject of careers for mathematicians I am strongly inclined to think that the effect of ACE, guided projectiles, etc, etc, will be towards a considerably greater demand for mathematicians from a certain level upwards. For instance I am in need of a number who will be required to convert problems into a form which can be understood by the machine. The critical level may be described roughly as the degree of intelligence shown by the machine. We obviously do not want people who can take no responsibility at all. We just make the machine do the work which might have been given to them. At present of course this critical level is very low and I am sure you need not be afraid of encouraging boys that are keen and want to take up a mathematical career. The worst danger is probably an anti-scientific reaction (Scientists instead of goats at Bikini etc) but this is a digression.*
But what was the intelligence for? – that was the unasked question. What was the goal towards which the technicians and managers were now working? There was a vacuum at the centre in 1947, as the convictions of the 1930s and the enforced unity of the war evaporated away. The great opponent in the chess game had been beaten, and no one had yet taken his place.
By speaking of the mind in terms of puzzle-solving intelligence, of finding efficient means for undiscussed ends, Alan Turing superficially epitomised the technocratic outlook of 1947 social management. But it was only on the surface. For he had no interest in applying computers or related ‘Wellsian developments’ to the problems of society. He had wisely put examples of the usefulness of the computer into his report, to get it paid for. But his picture of the imagined installation simply copied what he had seen in operation at Bletchley. He knew it could be done, but had little interest in it, nor indeed the ability to organise this side of it himself. The ACE would need a Travis to keep it going. Even in this letter, superficially about the usefulness of mathematics, his interest lay in comparing the intelligence of the planned computer with that of boys – a favourite comparison, in fact. His whole enterprise was still motivated by a fascination with knowledge itself, in this case with an understanding of the magic of the human mind. He was not a Babbage, with an interest in the more efficient division of labour. His interest in the ACE had little to do with the ‘mechanization, rationalization, modernization’ that Orwell foresaw, although the computer might be funded for this purpose. It was much closer to an undiminished wonder at ‘the glory and beauty of Nature’, and an almost erotic longing to encompass it. Indeed his letter to W.R. Ashby had stated baldly that
In working on the ACE I am more interested in the possibility of producing models of the action of the brain than in the practical applications to computing.
If, furthermore, he had left something out by confining his account of mind to a discussion of puzzle-solving, an omission that reflected the temper of the times, this was not because he thought that this kind of ‘intelligence’ was vastly superior to other human characteristics. In fact it was almost the reverse.
Perhaps this was the most surprising thing about Alan Turing. Despite all he had done in the war, and all the struggles with stupidity, he still did not think of intellectuals or scientists as forming a superior class. The intelligent machine, taking over the role of the ‘masters’, would be a development which would cut the intellectual expert down to size. As Victorian technology had mechanised the work of the artisans, the computer of the future would automate the trade of intelligent thinking. The craft jealousy displayed by human experts only delighted him. In this way he was an anti-technocrat, subversively diminishing the authority of the new priests and magicians of the world. He wanted to make intellectuals into ordinary people. This was not at all calculated to please Sir Charles Darwin.
Alan’s talk was, as it happened, given on the same day that the British government announced its rapid withdrawal from India. The lessons of the war were at last sinking in, accentuated by the fuel crisis which the new management of the National Coal Board were powerless to control. Britain was no longer one of the ‘Big Three’, her role in the Mediterranean being quickly taken by the United States. It was a moment of truth, in which Britain appeared as a giant desert island. Germany had forced the truly Big Two out of an artificial isolation, and neither of them had fought to preserve British interests or markets. If there was a silver lining in the clouds, it was the fond belief that Britain could do better than ‘the American tradition of solving one’s difficulties by means of much equipment rather than by thought,’ in Alan’s words.
Alan Turing: The Enigma The Centenary Edition Page 57