There are several current areas of research in which that same misconception is common. In chatbot-based AI research it sent the whole field down a blind alley, but in other fields it has merely caused researchers to attach overambitious labels to genuine, albeit relatively modest, achievements. One such area is artificial evolution.
Recall Edison’s idea that progress requires alternating ‘inspiration’ and ‘perspiration’ phases, and that, because of computers and other technology, it is increasingly becoming possible to automate the perspiration phase. This welcome development has misled those who are overconfident about achieving artificial evolution (and AI). For example, suppose that you are a graduate student in robotics, hoping to build a robot that walks on legs better than previous robots do. The first phase of the solution must involve inspiration – that is to say, creative thought, attempting to improve upon previous researchers’ attempts to solve the same problem. You will start from that, and from existing ideas about other problems that you conjecture may be related, and from the designs of walking animals in nature. All of that constitutes existing knowledge, which you will vary and combine in new ways, and then subject to criticism and further variation. Eventually you will have created a design for the hardware of your new robot: its legs with their levers, joints, tendons and motors; its body, which will hold the power supply; its sense organs, through which it will receive the feedback that will allow it to control those limbs effectively; and the computer that will exercise that control. You will have adapted everything in that design as best you can to the purpose of walking, except the program in the computer.
The function of that program will be to recognize situations such as the robot beginning to topple over, or obstacles in its path, and to calculate the appropriate action and to take it. This is the hardest part of your research project. How does one recognize when it is best to avoid an obstacle to the left or to the right, or jump over it or kick it aside or ignore it, or lengthen one’s stride to avoid stepping on it – or judge it impassable and turn back? And, in all those cases, how does one specifically do those things in terms of sending countless signals to the motors and the gears, as modified by feedback from the senses?
You will break the problem down into sub-problems. Veering by a given angle is similar to veering by a different angle. That allows you to write a subroutine for veering that takes care of that whole continuum of possible cases. Once you have written it, all other parts of the program need only call it whenever they decide that veering is required, and so they do not have to contain any knowledge about the messy details of what it takes to veer. When you have identified and solved as many of these sub-problems as you can, you will have created a code, or language, that is highly adapted to making statements about how your robot should walk. Each call of one of its subroutines is a statement or command in that language.
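A minimal Python sketch of such a subroutine may make this concrete. Everything here is hypothetical (the function name, the placeholder motor interface, the step size): the point is only that one routine absorbs the whole continuum of veering cases, so its callers need no knowledge of the messy details.

```python
# Hypothetical sketch: one 'veer' subroutine handles every possible
# veering angle, hiding the low-level motor details from its callers.

def veer(angle_degrees, motors):
    """Turn the robot by any angle; callers need not know how."""
    # The 'messy details': translate one high-level command into many
    # low-level motor signals (illustrative placeholder logic).
    steps = max(1, int(abs(angle_degrees) // 5))
    per_step = angle_degrees / steps
    for _ in range(steps):
        motors.append(("turn", per_step))  # stand-in for one motor signal
    return motors

# Any other part of the program can now say, in effect,
# 'veer 30 degrees left' in a single statement of the new language:
signals = veer(-30, [])
```

Each such call is a statement in the high-level language the text describes: one line for the caller, many signals underneath.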
So far, most of what you have done comes under the heading of ‘inspiration’: it required creative thought. But now perspiration looms. Once you have automated everything that you know how to automate, you have no choice but to resort to some sort of trial and error to achieve any additional functionality. However, you do now have the advantage of a language that you have adapted for the purpose of instructing the robot in how to walk. So you can start with a program that is simple in that language, despite being very complex in terms of elementary instructions of the computer, and which means, for instance, ‘Walk forwards and stop if you hit an obstacle.’ Then you can run the robot with that program and see what happens. (Or you can run a computer simulation of the robot.) When it falls over or anything else undesirable happens, you can modify your program – still using the high-level language you have created – to eliminate the deficiencies as they arise. That method will require ever less inspiration and ever more perspiration.
But an alternative approach is also open to you: you can delegate the perspiration to a computer, by using a so-called evolutionary algorithm. Using the same computer simulation, you run many trials, each with a slight random variation of that first program. The evolutionary algorithm subjects each simulated robot automatically to a battery of tests that you have provided – how far it can walk without falling over, how well it copes with obstacles and rough terrain, and so on. At the end of each run, the program that performed best is retained, and the rest are discarded. Then many variants of that program are created, and the process is repeated. After thousands of iterations of this ‘evolutionary’ process, you may find that your robot walks quite well, according to the criteria you have set. You can now write your thesis. Not only can you claim to have achieved a robot that walks with a required degree of skill, you can claim to have implemented evolution on a computer.
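The loop just described can be sketched in Python. Everything in this sketch is illustrative, not from any real robotics project: a toy fitness function stands in for the battery of walking tests, and a short list of numbers stands in for the robot’s control program.

```python
import random

random.seed(0)  # deterministic run, for reproducibility

def mutate(program, rate=0.1):
    """Return a slight random variation of the program's parameters."""
    return [p + random.gauss(0, rate) for p in program]

def evolve(fitness, length=5, population=20, generations=200):
    """Variation and selection: retain the best performer each round."""
    best = [0.0] * length                  # the simple starting program
    for _ in range(generations):
        variants = [mutate(best) for _ in range(population)]
        variants.append(best)              # keep the incumbent too
        best = max(variants, key=fitness)  # select; discard the rest
    return best

# Toy stand-in for the battery of tests: programs score higher the
# closer their parameters are to an (arbitrary) target behaviour.
target = [1.0, -2.0, 0.5, 3.0, -1.0]
def fitness(program):
    return -sum((p - t) ** 2 for p, t in zip(program, target))

winner = evolve(fitness)
```

Note where the knowledge resides in this sketch: the fitness function and the representation were designed by the programmer, which is precisely the issue raised below.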
This sort of thing has been done successfully many times. It is a useful technique. It certainly constitutes ‘evolution’ in the sense of alternating variation and selection. But is it evolution in the more important sense of the creation of knowledge by variation and selection? This will be achieved one day, but I doubt that it has been yet, for the same reason that I doubt that chatbots are intelligent, even slightly. The reason is that there is a much more obvious explanation of their abilities, namely the creativity of the programmer.
The task of ruling out the possibility that the knowledge was created by the programmer in the case of ‘artificial evolution’ has the same logic as checking that a program is an AI – but harder, because the amount of knowledge that the ‘evolution’ purportedly creates is vastly less. Even if you yourself are the programmer, you are in no position to judge whether you created that relatively small amount of knowledge or not. For one thing, some of the knowledge that you packed into that language during those many months of design will have reach, because it encoded some general truths about the laws of geometry, mechanics and so on. For another, when designing the language you had constantly in mind what sorts of abilities it would eventually be used to express.
The Turing-test idea makes us think that, if it is given enough standard reply templates, an Eliza program will automatically be creating knowledge; artificial evolution makes us think that if we have variation and selection, then evolution (of adaptations) will automatically happen. But neither is necessarily so. In both cases, another possibility is that no knowledge at all will be created during the running of the program, only during its development by the programmer.
One thing that always seems to happen with such projects is that, after they achieve their intended aim, if the ‘evolutionary’ program is allowed to run further it produces no further improvements. This is exactly what would happen if all the knowledge in the successful robot had actually come from the programmer, but it is not a conclusive critique: biological evolution often reaches ‘local maxima of fitness’. Also, after biological evolution attained its mysterious form of universality, it seemed to pause for about a billion years before creating any significant new knowledge. But still, achieving results that might well be due to something else is not evidence of evolution.
That is why I doubt that any ‘artificial evolution’ has ever created knowledge. I have the same view, for the same reasons, about the slightly different kind of ‘artificial evolution’ that tries to evolve simulated organisms in a virtual environment, and the kind that pits different virtual species against each other.
To test this proposition, I would like to see an experiment of a slightly different kind: eliminate the graduate student from the project. Then, instead of using a robot designed to evolve better ways of walking, use a robot that is already in use in some real-life application and happens to be capable of walking. And then, instead of creating a special language of subroutines in which to express conjectures about how to walk, just replace its existing program, in its existing microprocessor, by random numbers. For mutations, use errors of the type that happen anyway in such processors (though in the simulation you are allowed to make them happen as often as you like). The purpose of all that is to eliminate the possibility that human knowledge is being fed into the design of the system, and that its reach is being mistaken for the product of evolution. Then, run simulations of that mutating system in the usual way. As many as you like. If the robot ever walks better than it did originally, then I am mistaken. If it continues to improve after that, then I am very much mistaken.
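The mutation step of that proposed experiment might be sketched as follows, under stated assumptions: the control program is treated as raw memory, and a mutation is a single random bit-flip of the kind real processors suffer anyway. All names are illustrative; no human-designed language of subroutines mediates between mutation and behaviour.

```python
import random

random.seed(0)  # for a reproducible illustration

def random_program(size=256):
    """Replace the existing program with random numbers (raw bytes)."""
    return bytearray(random.getrandbits(8) for _ in range(size))

def flip_random_bit(program):
    """One mutation of the type that happens anyway in such processors."""
    mutant = bytearray(program)
    i = random.randrange(len(mutant))
    mutant[i] ^= 1 << random.randrange(8)  # flip exactly one bit
    return mutant

program = random_program()
mutant = flip_random_bit(program)
# Verify that parent and mutant differ in exactly one bit:
differing_bits = sum(bin(a ^ b).count("1") for a, b in zip(program, mutant))
```

The prediction in the text is that no amount of such mutation and selection, starting from random numbers and with no programmer-supplied language, would ever make the robot walk better.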
One of the main features of the above experiment, which is lacking in the usual way of doing artificial evolution, is that, for it to work, the language (of subroutines) would have to evolve along with the adaptations that it was expressing. This is what was happening in the biosphere before that jump to universality that finally settled on the DNA genetic code. As I said, it may be that all those previous genetic codes were only capable of coding for a small number of organisms that were all rather similar. And that the overwhelmingly rich biosphere that we see around us, created by randomly varying genes while leaving the language unchanged, is something that became possible only after that jump. We do not even know what kind of universality was created there. So why should we expect our artificial evolution to work without it?
I think we have to face the fact, both with artificial evolution and with AI, that these are hard problems. There are serious unknowns in how those phenomena were achieved in nature. Trying to achieve them artificially without ever discovering those unknowns was perhaps worth trying. But it should be no surprise that it has failed. Specifically, we do not know why the DNA code, which evolved to describe bacteria, has enough reach to describe dinosaurs and humans. And, although it seems obvious that an AI will have qualia and consciousness, we cannot explain those things. So long as we cannot explain them, how can we expect to simulate them in a computer program? Or why should they emerge effortlessly from projects designed to achieve something else? But my guess is that when we do understand them, artificially implementing evolution and intelligence and its constellation of associated attributes will then be no great effort.
TERMINOLOGY
Quale (plural qualia) The subjective aspect of a sensation.
Behaviourism Instrumentalism applied to psychology. The doctrine that science can (or should) only measure and predict people’s behaviour in response to stimuli.
SUMMARY
The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming it will not be difficult. Even artificial evolution may not have been achieved yet, despite appearances. There the problem is that we do not understand the nature of the universality of the DNA replication system.
8
A Window on Infinity
Mathematicians realized centuries ago that it is possible to work consistently and usefully with infinity. Infinite sets, infinitely large quantities and also infinitesimal quantities all make sense. Many of their properties are counter-intuitive, and the introduction of theories about infinities has always been controversial; but many facts about finite things are just as counter-intuitive. What Dawkins calls the ‘argument from personal incredulity’ is no argument: it represents nothing but a preference for parochial misconceptions over universal truths.
In physics, too, infinity has been contemplated since antiquity. Euclidean space was infinite; and, in any case, space was usually regarded as a continuum: even a finite line was composed of infinitely many points. There were also infinitely many instants between any two times. But the understanding of continuous quantities was patchy and contradictory until Newton and Leibniz invented calculus, a technique for analysing continuous change in terms of infinite numbers of infinitesimal changes.
The ‘beginning of infinity’ – the possibility of the unlimited growth of knowledge in the future – depends on a number of other infinities. One of them is the universality in the laws of nature which allows finite, local symbols to apply to the whole of time and space – and to all phenomena and all possible phenomena. Another is the existence of physical objects that are universal explainers – people – which, it turns out, are necessarily universal constructors as well, and must contain universal classical computers.
Most forms of universality themselves refer to some sort of infinity – though they can always be interpreted in terms of something being unlimited rather than actually infinite. This is what opponents of infinity call a ‘potential infinity’ rather than a ‘realized’ one. For instance, the beginning of infinity can be described either as a condition where ‘progress in the future will be unbounded’ or as the condition where ‘an infinite amount of progress will be made’. But I use those concepts interchangeably, because in this context there is no substantive difference between them.
There is a philosophy of mathematics called finitism, the doctrine that only finite abstract entities exist. So, for instance, there are infinitely many natural numbers, but finitists insist that that is just a manner of speaking. They say that the literal truth is only that there is a finite rule for generating each natural number (or, more precisely, each numeral) from the previous one, and nothing literally infinite is involved. But this doctrine runs into the following problem: is there a largest natural number or not? If there is, then that contradicts the statement that there is a rule that defines a larger one. If there is not, then there are not finitely many natural numbers. Finitists are then obliged to deny a principle of logic: the ‘law of the excluded middle’, which is that, for every meaningful proposition, either it or its negation is true. So finitists say that, although there is no largest number, there is not an infinity of numbers either.
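The first horn of that dilemma can be made precise. That every natural number has a larger successor is a one-line theorem; here it is sketched in Lean, so that, given the law of the excluded middle, no largest natural number can exist:

```lean
-- For every natural number there is a larger one, so (by the law of
-- the excluded middle) there is no largest, and the naturals are not
-- finitely many.
theorem no_largest : ∀ n : ℕ, ∃ m : ℕ, m > n :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩
```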
Finitism is instrumentalism applied to mathematics: it is a principled rejection of explanation. It attempts to see mathematical entities purely as procedures that mathematicians follow, rules for making marks on paper and so on – useful in some situations, but not referring to anything real other than the finite objects of experience such as two apples or three oranges. And so finitism is inherently anthropocentric – which is not surprising, since it regards parochialism as a virtue of a theory rather than a vice. It also suffers from another fatal flaw that instrumentalism and empiricism have in regard to science, which is that it assumes that mathematicians have some sort of privileged access to finite entities which they do not have for infinite ones. But that is not the case. All observation is theory-laden. All abstract theorizing is theory-laden too. All access to abstract entities, finite or infinite, is via theory, just as for physical entities.
In other words, finitism, like instrumentalism, is nothing but a project for preventing progress in understanding the entities beyond our direct experience. But that means progress generally, for, as I have explained, there are no entities within our ‘direct experience’.
The whole of the above discussion assumes the universality of reason. The reach of science has inherent limitations; so does mathematics; so does every branch of philosophy. But if you believe that there are bounds on the domain in which reason is the proper arbiter of ideas, then you believe in unreason or the supernatural. Similarly, if you reject the infinite, you are stuck with the finite, and the finite is parochial. So there is no way of stopping there. The best explanation of anything eventually involves universality, and therefore infinity. The reach of explanations cannot be limited by fiat.
One expression of this within mathematics is the principle, first made explicit by the mathematician Georg Cantor in the nineteenth century, that abstract entities may be defined in any desired way out of other entities, so long as the definitions are unambiguous and consistent. Cantor founded the modern mathematical study of infinity. His principle was defended and further generalized in the twentieth century by the mathematician John Conway, who whimsically but appropriately named it the mathematicians’ liberation movement. As those defences suggest, Cantor’s discoveries encountered vitriolic opposition among his contemporaries, including most mathematicians of the day and also many scientists, philosophers – and theologians. Religious objections, ironically, were in effect based on the Principle of Mediocrity. They characterized attempts to understand and work with infinity as an encroachment on the prerogatives of God. In the mid twentieth century, long after the study of infinity had become a routine part of mathematics and had found countless applications there, the philosopher Ludwig Wittgenstein still contemptuously denounced it as ‘meaningless’. (Though eventually he also applied that accusation to the whole of philosophy, including his own work – see Chapter 12.)
I have already mentioned other examples of the principled rejection of infinity. There was the strange aversion of Archimedes, Apollonius and others to universal systems of numerals. There are doctrines such as instrumentalism and finitism. The Principle of Mediocrity sets out to escape parochialism and to reach for infinity, but ends up confining science to an infinitesimal and unrepresentative bubble of comprehensibility. There is also pessimism, which (as I shall discuss in the following chapter) wants to attribute failure to the existence of a finite bound on improvement. One instance of pessimism is the paradoxical parochialism of Spaceship Earth – a vehicle that would be far better suited as a metaphor for infinity.
Whenever we refer to infinity, we are making use of the infinite reach of some idea. For whenever an idea of infinity makes sense, that is because there is an explanation of why some finite set of rules for manipulating finite symbols refers to something infinite. (Let me repeat that this underlies our knowledge of everything else as well.)
In mathematics, infinity is studied via infinite sets (meaning sets with infinitely many members). The defining property of an infinite set is that some part of it has as many elements as the whole thing. For instance, think of the natural numbers:
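As a sketch of that defining property: the pairing of each natural number with its double matches the whole set of naturals, one to one, with only a part of itself, the even numbers. A program can of course only enumerate a finite prefix of the infinite pairing; everything here is illustrative.

```python
# A finite glimpse of the defining property of an infinite set:
# the pairing n -> 2n matches each natural number with a distinct
# even number, so the part (the evens) is as numerous as the whole.

def double(n):
    return 2 * n

naturals = list(range(1, 11))              # a finite prefix of 1, 2, 3, ...
pairs = [(n, double(n)) for n in naturals]
evens = [e for _, e in pairs]              # each natural gets its own even
```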