The Beginning of Infinity

by David Deutsch


  It is no defence of inductivism to point out that in all those cases the future still does ‘resemble the past’ in the sense that it obeys the same underlying laws of nature. For that is an empty statement: any purported law of nature – true or false – about the future and the past is a claim that they ‘resemble’ each other by both conforming to that law. So that version of the ‘principle of induction’ could not be used to derive any theory or prediction from experience or anything else.

  Even in everyday life we are well aware that the future is unlike the past, and are selective about which aspects of our experience we expect to be repeated. Before the year 2000, I had experienced thousands of times that if a calendar was properly maintained (and used the standard Gregorian system), then it displayed a year number beginning with ‘19’. Yet at midnight on 31 December 1999 I expected to have the experience of seeing a ‘20’ on every such calendar. I also expected that there would be a gap of 17,000 years before anyone experienced a ‘19’ under those conditions again. Neither I nor anyone else had ever observed such a ‘20’, nor such a gap, but our explanatory theories told us to expect them, and expect them we did.
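
  A brief aside on the arithmetic behind that figure (an editorial illustration, not Deutsch's): after 1999, the next calendar year whose digits begin with '19' is the year 19000, which is 17,000 years after 2000. A minimal Python sketch of that reasoning:

    # Illustrative sketch (not from the book): the arithmetic behind the
    # 17,000-year gap. Find the next year after 1999 whose decimal digits
    # begin with '19', assuming year numbers simply keep counting upward.
    year = 2000
    while not str(year).startswith("19"):
        year += 1
    print(year, year - 2000)  # prints: 19000 17000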

  As the ancient philosopher Heraclitus remarked, ‘No man ever steps in the same river twice, for it is not the same river and he is not the same man.’ So, when we remember seeing sunrise ‘repeatedly’ under ‘the same’ circumstances, we are tacitly relying on explanatory theories to tell us which combinations of variables in our experience we should interpret as being ‘repeated’ phenomena in the underlying reality, and which are local or irrelevant. For instance, theories about geometry and optics tell us not to expect to see a sunrise on a cloudy day, even if a sunrise is really happening in the unobserved world behind the clouds. Only from those explanatory theories do we know that failing to see the sun on such days does not amount to an experience of its not rising. Similarly, theory tells us that if we see sunrise reflected in a mirror, or in a video or a virtual-reality game, that does not count as seeing it twice. Thus the very idea that an experience has been repeated is not itself a sensory experience, but a theory.

  So much for inductivism. And since inductivism is false, empiricism must be as well. For if one cannot derive predictions from experience, one certainly cannot derive explanations. Discovering a new explanation is inherently an act of creativity. To interpret dots in the sky as white-hot, million-kilometre spheres, one must first have thought of the idea of such spheres. And then one must explain why they look small and cold and seem to move in lockstep around us and do not fall down. Such ideas do not create themselves, nor can they be mechanically derived from anything: they have to be guessed – after which they can be criticized and tested. To the extent that experiencing dots ‘writes’ something into our brains, it does not write explanations but only dots. Nor is nature a book: one could try to ‘read’ the dots in the sky for a lifetime – many lifetimes – without learning anything about what they really are.

  Historically, that is exactly what happened. For millennia, most careful observers of the sky believed that the stars were lights embedded in a hollow, rotating ‘celestial sphere’ centred on the Earth (or that they were holes in the sphere, through which the light of heaven shone). This geocentric – Earth-centred – theory of the universe seemed to have been directly derived from experience, and repeatedly confirmed: anyone who looked up could ‘directly observe’ the celestial sphere, and the stars maintaining their relative positions on it and being held up just as the theory predicts. Yet in reality, the solar system is heliocentric – centred on the sun, not the Earth – and the Earth is not at rest but in complex motion. Although we first noticed a daily rotation by observing stars, it is not a property of the stars at all, but of the Earth, and of the observers who rotate with it. It is a classic example of the deceptiveness of the senses: the Earth looks and feels as though it is at rest beneath our feet, even though it is really rotating. As for the celestial sphere, despite being visible in broad daylight (as the sky), it does not exist at all.

  The deceptiveness of the senses was always a problem for empiricism – and thereby, it seemed, for science. The empiricists’ best defence was that the senses cannot be deceptive in themselves. What misleads us are only the false interpretations that we place on appearances. That is indeed true – but only because our senses themselves do not say anything. Only our interpretations of them do, and those are very fallible. But the real key to science is that our explanatory theories – which include those interpretations – can be improved, through conjecture, criticism and testing.

  Empiricism never did achieve its aim of liberating science from authority. It denied the legitimacy of traditional authorities, and that was salutary. But unfortunately it did this by setting up two other false authorities: sensory experience and whatever fictitious process of ‘derivation’, such as induction, one imagines is used to extract theories from experience.

  The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails. To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

  The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism. To believers in the justified-true-belief theory of knowledge, this recognition is the occasion for despair or cynicism, because to them it means that knowledge is unattainable. But to those of us for whom creating knowledge means understanding better what is really there, and how it really behaves and why, fallibilism is part of the very means by which this is achieved. Fallibilists expect even their best and most fundamental explanations to contain misconceptions in addition to truth, and so they are predisposed to try to change them for the better. In contrast, the logic of justificationism is to seek (and typically, to believe that one has found) ways of securing ideas against change. Moreover, the logic of fallibilism is that one not only seeks to correct the misconceptions of the past, but hopes in the future to find and change mistaken ideas that no one today questions or finds problematic. So it is fallibilism, not mere rejection of authority, that is essential for the initiation of unlimited knowledge growth – the beginning of infinity.

  The quest for authority led empiricists to downplay and even stigmatize conjecture, the real source of all our theories. For if the senses were the only source of knowledge, then error (or at least avoidable error) could be caused only by adding to, subtracting from or misinterpreting what that source is saying. Thus empiricists came to believe that, in addition to rejecting ancient authority and tradition, scientists should suppress or ignore any new ideas they might have, except those that had been properly ‘derived’ from experience. As Arthur Conan Doyle’s fictional detective Sherlock Holmes put it in the short story ‘A Scandal in Bohemia’, ‘It is a capital mistake to theorize before one has data.’

  But that was itself a capital mistake. We never know any data before interpreting it through theories. All observations are, as Popper put it, theory-laden,* and hence fallible, as all our theories are. Consider the nerve signals reaching our brains from our sense organs. Far from providing direct or untainted access to reality, even they themselves are never experienced for what they really are – namely crackles of electrical activity. Nor, for the most part, do we experience them as being where they really are – inside our brains. Instead, we place them in the reality beyond. We do not just see blue: we see a blue sky up there, far away. We do not just feel pain: we experience a headache, or a stomach ache. The brain attaches those interpretations – ‘head’, ‘stomach’ and ‘up there’ – to events that are in fact within the brain itself. Our sense organs themselves, and all the interpretations that we consciously and unconsciously attach to their outputs, are notoriously fallible – as witness the celestial-sphere theory, as well as every optical illusion and conjuring trick. So we perceive nothing as what it really is. It is all theoretical interpretation: conjecture.

  Conan Doyle came much closer to the truth when, during ‘The Boscombe Valley Mystery’, he had Holmes remark that ‘circumstantial evidence’ (evidence about unwitnessed events) is ‘a very tricky thing . . . It may seem to point very straight to one thing, but if you shift your own point of view a little, you may find it pointing in an equally uncompromising manner to something entirely different . . . There is nothing more deceptive than an obvious fact.’ The same holds for scientific discovery. And that again raises the question: how do we know? If all our theories originate locally, as guesswork in our own minds, and can be tested only locally, by experience, how is it that they contain such extensive and accurate knowledge about the reality that we have never experienced?

  I am not asking what authority scientific knowledge is derived from, or rests on. I mean, literally, by what process do ever truer and more detailed explanations about the world come to be represented physically in our brains? How do we come to know about the interactions of subatomic particles during transmutation at the centre of a distant star, when even the tiny trickle of light that reaches our instruments from the star was emitted by glowing gas at the star’s surface, a million kilometres above where the transmutation is happening? Or about conditions in the fireball during the first few seconds after the Big Bang, which would instantly have destroyed any sentient being or scientific instrument? Or about the future, which we have no way of measuring at all? How is it that we can predict, with some non-negligible degree of confidence, whether a new design of microchip will work, or whether a new drug will cure a particular disease, even though they have never existed before?

  For most of human history, we did not know how to do any of this. People were not designing microchips or medications or even the wheel. For thousands of generations, our ancestors looked up at the night sky and wondered what stars are – what they are made of, what makes them shine, what their relationship is with each other and with us – which was exactly the right thing to wonder about. And they were using eyes and brains anatomically indistinguishable from those of modern astronomers. But they discovered nothing about it. Much the same was true in every other field of knowledge. It was not for lack of trying, nor for lack of thinking. People observed the world. They tried to understand it – but almost entirely in vain. Occasionally they recognized simple patterns in the appearances. But when they tried to find out what was really there behind those appearances, they failed almost completely.

  I expect that, like today, most people wondered about such things only occasionally – during breaks from addressing their more parochial concerns. But their parochial concerns also involved yearning to know – and not only out of pure curiosity. They wished they knew how to safeguard their food supply; how they could rest when tired without risking starvation; how they could be warmer, cooler, safer, in less pain – in every aspect of their lives, they wished they knew how to make progress. But, on the timescale of individual lifetimes, they almost never made any. Discoveries such as fire, clothing, stone tools, bronze, and so on, happened so rarely that from an individual’s point of view the world never improved. Sometimes people even realized (with somewhat miraculous prescience) that making progress in practical ways would depend on progress in understanding puzzling phenomena in the sky. They even conjectured links between the two, such as myths, which they found compelling enough to dominate their lives – yet which still bore no resemblance to the truth. In short, they wanted to create knowledge, in order to make progress, but they did not know how.

  This was the situation from our species’ earliest prehistory, through the dawn of civilization, and through its imperceptibly slow increase in sophistication – with many reverses – until a few centuries ago. Then a powerful new mode of discovery and explanation emerged, which later became known as science. Its emergence is known as the scientific revolution, because it succeeded almost immediately in creating knowledge at a noticeable rate, which has increased ever since.

  What had changed? What made science effective at understanding the physical world when all previous ways had failed? What were people now doing, for the first time, that made the difference? This question began to be asked as soon as science began to be successful, and there have been many conflicting answers, some containing truth. But none, in my view, has reached the heart of the matter. To explain my own answer, I have to give a little context first.

  The scientific revolution was part of a wider intellectual revolution, the Enlightenment, which also brought progress in other fields, especially moral and political philosophy, and in the institutions of society. Unfortunately, the term ‘the Enlightenment’ is used by historians and philosophers to denote a variety of different trends, some of them violently opposed to each other. What I mean by it will emerge here as we go along. It is one of several aspects of ‘the beginning of infinity’, and is a theme of this book. But one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.

  Rejecting authority in regard to knowledge was not just a matter of abstract analysis. It was a necessary condition for progress, because, before the Enlightenment, it was generally believed that everything important that was knowable had already been discovered, and was enshrined in authoritative sources such as ancient writings and traditional assumptions. Some of those sources did contain some genuine knowledge, but it was entrenched in the form of dogmas along with many falsehoods. So the situation was that all the sources from which it was generally believed knowledge came actually knew very little, and were mistaken about most of the things that they claimed to know. And therefore progress depended on learning how to reject their authority. This is why the Royal Society (one of the earliest scientific academies, founded in London in 1660) took as its motto ‘Nullius in verba’, which means something like ‘Take no one’s word for it.’

  However, rebellion against authority cannot by itself be what made the difference. Authorities have been rejected many times in history, and only rarely has any lasting good come of it. The usual sequel has merely been that new authorities replaced the old. What was needed for the sustained, rapid growth of knowledge was a tradition of criticism. Before the Enlightenment, that was a very rare sort of tradition: usually the whole point of a tradition was to keep things the same.

  Thus the Enlightenment was a revolution in how people sought knowledge: by trying not to rely on authority. That is the context in which empiricism – purporting to rely solely on the senses for knowledge – played such a salutary historical role, despite being fundamentally false and even authoritarian in its conception of how science works.

  One consequence of this tradition of criticism was the emergence of a methodological rule that a scientific theory must be testable (though this was not made explicit at first). That is to say, the theory must make predictions which, if the theory were false, could be contradicted by the outcome of some possible observation. Thus, although scientific theories are not derived from experience, they can be tested by experience – by observation or experiment. For example, before the discovery of radioactivity, chemists had believed (and had verified in countless experiments) that transmutation is impossible. Rutherford and Soddy boldly conjectured that uranium spontaneously transmutes into other elements. Then, by demonstrating the creation of the element radium in a sealed container of uranium, they refuted the prevailing theory and science progressed. They were able to do that because that earlier theory was testable: it was possible to test for the presence of radium. In contrast, the ancient theory that all matter is composed of combinations of the elements earth, air, fire and water was untestable, because it did not include any way of testing for the presence of those components. So it could never be refuted by experiment. Hence it could never be – and never was – improved upon through experiment. The Enlightenment was at root a philosophical change.

  The physicist Galileo Galilei was perhaps the first to understand the importance of experimental tests (which he called cimenti, meaning ‘trials by ordeal’) as distinct from other forms of experiment and observation, which can more easily be mistaken for ‘reading from the Book of Nature’. Testability is now generally accepted as the defining characteristic of the scientific method. Popper called it the ‘criterion of demarcation’ between science and non-science.

  Nevertheless, testability cannot have been the decisive factor in the scientific revolution either. Contrary to what is often said, testable predictions had always been quite common. Every traditional rule of thumb for making a flint blade or a camp fire is testable. Every would-be prophet who claims that the sun will go out next Tuesday has a testable theory. So does every gambler who has a hunch that ‘this is my lucky night – I can feel it’. So what is the vital, progress-enabling ingredient that is present in science, but absent from the testable theories of the prophet and the gambler?

 
