Charles Peirce was not one of them. He believed that he could see spontaneous life around him at every turn. (And he attacked Laplace in print.) He argued that, by definition, the laws of nature themselves must have evolved.49 He was Darwinian enough to believe in contingency and indeterminacy, and his ultimate philosophy was designed to steer a way through the confusion.50 In 1812, in his Théorie analytique des probabilités, Laplace had said ‘We must…imagine the present state of the universe as the effect of its prior state and as the cause of the state that will follow.’ This is Newton’s billiard-ball theory of matter, applied generally, even to human beings, and one in which chance has no part.51 Against this, in his Theory of Heat, published in 1871, the Scottish physicist James Clerk Maxwell had argued that the behaviour of molecules in a gas could be understood probabilistically. (Peirce met Maxwell on a visit to Cambridge in 1875.)52 The temperature of a gas in a sealed container is a function of the velocity of the molecules–the faster they move, the more they collide and the higher the temperature. But, and most importantly from a theoretical point of view, the temperature is related to the average velocity of the molecules, which vary in their individual speeds. How was this average to be arrived at, and how was it to be understood? Maxwell argued that ‘the velocities are distributed among the particles according to the same law as the errors are distributed among the observations in the theory of the “method of least squares”’. (This had first been observed among astronomers: see above, page 657.)53 Maxwell’s point, the deep significance of his arguments for the nineteenth century, was that physical laws are not Newtonian, not absolutely precise. Peirce grasped the significance of this in the biological, Darwinian realm. In effect, it created the circumstances where natural selection could operate. Menand asks us to consider birds as an example. In any particular species of finch, say, most individuals will have beaks within the ‘normal distribution’, but every so often a bird with a beak outside the range will be born, and if this confers an evolutionary advantage it will be ‘selected’. To this extent, evolution proceeds by chance, not on an entirely random basis but according to statistical laws.54
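The statistical picture at work here–random individual variation governed by a lawful distribution, with rare variants in the tail for selection to act on–can be made concrete with a small numerical sketch. The snippet below is my own illustration, not anything in Maxwell or Menand; the beak-width figures, the normal-distribution parameters and the three-sigma cut-off are arbitrary assumptions chosen only to show the shape of the argument.

```python
import random
import statistics

random.seed(1)

# Beak widths (arbitrary units) drawn from a normal distribution, analogous
# to molecular speeds clustering around an average value.
population = [random.gauss(10.0, 1.0) for _ in range(10_000)]

mean_width = statistics.mean(population)   # the lawful, stable average
spread = statistics.stdev(population)      # individual variation around it

# Birds more than three standard deviations above the mean: rare chance
# variants which, if the extra width happens to confer an advantage, are
# the raw material natural selection can 'select'.
outliers = [w for w in population if w > mean_width + 3 * spread]

print(f"average beak width: {mean_width:.2f}")
print(f"individual spread:  {spread:.2f}")
print(f"variants beyond three sigma: {len(outliers)} of {len(population)}")
```

Each individual value is unpredictable, yet the average and the spread are stable and lawful, and a handful of outliers always appear–chance constrained by statistical law, which is the point Peirce took from Maxwell.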
Peirce was very impressed by such thinking. If even physical events, the smallest and in a sense the most fundamental occurrences, are uncertain, and if even the perception of simple things, like the location of stars, is fallible, how can any single mind ‘mirror’ reality? The awkward truth was: ‘reality doesn’t stand still long enough to be accurately mirrored’. Peirce therefore agreed with Wendell Holmes and William James: experience was what counted and even in science juries were needed. Knowledge was social.55
All this may be regarded as ‘deep background’ to pragmatism (a word Peirce himself rarely used; he later preferred his own coinage ‘pragmaticism’, which he said was ‘ugly enough to be safe from kidnappers’).56 This was, and remains, far more important than it seems at first sight, and more substantial than the everyday use of the word ‘pragmatic’ makes it appear. It was partly the natural corollary of the thinking that had helped create America in the first place, and is discussed in Chapter 28 above. It was partly the effect of the beginnings of indeterminacy in science, which was to be such a feature of twentieth-century thought, and it was partly–even mainly–a further evolution of thought, yet another twist, on the road to individualism.
Here is a classic pragmatic problem, familiar to Holmes, made much use of by James, and highlighted by Menand. Assume that a friend tells you something in the strictest confidence. Later, in discussions with a second friend, you discover two things: first, that he isn’t aware of the confidence that has been shared with you; and second, that he is, in your opinion, about to make a bad mistake which could be avoided if he knew what you know. What do you do? Do you stay loyal to your first friend and keep the confidence? Or do you break the confidence to help out the second friend, so that he avoids injury or embarrassment? James said that the outcome might well depend on which friend you actually preferred, and that was part of his point. The romantics had said that the ‘true’ self was to be found within, but James was saying that, even in a simple situation like this, there were several selves within–or none at all. In fact, he preferred to say that, until one chose a particular course of action, until one behaved, one didn’t know which self one was. ‘In the end, you will do what you believe is right but “rightness” will be, in effect, the compliment you give to the outcome of your deliberations.’57 We can only really understand thinking, said James, if we understand its relationship to behaviour. ‘Deciding to order lobster in a restaurant helps us determine that we have a taste for lobster; deciding that the defendant is guilty helps us establish the standard of justice that applies in this case; choosing to keep a confidence helps us make honesty a principle and choosing to betray it helps confirm the value we put on friendship.’58 Self grows out of behaviour, not the other way round. This directly contradicts romanticism.
James was eager to say that this approach didn’t make life arbitrary, or mean that someone’s motivation was always self-serving. ‘Most of us don’t feel that we are always being selfish in our decisions regarding, say, our moral life.’ He thought that what we do carry within us is an imperfect set of assumptions about ourselves and our behaviour in the past, and about others and their behaviour, which informs every judgement we make.59 According to James, truth is circular: ‘There is no noncircular set of criteria for knowing whether a particular belief is true, no appeal to some standard outside the process of coming to the belief itself. For thinking just is a circular process, in which some end, some imagined outcome, is already present at the start of any train of thought…Truth happens to an idea, it becomes true, is made true by events.’60
At about the time James was having these ideas, there was a remarkable development in the so-called New [Experimental] Psychology. Edward Thorndike, then a graduate student at Harvard, had placed chickens in a box which had a door that could be opened if the animals pecked at a lever. In this way, the chickens were given access to a supply of food pellets beyond the door. Thorndike observed ‘that although at first many actions were tried, apparently unsystematically (i.e., at random), only successful actions performed by chickens who were hungry were learned’.61 James wasn’t exactly surprised by this, but it confirmed his view, albeit in a mundane way. The chickens had learned that if they pecked at the lever the door would open, leading to food, a reward. James went one step further. To all intents and purposes, he said, the chickens believed that if they pecked at the lever the door would open. As he put it, ‘Their beliefs were rules for action.’ And he thought that such rules applied more generally. ‘If behaving as though we have free will, or as if God exists, gets us the results we want, we will not only come to believe those things; they will be, pragmatically, true…“The truth” is the name of whatever proves itself to be good in the way of belief.’62 In other words, and most subversively, truth is not ‘out there’, it has nothing to do with ‘the way things really are’. This is not why we have minds, James said. Minds are adaptive in a Darwinian sense: they help us to get by, which involves consistency between thinking and behaviour.
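The pattern Thorndike describes–actions tried more or less at random, with only the rewarded ones retained–can be sketched in a few lines of code. What follows is a toy illustration of my own, not Thorndike’s actual protocol or data; the action names, the reward rule and the reinforcement increment are invented for the example.

```python
import random

random.seed(1)

actions = ["peck_lever", "peck_floor", "flap", "scratch"]
tendency = {a: 1.0 for a in actions}   # initial tendencies: all actions equally likely

for trial in range(500):
    # The bird tries an action more or less at random, weighted by its
    # current tendencies.
    action = random.choices(actions, weights=[tendency[a] for a in actions])[0]
    # Only pecking the lever opens the door and yields food.
    if action == "peck_lever":
        tendency[action] += 0.5        # the successful action is 'stamped in'

total = sum(tendency.values())
for action in actions:
    print(f"{action:>10}: {tendency[action] / total:.1%} of the bird's tendency to act")
```

Over the run the rewarded action comes to dominate the bird’s behaviour, which is the sense in which, for James, its ‘belief’ just is a rule for action.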
Most controversially of all, James applied his reasoning to intuition, to innate ideas. Whereas Locke had said that all our ideas stem from sensory experience, Kant had insisted that some fundamental notions–the idea of causation being one–could not arise from sensory experience, since we never ‘see’ causation, but only infer it. Therefore, he concluded, such ideas ‘must be innate, wired in from birth’.63 James took Kant’s line (for the most part), that many ideas are innate, but he didn’t think that there was anything mysterious or divine about this.64 In Darwinian terms, it was clear that ‘innate’ ideas are simply variations that have arisen and been naturally selected. ‘Minds that possessed them were preferred over minds that did not.’ But this wasn’t because those ideas were more ‘true’ in an abstract or theological sense; instead, it was because they helped organisms to adapt.65 The reason that we believed in God (when we did believe in God)
was that experience showed it paid to believe in God. When people stopped believing in God (as they did in large numbers in the nineteenth century–see next chapter), it was because such belief no longer paid.
America’s third pragmatic philosopher, after Peirce and James, was John Dewey. A professor in Chicago, Dewey boasted a Vermont drawl, rimless eyeglasses and a complete lack of fashion sense. In some ways he was the most successful pragmatist of all. Like James, he believed that everyone has their own philosophy, their own set of beliefs, and that such a philosophy should help people to lead happier and more productive lives. His own life was particularly productive. Through newspaper articles, popular books, and a number of debates conducted with other philosophers, such as Bertrand Russell or Arthur Lovejoy, author of The Great Chain of Being, Dewey became known to the general public in a way that few philosophers are.66 Like James, Dewey was a convinced Darwinist, someone who believed that science and the scientific approach needed to be incorporated into other areas of life. In particular, he believed that the discoveries of science should be adapted to the education of children. For Dewey, the start of the twentieth century was an age of ‘democracy, science and industrialism’ and this, he argued, had profound consequences for education. At that time, attitudes to children were changing fast. In 1909 the Swedish feminist Ellen Key published her book The Century of the Child, which reflected the general view that the child had been rediscovered–rediscovered in the sense that there was a new joy in the possibilities of childhood and in the realisation that children were different from adults and from one another.67 This seems no more than common sense to us, but in the nineteenth century, before the victory over a heavy rate of child mortality, when families were much larger and many children died, there was not–there could not be–the same investment in children, in time, in education, in emotion, as there was later. Dewey saw that this had significant consequences for teaching. Hitherto, schooling, even in America, which was in general more indulgent to children than Europe was, had been dominated by the rigid authority of the teacher, who had a concept of what an educated person should be and whose main aim was to convey to his or her pupils the idea that knowledge was the ‘contemplation of fixed verities’.68 Dewey was one of the leaders of a movement which changed such thinking, in two directions. The traditional idea of education, he saw, stemmed from a leisured and aristocratic society, the type of society that was disappearing fast in Europe and had never existed in America. Education now had to meet the needs of democracy. Second, and no less important, education had to reflect the fact that children were very different from one another in abilities and interests. In order for children to make the best contribution to society that they were capable of, education should be less about ‘drumming in’ hard facts which the teacher thought necessary, and more about drawing out what each individual child could do. In other words, pragmatism applied to education.
The ideas of Dewey, along with those of Freud, were undoubtedly influential in helping attach far more importance to childhood than before. The notion of personal growth, and the rolling back of traditional, authoritarian conceptions of what knowledge is and what education should seek to do, were liberating ideas for many people. (Dewey’s frank aim was to make society, via education, more ‘worthy, lovely and harmonious’.)69 In America, with its many immigrant groups and wide geographical spread, the new education helped to create many individualists. At the same time, the ideas of the ‘growth movement’ always risked being taken too far–with children left to their own devices too much. In some schools where teachers believed that ‘No child should ever know failure…’, examinations and grades were abolished.70
Dewey’s view of philosophy agreed very much with those of James and Peirce. It should be concerned with living in this world, now.71 Thinking and behaviour are two sides of the same coin. Knowledge is part of nature. We all make our way in the world, as best we can, learning as we go what works and what doesn’t: behaviour is not pre-ordained at birth.72 This approach, he felt, should be applied to philosophy where, traditionally, people had been obsessed by the relation between mind and world. Because of this, the celebrated philosophical mystery, How do we know?, was in a sense the wrong question. Dewey illustrated his argument by means of an analogy which Menand highlights: no one has ever been unduly bothered by the no less crucial question of the relation between, for example, the hand and the world. ‘The function of the hand is to help the organism cope with the environment; in situations in which a hand doesn’t work, we try something else, such as a foot or a fish-hook, or an editorial.’73 His point was that nobody worries about those situations where the hand doesn’t ‘fit’, doesn’t ‘relate to the world’. We use hands where they are useful, feet where they are useful, tongues where they are useful.
Dewey was of the opinion that ideas are much like hands: they are instruments for dealing with the world. ‘An idea has no greater metaphysical stature than, say, a fork. When your fork proves inadequate to eating soup, you don’t worry about the inherent shortcomings in the nature of forks; you reach for a spoon.’ Ideas are much the same. We have got into difficulty because ‘mind’ and ‘reality’ don’t exist other than as abstractions, with all the shortcomings that we find in any generalisation. ‘It therefore makes as little sense to talk about a “split” between the mind and the world as it does to talk about a split between the hand and the environment, or the fork and the soup.’ ‘Things,’ he wrote, ‘…are what they are experienced as.’74 According to Menand, Dewey thought that philosophy had got off on the wrong foot right at the start, and that we have arrived where we are largely as a result of the class structure of classical Greece. Pythagoras, Plato, Socrates, Aristotle and the other Greek philosophers were for the most part a leisured, ‘secure and self-possessed’ class, and it was pragmatically useful for them to exalt reflection and speculation at the expense of making and doing. Since then, he thought, philosophy had been dogged by similar class prejudices, which maintained the same separation of values–stability above change, certainty above contingency, the fine arts above the useful arts, ‘what minds do over what hands do’.75 The result is there for us all to see. ‘While philosophy pondered its artificial puzzles, science, taking a purely instrumental and experimental approach, had transformed the world.’ Pragmatism was a way for philosophy to catch up.
That pragmatism should arise in America is not so surprising, not surprising at all in fact. The mechanical and materialist doctrines of Hegel, Laplace, Malthus, Marx, Darwin and Spencer were essentially deterministic, whereas for James and Dewey the universe–very much like America–was still in progress, still in the making, ‘a place where no conclusion is foregone and every problem is amenable to the exercise of what Dewey called intelligent action’. Above all, Dewey felt that–like everything else–ethics evolve. This was a sharp deduction from Darwin, quickly reached and still not often enough appreciated. ‘The care of the sick has taught us how to protect the healthy.’76
William James, as we have seen, was a university man. In one capacity or another, he was linked to Harvard, Johns Hopkins and the University of Chicago. Like some nine thousand other Americans in the nineteenth century, he studied at German universities. At the time that Emerson, Holmes, the Peirces and the Jameses were developing their talents, the American universities were in the process of formation and so, it should be said, were the German and the British. Particularly in Britain, universities are looked upon fondly as ancient institutions, dating from medieval times. So they are, in one sense, but that should not blind us to the fact that universities, as we know them now, are largely the creation of the nineteenth century.
One can see why. Until 1826 there were just two universities in existence in England–Oxford and Cambridge–and they offered a very restricted range of education.77 At Oxford the intake was barely two hundred a year and many of those did not persevere to graduation. The English universities were open only to Anglicans, based on a regulation which required acceptance of the Thirty-Nine Articles. Both seats of learning had deteriorated in the eighteenth century, with the only recognised course, at Oxford at least, being a narrow classics curriculum ‘with a smattering of Aristotelian philosophy’, whereas in Cambridge the formal examination was almost entirely mathematical. There was no entrance examination at either place and, moreover, peers could get a degree without examination. Examinations were expanded and refined in the first decades of the nineteenth century but more to the point, in view of what happened later, were the attacks mounted on Oxford and Cambridge by a trio of critics in Edinburgh–Francis Jeffrey, Henry Brougham and Sydney Smith. Two of these were Oxford graduates and in the journal they founded, the Edinburgh Review, they took Oxford and Cambridge to task for offering an education which, they argued, was far too grounded in the classics and, as a result, very largely useless. ‘The bias given to men’s minds is so strong that it is no uncommon thing to meet with Englishmen, whom, but for their grey hair and wrinkles, we might easily mistake for school-boys. Their talk is of Latin verses; and, it is quite clear, if men’s ages are to be dated from the state of their mental progress, that such men are eighteen years of age and not a day older…’78 Sydney Smith, the author of this attack, went on to criticise Oxbridge men for having no knowledge of the sciences, of economics or politics, of Britain’s geographical and commercial relations with Europe. The classics, he said, cultivated the imagination but not the intellect.