The World's Greatest Idea


by John Farndon


  This is perhaps the crucial thing about the history of ideas, especially technology. There is a lot of truth to the old adage ‘necessity is the mother of invention’. Ideas tend not to pop up randomly from nothing; they are typically a response to a need. Or if they do emerge when they are not really needed, they are usually forgotten rather than setting a new agenda. Back in the first century AD, for instance, Hero of Alexandria experimented with steam power, but it was another 1,800 years before its benefits were really appreciated and it turned from a vague idea into a solid technology.

  The same is partially true of the wheel. Wheels were widely used in Roman times and the Middle Ages, both in watermills and for carts, but they weren’t necessarily key technologies. Civilisations could manage perfectly well without the wheel – a fact to which the early civilisations of the Americas bear witness. None of the great civilisations of the Americas used wheeled transport at all. It wasn’t even as if they didn’t know about the wheel. Charming little toy dogs and other animals on wheels have been found there dating back to 200 BC. But for the early Americans, wheels were just playthings. For real work, legs (both human and animal) were far more practical.

  It’s not that wheels are not a great idea. It’s just that their true worth has really become apparent only in the last two centuries, some 5,000–10,000 years after they were invented. Wheels came into their own in the Industrial Age, and it is no coincidence, perhaps, that the beginning of this age is called the Industrial Revolution. People drew their image for this dramatic change from the whirring, turning wheels that suddenly seemed to be everywhere – in every factory and every machine, and carrying along the trains that whisked goods to market and people to the new cities of the age.

  Wheels, crucially, enabled the shift from muscle power to machine power that marks out our modern industrial era, and with the present machine age, wheels have at last found their role. Wheels can be driven round by water, by steam power, by electric power, by petrol engines. In machines, their rotary motion means that an engine, which can push only in one or two directions, can go on and on pushing or pulling continuously, without moving from the spot. In transport, their rotary motion translates the push and pull of the engine into continuous movement along the ground.

  There is barely a machine that does not have a wheel somewhere. It may be the tiny wheel that keeps the fan in your computer blowing. It may be the invisible roller that snatches your credit card when you withdraw money from the ATM. Or it may be something much bigger, like the giant wheels on the earth shifters in Alberta’s tar sands pits, or the world’s biggest tourist wheel in Singapore (165 metres high). Yet perhaps the most crucial of all, nowadays, are the rotors of electricity generators. Without these wheels, we’d have no electric power at all. The transport of the future might run on cushions of magnetic levitation, and not wheels, but the electric motors and generators that drive them and generate the levitation still need to turn.

  [1] Johnny Hart’s B.C. cartoon strip brilliantly satirises this with his prehistoric wheel. A prehistoric inventor invents a stone wheel but, lacking any purpose, any roads, anything to transport, or any draught animals to pull him, he simply stands on the axle as if on a strange unicycle without pedals.

  [2] A cartwheel is not simply a potter’s wheel turned vertically. Scientifically, they are very different. A potter’s wheel simply keeps the clay always at the same place relative to the potter’s hand as it rotates. The cartwheel is essentially a lever. Imagine sticking a crowbar under a heavy box. You could lever it forward bit by bit. A cartwheel works by allowing you to do this continuously. Moreover, it allows you to use the cart’s own weight to help it along, because the wheel ‘falls’ forward once it is moving. It’s sometimes said that a wheel reduces friction. That isn’t true. A wheel relies on friction between the rim and the ground to keep turning. If there was no friction, it would simply skid. There is also friction between the wheel and the axle (hence the importance of greased bearings). But what it does is concentrate the friction into one small spot where the rim touches the ground, and use the full leverage of the wheel to overcome it. The bigger the wheel, the greater the leverage.
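
  To put a rough figure on that leverage, consider the standard textbook problem of pulling a wheel of radius $R$ over a step of height $h$ with a horizontal force $F$ at the axle, against a load of weight $W$ – a back-of-the-envelope sketch, not something from the text above. Taking moments about the edge of the step gives

  $$F\,(R - h) = W\sqrt{2Rh - h^{2}}, \qquad\text{so}\qquad F = W\,\frac{\sqrt{2Rh - h^{2}}}{R - h}.$$

  For a bump much smaller than the wheel ($R \gg h$) this is roughly $W\sqrt{2h/R}$: double the wheel’s radius and the force needed to roll over the same obstacle falls by about 30 per cent.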

  #12 Logic

  In the Star Trek TV series, the human characters all laugh indulgently at the Vulcan Spock because he is able to see things only from a logical point of view. The implication is that logic is a dull trap which humans, with their illogical bent, can escape with brilliant insight. It’s nonsense, of course. Logic has been deeply involved in pretty much every advance in human knowledge. Logic and reason are also the soundest of all guides against folly.

  As the American jurist Oliver Wendell Holmes put it memorably: ‘Reason means truth and those who are not governed by it take the chance that someday the sunken fact will rip the bottom out of their boat.’ Actually he was wrong to say that reason means ‘truth’, since reason and logic can only lead you to the right answer, not guarantee the truth, but his thrust was memorably clear. Logic and reason are the best possible protection against mistakes. You might believe that you won’t get run over if you stand in front of a speeding train because maybe the train will swerve around you … but logic will tell you that it won’t.

  We actually use logic all the time, sometimes well, sometimes badly, to help us make decisions, to persuade people, to try to understand things. You might say, for instance, to chivvy along someone who believes they have all the time in the world to get ready: ‘If we don’t go in five minutes, we’re going to miss the start of the show.’ But there is a big difference between this kind of casual logic and the much more disciplined logic that philosophers and mathematicians aim for.

  This disciplined or formal logic has rules that need to be followed carefully. Formal in this context doesn’t mean the opposite of casual; it means that it follows forms, or rather patterns, that guard you against error. Logic is essentially the science of analysing forms of argument and finding principles on which inferences can be made.

  An argument uses inference to move from accepted beginnings or ‘premises’ to a particular conclusion. If the premises are true, then how far the conclusion can be trusted depends on the strength of the inference. Logic’s central task is to weed out weak inferences from strong ones.

  Over 2,000 years ago, Aristotle set out the basic rules that guided logic until the last century. The central model of Aristotle’s logic was the syllogism. A syllogism is an argument containing three elements or ‘propositions’ – two premises and a conclusion. To cite the classic example: ‘All men are mortal; the Greeks are men; therefore the Greeks are mortal.’ The content is irrelevant; it’s the form that matters. It could just as easily be expressed in symbols: ‘All Xs are Y; Zs are Xs; therefore Zs are Y.’
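
  In modern symbolic notation – standard in any logic textbook, though not used in the passage above – the same pattern can be written with predicates, which makes the irrelevance of the content completely explicit:

  $$\forall x\,\bigl(M(x) \rightarrow D(x)\bigr),\quad \forall x\,\bigl(G(x) \rightarrow M(x)\bigr)\ \vdash\ \forall x\,\bigl(G(x) \rightarrow D(x)\bigr)$$

  Here $M$, $G$ and $D$ may be read as ‘is a man’, ‘is a Greek’ and ‘is mortal’ – or as anything else whatever; the inference goes through on the shape of the argument alone.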

  There is a crucial distinction between ‘deductive’ logic and ‘inductive’ logic, however. Deductive logic is concerned only with the form of an argument, never with the content. The idea is to find forms under which, if the premises are true, the conclusion is guaranteed to be true as well, regardless of the subject matter. If it works, the argument is said to be valid and, if the premises are also true, sound. Some logicians argue that this is the only kind of formal logic.

  Inductive logic infers from things that have been observed in the world. You might infer, for instance, that, because you have only ever observed white swans, all swans are white. This seems a reasonable proposition, based on what you’ve seen, but it is not a valid inference because the conclusion is not inevitable. Of course, further observation would show that some swans are black. So there is always scope for error with inductive logic.

  Inductive logic always seeks to extend the premises, whereas deductive logic draws all its information from the premises alone, simply reforming the terms. If your train to work has arrived five minutes late every day for the last six years, for instance, it is logical (inductively) to infer that it will arrive five minutes late today – but of course it is not, as it would be with deductive logic, inevitable. Sod’s Law will tell you that the one time in six years you are a few minutes late reaching the station, your train will actually be on time …

  Despite its shortcomings, inductive reasoning is pretty much essential for science and most other intellectual disciplines. Indeed, the vast majority of scientific laws are derived inductively. The eighteenth-century Scottish philosopher David Hume argued that in fact the whole idea of scientific laws is inductive rather than deductive.[1] We don’t know for certain, Hume pointed out, that nature is uniform and that things we’ve observed to be true will stay as they are. It’s essentially about probabilities. Just because things have always fallen to Earth with an acceleration due to gravity of 9.8 m/s², they may not always do so – but they probably will. The validity of inductive reasoning still provokes arguments among scientists and philosophers, but even if it is theoretically flawed, in practice it is indispensable.

  It’s a mistake to think that, because inductive logic lacks the internal certainty of deductive logic, its rules are loose. In some ways, its demands are even more exacting, because there are so many more potential pitfalls. Science, for instance, may start with observations, but the inferences a scientist makes from those observations must be logical, or the process fails. A meteorologist might observe that the driest months of the year are June and July, but it would be wrong, of course, to infer that this is because both months begin with the letter J.

  Logic can be used both to correct fallacies in scientific theories, when flaws in an argument are detected, and to create entirely new theories by logical extension. Even the most well-established and seemingly incontrovertible scientific truths can be adjusted like this. When Einstein spotted the limitations of Newton’s laws of motion and gravitation and then devised his theories of relativity, for instance, he did so entirely on the basis of logical argument. Only later were his arguments vindicated by actual evidence.[2]

  What makes logic so powerful and so crucial is that, if used well, it is an entirely objective teacher. The limitation is that we cannot always see the flaws in the logic. Indeed, many of us either consciously or subconsciously avoid looking for them if they seem to contradict our world view or make life difficult. That’s why it took 1,800 years after the Greek thinker Aristarchus first suggested it for the world to catch up and accept that the Earth travels around the Sun.

  It’s not just in science, of course, that we use and misuse logic. Logic is the way we order our thoughts and make sense of the world. As we grow up, our eyes learn to interpret things we see, from recognising particular arrangements of lines and shadows as three-dimensional boxes to identifying faces as friendly or hostile. So logic helps us find our way through the world. Logical argument, if our use of logic is good, is also an invaluable way of checking whether our insights, however arrived at, are actually correct. And, of course, it can be a very effective way of justifying our ideas to other people. Indeed, logic can seem so powerfully persuasive that all of us bring it into our everyday conversation all the time.[3]

  Immanuel Kant insisted that even casual reasoning should be considered logic. Logic, he argued, is simply the science of good judgement. Others, however, insist that only the formal discipline is real logic. Most people, in their everyday reasoning, are influenced by Aristotle’s formal logic with its syllogisms. Over the last two centuries, however, formal logic has developed in new directions that are much less familiar to those outside academic circles, but have taken it into new and important areas.

  One of the key breakthroughs was the German mathematician Gottlob Frege’s attempt to deal with the problem of ‘multiple generality’ in the 1870s. Classical logic can deal well with unambiguous quantities and connectives such as ‘if … then’, ‘and’ and ‘not’, but it was never really understood just why it often broke down with generalities such as ‘some’ and ‘every’. We can instantly see there’s something wrong with the following proposition: If every boy loves games of football, then some games of football are loved by every boy. But classical logic finds it hard to say just why this doesn’t work. The problem is partly with the language. Frege’s solution was his idea of Begriffsschrift (concept writing), in which he developed a system of symbols entirely independent of language, such as ‘quantifiers’ (symbols for words such as every, some, most and so on).
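
  Written with modern quantifier symbols – the notation is today’s, but the idea is Frege’s – the fallacy becomes visible at a glance, because the two halves of the sentence put the quantifiers in different orders:

  $$\forall b\,\exists g\,L(b,g) \quad\not\Rightarrow\quad \exists g\,\forall b\,L(b,g)$$

  where $L(b,g)$ reads ‘boy $b$ loves game $g$’. Every boy may love some game or other without there being any one game that all the boys love; natural language blurs the two readings, but the symbols cannot.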

  A few decades earlier, the English mathematician George Boole had developed an entirely mathematical system of logic, now called Boolean logic, which was to form the basis of operations in all modern computers. Without this kind of logic, there would be no computers. The logic of Boole and Frege also had a profound influence on the direction of modern philosophy, and in particular on the development of an approach called analytic philosophy, with its emphasis on complete clarity and logic in arguments, which dominated the English-speaking world in the last century.
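
  As a small taste of what Boolean logic looks like in practice, here is a truth table for one compound proposition – a minimal illustrative sketch in Python, not anything of Boole’s own. At bottom, a computer’s circuits evaluate billions of exactly such AND/OR/NOT combinations every second:

    from itertools import product

    def proposition(p: bool, q: bool) -> bool:
        """A sample compound proposition: (p AND q) OR (NOT p)."""
        return (p and q) or (not p)

    # Enumerate every combination of truth values: a truth table.
    print(" p      q      (p AND q) OR (NOT p)")
    for p, q in product([True, False], repeat=2):
        print(f" {p!s:<6} {q!s:<6} {proposition(p, q)!s}")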

  It’s easy to see the limitations of logic. As Blaise Pascal said: ‘The last function of reason is to recognize that there are an infinity of things which surpass it.’ It is easy to see the absurdities, too, as Tweedledee demonstrates so eloquently in Lewis Carroll’s Through the Looking Glass. ‘“Contrariwise,” continued Tweedledee, “if it was so, it might be: and if it were so it would be; but as it isn’t, it ain’t. That’s logic.”’ Yet it remains at the heart of human thought and progress, and the best possible corrective to some of the wilder follies of the imagination.

  [1] There is a philosophical argument beyond this about the validity of logic itself. In the seventeenth century, Descartes intended to lay a bedrock, the ultimate irreducible logical statement about existence, with his famous cogito ergo sum, ‘I think, therefore I am’. But philosophers since have criticised his logic. They say that he infers too much from his statement. Descartes is, for instance, not logically entitled to infer that it is he who is thinking; only that there is thinking going on. And could you be fooled into thinking that other people are thinking when in fact it’s just you? Moreover, Descartes goes on from this to assert that the clarity of logic is evidence that it must be true, and that logic is true because God would not deceive us so fundamentally. This, many people have argued, is a circular argument, which is why Descartes’ cogito ergo sum has in some ways become a symbol, ultimately, of uncertainty rather than the basic bedrock he intended.

  [2] Interestingly, even the brilliant Einstein failed to follow through with his logic in his General Theory of 1915 when it seemed to contradict his ‘common sense’ view that the universe is unchanging, adding an abstract ‘cosmological constant’ to balance the inward pull of gravity in the universe. In 1922, the young Russian mathematician Alexander Friedmann did follow the logic of Einstein’s theory through and, rather than adding the illogical cosmological constant, concluded that the reason the universe is not collapsing under its own gravity is that it is being flung apart. By logical extension, Friedmann reasoned that the universe must have started as a tiny point, then expanded. Even Einstein pooh-poohed this at the time. But seven years later, observations by the astronomer Edwin Hubble showed that the universe really is expanding, just as Friedmann had argued, and the Big Bang theory is now well established. Einstein was the first to admit that he had been wrong. Sadly, Friedmann had died of typhoid four years earlier.

  [3] This everyday logic is prone to fallacies which a trained logician would spot instantly, but which most of us slip into (or get away with) all the time. There are false dilemmas: ‘Either we put a cap on immigration or we’ll all be out of jobs’, which wrongly implies that there are only two alternatives. There are slippery slopes: ‘If we legalise cannabis, youngsters will move on to hard drugs and the streets will soon be littered with drug addicts’, which wrongly implies that one is the logical extension of the other. There are straw men: ‘Global warming cannot be happening because it is colder this summer than it has been for a century’, which sets up a false target in order to knock it down. But if these pitfalls can be avoided, logic is one of the best ways of finding common ground and finding good, practical solutions to problems that work for all.

  #11 Hope

  In the myth of Pandora’s box, the Ancient Greeks pinpointed Hope as the gods’ one crucial gift to help us deal with all the evil and suffering that Pandora had unwittingly unleashed on the world. But hope is a slippery customer. You might hope that it’s going to be sunny this afternoon, only to be soaked to the skin when you go out without a coat. You might hope that the ring on the door means the postman is bringing your birthday present, only to find it’s just a bill. You may hope that the financial crisis is going to end, only to discover that the stock market has plummeted and the currency is nose-diving.

  Hope, then, is a deceiver. It fills you with false expectations and then fails to deliver. So why should we think hope a good thing? Surely it is better to be realistic, accept that things are as they are, and deal with the hard present rather than getting diverted by a fanciful future or, worse still, a sentimental past?

  Many of the Ancient Greeks believed fate decreed that life is unchangeable. So hope must be an illusion, and evil. To the playwright Aeschylus, hope was ‘the food of exiles’. To Euripides, it was ‘man’s curse’. Two and a half thousand years later, Friedrich Nietzsche infamously said with startling bleakness in Human, All Too Human (1878): ‘In reality, hope is the worst of all evils, because it prolongs man’s torments.’

  Ever since, Nietzsche has been seen as a dismal black raven of despair – or a comic Cassandra croaking, ‘Doomed, we’re all doomed!’ But actually, this is unfair to Nietzsche. He wasn’t saying hope is pointless at all. What he was railing against was what he saw as the false hope held out by Paul in the Christian Bible; the Paul who says that the three great gifts given to mankind are faith, hope and love. That hope, Nietzsche said, was a delusion which prevented action in the present with promises for the future – and which actually masked a desire for revenge on those who had oppressed the Christians. Worse, it destroyed real hope by its deceit.

 
