Whether you agree with Nietzsche or not, he alerts us to a key problem with hope. Pinning hopes on the wrong thing appears to be more damaging than no hope at all. At best, you can appear a misguided fantasist, or a foolish dreamer like Don Quixote, who hopes his vision of the world is a reality. At worst, it can make you into a Hitler.
The dilemma is how to tell good hope from bad. This may be a question of evaluation. Millions of people enter lotteries around the world on a regular basis. Of course, when they buy their ticket, they hope to win. The odds vary from lottery to lottery, but typically someone buying one ticket a week might expect to win the jackpot once every 250,000 years – by which time they would have spent far, far more than the jackpot is worth (even if they had the gifts of immortality and staggering patience). It is possible to calculate these odds, and so evaluate how realistic the hope of winning is. You might know the odds and still choose to go ahead, because you have no other hope of escaping poverty, or because the cost of a ticket is, for you, negligible.
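As a rough illustration of that arithmetic – assuming jackpot odds of about one in 14 million, as in a classic six-from-49 draw, and one ticket a week; none of these figures comes from any particular lottery – the calculation might look like this:

```python
from math import comb

# A back-of-envelope version of the '250,000 years' claim.
# Assumes a classic six-from-49 draw and one ticket a week; the text
# names no specific lottery, so these figures are purely illustrative.
jackpot_odds = comb(49, 6)          # ways to pick 6 numbers from 49: 13,983,816
tickets_per_year = 52               # one ticket a week
years_to_expected_win = jackpot_odds / tickets_per_year
print(f"1 in {jackpot_odds:,} per ticket -> "
      f"roughly {years_to_expected_win:,.0f} years to expect a jackpot")
# Prints roughly 268,920 years - the same order of magnitude as the figure above.
```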
However, it is not always easy to evaluate hope. In hindsight, Leonardo da Vinci, with his fantastic drawings of machines such as his ornithopter (flying machine), seems like a prescient genius. But what if such machines had never come to pass? His contemporaries must have thought some of his ideas rather nutty, if they ever got to see them – and maybe they would have been right.
Perhaps, though, the crucial point about hope is not so much whether it’s realistic or unrealistic, but its effect on motivation and confidence. A recent study showed that college students who were low in hope in their first year performed significantly worse in their degree exams three years later than other students, even after controlling for intelligence, other personality traits and previous academic performance. It’s no accident that the word ‘hopeless’ is also used to mean useless.
Similarly, a medical study of lung cancer patients reported in March 2010 seemed to confirm what many doctors had thought – that patients do better if they are hopeful. The study showed that over five years, optimistic patients diagnosed with lung cancer survived over six months longer than pessimistic patients. Other studies have shown that people who are low on hope tend to be anxious and depressed.
What this seems to show is that looking at things entirely pragmatically doesn’t work for us. It makes us ill, or unable to perform. T.S. Eliot put it poignantly in Burnt Norton: ‘… human kind / Cannot bear very much reality.’ But maybe it’s simpler than that; we need to believe things can get better to motivate us to do anything different – to give us the will to act.
Sometimes, it’s almost impossible to assess future possibilities accurately. Sometimes, the future possibilities seem to suggest no chance of change for the better. Sometimes a plan or idea seems too absurd to be realistic. But hope can at least provide the motivation to do something.
What hope does on a day-to-day level is give you a sense of a better time to come, and thereby bring that better time into the present. It’s very hard to be motivated by a plan alone, however logical and well worked out; we are driven by our emotions. What hope does is create the emotion in the present that makes us take the first step towards realising that plan.
Interestingly, when Barack Obama was elected as US president in 2008, a key strand in his campaign was the banner of hope. His slogan, ‘Yes We Can’, was the embodiment of hope, and it was a great motivator. Millions of people in the USA who had never voted before went to the ballot box because Obama created a real sense of hope of change for the better. When the award of the Nobel Peace Prize to Obama quite early in his presidency was criticised, the Minnesota Post defended it like this: ‘The spirit of hope is a powerful thing. To ridicule it is to diminish us as human beings. The Nobel Prize was awarded this year for a message of hope that essentially fed the world’s commonality – its soul.’
As Martin Luther King said: ‘Everything that is done in the world is done by hope.’ He didn’t mean everything, of course; he just meant the righting of wrongs. But that’s quite a lot.
#10 Computer Programming
The computer was one of the great breakthrough inventions of the last century, and new ways to exploit its amazing power are being found every day. Today we take for granted the kind of basic processing that gives our mobile phones the power to find a nearby restaurant in seconds, or to create an image and send it instantly to a friend on the far side of the world. Now computers are being built to model the flow of the world’s oceans in their entirety, or to create virtual avatars into which, one day, we may be able to download our entire personalities, bringing us a weird kind of electronic immortality.
The computer actually combines two different technologies: calculation and automation. It is automation which has really given the computer wings, and automation which takes computer technology into fields far wider than simple desktop mathematics.
People have been using calculating devices for thousands of years. Fingers and stones, tally sticks, abacuses and all kinds of other things have been used to aid calculation – and in the right hands an abacus could reckon complex sums much faster than early computers. In the seventeenth century, Blaise Pascal constructed the first mechanical calculator, called the Arithmetic Machine, to help his father do accounts, though it could only add and subtract. In 1671, Gottfried Leibniz created the Staffelwalze (‘step-reckoner’) so that ‘excellent men [would not] lose hours like slaves in the labour of calculation, which could be safely relegated to anyone else if machines were used’.
Some of these early calculating devices were exquisite and ingenious but were limited in scope and prone to error, both because they could give a wrong reading and because human input was needed at every step. That’s where the early nineteenth-century genius of Charles Babbage came in. Babbage’s idea was to create a calculating machine that worked completely automatically, and so did away with human error. He wasn’t necessarily the first to think of this, but he was the first to try to make it a practical reality with his Difference Engine – although even the Difference Engine never got further than a small-scale demonstration model.
Automation, too, had a long history before Babbage’s day, as inventive minds used clever arrangements of wheels, cogs, levers and water pipes to make machines and toys perform tasks without any human intervention. Hero of Alexandria was making automatic devices almost 2,000 years ago, while back in ninth-century Baghdad, the colourful Banu Musa brothers were creating amazing fountains that changed shape by the minute, clocks with all kinds of little gimmicks, trick jugs, flutes that played by themselves, water jugs that served drinks automatically, and even a full-size mechanical tea girl that actually served tea. By making clever use of one- or two-way self-closing and opening valves, devices for delaying action and responding to feedback, and simple mechanical memories, they created automatic control systems which are no different in essence from modern automatic machines. They used mainly water under pressure rather than electronics, but many of the operating principles are the same.
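To make that comparison concrete, here is a minimal modern sketch of the same feedback principle – a self-regulating water tank whose ‘float valve’ opens and shuts itself without human intervention. The tank, flow rates and units are hypothetical, chosen purely for illustration:

```python
# A minimal sketch of simple on/off feedback control: a water tank whose
# 'float valve' opens when the level is low and shuts at the target level.
# All quantities are hypothetical and for illustration only.

def simulate_float_valve(steps=15, level=0.0, target=10.0,
                         inflow=3.0, outflow=1.0):
    valve_open = True
    for step in range(steps):
        if valve_open:
            level += inflow                    # water pours in while the valve is open
        level = max(0.0, level - outflow)      # steady draw-off at the serving spout
        valve_open = level < target            # feedback: the float senses the level
        state = 'open' if valve_open else 'shut'
        print(f'step {step:2d}  level {level:5.1f}  valve {state}')

simulate_float_valve()
```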
For a long while, though, automation seemed nothing more than a gimmick, creating clever toys to amuse the rich. It wasn’t until the Industrial Revolution that it began to come into its own, with things like the punch card for the Jacquard loom, which used a pattern of holes to guide the loom automatically through particularly complex weaving patterns in silk.
This is where Babbage comes in again. While the construction of Babbage’s Difference Engine was proceeding in fits and starts in the 1830s, he was working on ideas for another machine, which he called the Analytical Engine. The Analytical Engine, though never more than an idea on paper, was a huge conceptual leap from the Difference Engine.
The Difference Engine was essentially a sophisticated calculator, with cogs and wheels to perform the operations; the Analytical Engine was a programmable ‘mind’ that could solve problems and learn from the solution how to solve them better in future. If it had ever been built, the Analytical Engine would have combined calculation with automation for the first time, and so created the first programmable computer. What Babbage needed was a way to programme his machine with instructions – and in the punch cards of the Jacquard loom, he realised in 1836, he had the answer. Punch cards could be used not only to control the working of his calculating machine, but to record results and calculation sequences permanently. In other words, they could act as a memory.
Few people grasped the significance of Babbage’s ideas. So it was with some gratitude that in 1843 he found a fan in Ada Lovelace, daughter of the poet Lord Byron. Ada was a self-proclaimed genius at maths and has been called the first computer programmer. It’s hard to say just how much she actually contributed, but she clearly grasped the significance of Babbage’s programmable ‘mechanical brain’ and its full possibilities, writing: ‘Many persons … imagine that because the Engine is to give its results in numerical notation, the nature of its processes must consequently be arithmetical and numerical … This is an error. The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols.’ She suggested it could ultimately be used for anything from playing chess to writing music.
In the end, both Babbage’s and Ada Lovelace’s ideas had to wait another century before they became anything like reality. This was partly because there was no real demand for such a machine – human ‘computers’ could be hired far more cheaply to do routine calculations – and partly because mechanical cogs and punch cards simply didn’t have the speed and accuracy needed, which was finally achieved with the electronic transistors and microprocessors that turned the computer into a reality in the mid-twentieth century.
It would be wrong to give sole credit to Babbage and Lovelace for the invention of computing, or even computer programming. There were many, many people who contributed, both before and since. But the nub of the central idea in computing – the idea embodied in Babbage’s version of the punch card, the combination of automation and memory – is all there. It’s this combination that gives the computer and its spin-offs such tremendous possibilities.
The idea can be summarised as the computer program, but that small term gives only a hint at the scope of the idea. Of course, programming has come a long way since Babbage’s punch card, and now involves both pre-printed circuits and a slew of ever more sophisticated programming languages that direct a computer through binary code, the sequences of 0s and 1s, or ons and offs in its electronic circuitry.[1] But the principles are the same. On a very basic level, programming means giving a machine instructions so that it can run itself. But if the program includes instructions for the machine to react to new input in particular ways, then it can learn and start to create its own instructions. Then the real potential of the computer and automated machines begins to be unleashed.
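A tiny, hypothetical sketch may make that distinction concrete. The first part is a fixed list of instructions the machine simply runs through; the second reacts to new input, adjusting its own rule as readings arrive – a crude hint of what ‘learning’ means here. None of the names or numbers refers to any real system:

```python
# A hypothetical sketch of the distinction drawn above; nothing here
# refers to a real system.

# 1. Fixed instructions: the machine simply runs through its list.
instructions = ['fill kettle', 'boil water', 'pour tea']
for step in instructions:
    print('do:', step)

# 2. Instructions that react to new input: the program keeps a running
#    estimate and adjusts its own rule as readings arrive - a first,
#    crude step towards what is loosely called 'learning' above.
threshold = 20.0                                   # initial guess
for reading in [18.0, 19.5, 22.0, 23.5]:
    threshold = 0.8 * threshold + 0.2 * reading    # update the rule from the data
    action = 'heating off' if reading > threshold else 'heating on'
    print(f'reading {reading:4.1f}  threshold {threshold:4.1f}  -> {action}')
```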
The scope of such programs is, in many ways, limited only by the imagination of the programmer and the capabilities of the materials and operating systems involved. We already have robot vehicles that can explore distant planets and remote oceans. We have the internet, which can instantly link up data from billions of computers around the world to act as a giant brain. Scientists are developing minute nano-machines that may be able to ferry drugs through the blood vessels to their targets, or carry out operations from inside the body.
In 2009, designers created a ‘reconfigurable’ supercomputer, the Novo-G, which can rearrange its internal circuitry to suit the task in hand, unlike the ‘fixed logic’ pattern of ordinary computers. That means it can turn its hand from controlling satellites to predicting the world’s climate instantly, all at very high speed. Some computers are becoming so adept at language recognition that translators and secretaries may soon find themselves out of a job. Other computers are being programmed to write music or books or run rail networks or drive cars, or create imaginary worlds so convincing that we may soon find it hard to tell reality from the virtual.
Programs can create not just computers but machines of all kinds that can, in theory, do almost anything we want, and the true scope is only just beginning to be realised. Perhaps the big limitation, though, remains the programmers. Bill Bryson summed up the pitfalls perfectly in Notes from a Big Country: ‘For a long time it puzzled me how something so expensive, so leading edge, could be so useless. And then it occurred to me that a computer is a stupid machine with the ability to do incredibly smart things, while computer programmers are smart people with the ability to do incredibly stupid things. They are, in short, a perfect match.’ And maybe it is worth asking just what we really need, or want, from these stupid machines.
[1] The binary code of the computer, with its 0s and 1s, is sometimes known as ‘machine code’, betraying its wider potential in all kinds of machines. A computer is, essentially, a machine that can respond only to very simple instructions – it can, ultimately, only be told to switch on or off in the right sequence. That’s what machine code does.
With the first programmable computers in the 1940s, the designers had to enter every 0 and every 1 laboriously in exactly the right order to programme the computer. Very soon, though, computer designers developed ‘assembly codes’ which allowed programmers to work with shorthand versions of the machine code that could be entered using text commands. But even assembly code meant the programmer had to spell out every procedure in exact and exhaustive detail, making the process very slow and prone to error, and every kind of computer had to be programmed individually.
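An invented example may help. The ‘machine’ below understands only two raw instructions – on and off – while the assembly-style shorthand lets a programmer type readable commands that are translated, one for one, into those raw 0s and 1s. It is a toy, not any real processor’s instruction set:

```python
# An invented two-instruction 'machine' - not any real processor - to
# illustrate the point: at bottom, the hardware only switches on or off.

MACHINE_CODE = {'0': 'switch OFF', '1': 'switch ON'}   # the raw 0s and 1s
ASSEMBLY = {'OFF': '0', 'ON': '1'}                     # shorthand text commands

def assemble(text_program):
    """Translate assembly-style shorthand into raw machine code."""
    return ''.join(ASSEMBLY[cmd] for cmd in text_program)

def run(machine_program):
    for bit in machine_program:
        print(MACHINE_CODE[bit])

program = ['ON', 'OFF', 'ON', 'ON']    # what the programmer writes
run(assemble(program))                 # what the machine actually executes
```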
The breakthrough was the development of ‘higher-level’ languages, which allow the programmer to write the program – known as the source code – in more familiar, abstract terms. A ‘compiler’ program then translates the instructions into the machine code that the computer can understand. One of the first higher-level languages was FORTRAN, created by John Backus and his team in the late 1950s. FORTRAN stood for ‘formula translation’, because it allowed programmers simply to input mathematical formulae, rather than spelling them out in binary code. The beauty of these higher-level languages was that they worked on many different computers, because each computer’s own compiler program could translate them into its particular machine code. (Interestingly, these early programs were still fed into the computer using punch cards, just as instructions for Babbage’s Analytical Engine would have been back in the 1840s.)
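In the same toy spirit as the sketch above, a ‘compiler’ can be pictured as a translator that takes a formula written the way a person would write it and breaks it into simple step-by-step instructions. The mini-language and instruction names below are invented for illustration; real compilers are vastly more sophisticated:

```python
# A toy 'formula translator' in the spirit of FORTRAN's name. The
# mini-language and instruction names are invented for this sketch.

def compile_formula(formula):
    """Turn 'result = a * b + c' style formulae into step-by-step
    instructions, giving * priority over + as usual."""
    target, expression = [part.strip() for part in formula.split('=')]
    steps = []
    for term in expression.split('+'):
        factors = [f.strip() for f in term.split('*')]
        steps.append(('MULTIPLY', factors) if len(factors) > 1 else ('LOAD', factors))
    steps.append(('ADD_RESULTS',))
    steps.append(('STORE', target))
    return steps

for instruction in compile_formula('area = width * height + border'):
    print(instruction)
```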
Since FORTRAN, computers have made giant leaps in their processing power, enabling programming languages to become more and more sophisticated, and more and more abstracted from machine code and the computer’s basic hardware. Computers, of course, are now supplied with a huge range of ‘software’, programs that give their hardware particular instructions. Although advances in technology are crunching ever more processing power and speed into smaller units, and new developments such as ‘quantum’ and ‘light’ computing promise even more amazing capabilities, these still remain simple machines. It is up to the instructions – the programs and software – to realise these simple machines’ staggering potential.
#9 Sewerage
No one who lived near the Thames in London in the baking summer of 1858 would have doubted that sewerage was the greatest idea ever. Thousands of newly installed flush toilets all over the city were spilling millions of gallons of ordure-filled water directly into the Thames, and, cooked by the June heatwave, the river began to smell absolutely foul – so foul that the summer was remembered as the Great Stink. ‘Gentility of speech is at an end – it stinks; and whoso once inhales the stink can never forget it and can count himself lucky if he lives to remember it’, the newspapers reported.
People living near the river left for the country if they could or covered every window and door in perfume-soaked curtains. In the Houses of Parliament, right by the river, the stench was unbearable, despite the sheets soaked in chloride of lime covering the windows, and MPs contemplated moving the entire government to the country. As they held their noses and tried to debate, a new law to create a massive sewerage system to solve the problem was drafted and passed in just a little more than a fortnight. ‘Parliament was all but compelled to legislate upon the great London nuisance by the force of sheer stench’, The Times wrote.
The scheme they voted for was the groundbreaking work of Joseph Bazalgette. London’s sewage had previously fed into ditches, some covered, some open, which slurped directly into the Thames. In places south of the river, these ditches were much lower than the high-tide mark, so the stinking soup backed up for several hours each day. Bazalgette’s great scheme was to intercept these ditches with massive drains before they reached the Thames – including two running along either side of the river under the newly built Victoria and Albert embankments – and to carry the sewage far to the east of the city, where it would be discharged from vast tanks into the Thames estuary at low tide.
Building these great sewers was a vast undertaking, involving 318 million bricks, 880,000 cubic yards of concrete and mortar, and the excavation of 3.5 million cubic yards of earth. To them were added over 450 miles of smaller main sewers, which together received the contents of 13,000 miles of local drains, flushing through half a million gallons of waste a day. It was an astonishing engineering feat, and it worked remarkably well, entirely removing the incredible stench of ordure that had pervaded London’s houses and streets for centuries. It became a model for sewerage systems in cities around the world.
For those of us living in modern cities today, where human waste is quickly flushed down the toilet, apparently never to be seen or smelled again, it is astonishing to think just how much filth people once literally lived with. It was bad enough back in the Middle Ages, when people walking through the city streets in the morning regularly had to duck to avoid the contents of night pots emptied from upstairs windows, and the streets were slippery with human and animal manure. But as cities like London grew, the problem got worse.