By Henry VIII’s time, the open drains that ran down the streets, known as sewers because they ran seawards, were so smelly that the dumping of human waste into them was forbidden by law. Henceforth they were meant only for rainwater, and people were expected to deal with their own mess by building cesspits in their houses. And this of course brought its own problems. In 1660, Samuel Pepys wrote with masterly understatement that his neighbours’ ‘house of office’ had overflowed into his cellar, ‘which doth trouble me’.
By 1810, London had 200,000 cesspits and an army of euphemistically dubbed ‘nightsoil men’ whose job it was to empty them and sell the waste to farmers for manure. The space inside some cesspits and sewers was so cramped that the task was often delegated to young children – an occupation unimaginably more disgusting and dangerous than even shinning up and down a chimney. It must have been thoroughly unpleasant emptying chamber pots into these cesspits – a task left by many wealthier households to the unfortunate servants – and the smell emanating from the pits wafted through every house in the city.
No wonder then that the introduction of flush toilets in the early nineteenth century seemed like a godsend. No more chamber pots to empty, and the smell and the waste washed away cleanly in an instant. Every household that could afford it had a flush toilet installed as soon as possible. But there was a problem. Those flush toilets were emptying into the cesspits, and the massive amount of water they added soon filled the pits to overflowing. In some households, the foul soup simply bubbled up through the floorboards. From most it overflowed into the open drains, meant only for rainwater, and then on into the Thames.
It wasn’t just the smell and the sight that became unbearable. All this sewage was flowing into the very rivers and streams that supplied the city’s drinking water. Cholera and typhoid became rife. The link between cholera and contaminated drinking water was confirmed by the pioneering work of Dr John Snow after a terrible outbreak of cholera in the London district of Soho in 1854, which Snow traced to a water pump on Broad Street. After the completion of Bazalgette’s sewerage system, cholera, once a major killer in London, virtually disappeared from the city. The tragic effects of cholera and typhoid elsewhere in the world today, both associated with sewage-contaminated water supplies, are perhaps the most powerful testaments to the benefits of a good sewerage system.
Bazalgette’s scheme for London was not the first in the world, by any means. There is evidence of large-scale earthenware sewer pipes and brick-lined sewage drains in the Indus valley cities of 4,500–5,000 years ago. Harappa and Mohenjo-daro even had outdoor flush toilets linked to the sewage network. The Romans also had a complex sewage system, designed for them by the Etruscans in the sixth century BC and centred on the great underground drain, the Cloaca Maxima, which still exists today and was flushed clean continually by seven rivers. The Roman writer Pliny the Elder noted that many Romans regarded the city’s sewage system as its finest achievement.
Medieval Paris, too, had its own infamous underground drains, where Jean Valjean hides so memorably in Victor Hugo’s Les Misérables: ‘Let the reader imagine Paris lifted off like a cover, the subterranean network of sewers, from a bird’s eye view, will outline on the banks a species of large branch grafted on the river. On the right bank, the belt sewer will form the trunk of this branch, the secondary ducts will form the branches, and those without exit the twigs.’
But it is the combination of flush toilets with extensive networks of long, wide tunnels and smaller branching drains, pioneered by Bazalgette, that finally took the ordure out of city life. Sewage systems like this make cities hugely more pleasant places to live. Pongs are fleeting, occasioning a laugh or a distasteful sniff – not a continuous and often overwhelming fact of life. The physical bulk of ordure has vanished too, flushed away almost the instant it appears into hidden spaces beneath the city and then floated invisibly away beyond the city confines. And all the diseases linked to living with excrement are largely things of the past.
Even in a city of just a few hundred thousand people, excremental problems were thoroughly unpleasant without good sewerage. Today’s megalopolises would be unimaginable without it.
Of course, sewage systems haven’t actually solved the sewage problems; they’ve simply moved them elsewhere. The mantra of sewage companies is ‘the solution to ablutions is dilution’ but it doesn’t quite work. Some cities flush untreated sewage straight into the sea, where it causes algal blooms that can suffocate marine life, as well as putting swimmers and wildlife in danger of disease. But even those cities that treat sewage thoroughly before discharging it are left with the problem of how to dispose of the remaining moist ‘sludge’ which, unlike the nightsoil of the past, cannot be used as manure because it is so mixed in with industrial waste, chemicals from cleaning products, and much more. Some dump sludge far out to sea, but the detrimental effect on marine wildlife has led many countries to ban the practice. Sometimes it is put into landfill sites. No one yet has the perfect solution.
Despite these problems, good sewerage remains the blessing of modern cities, the thing which as much as anything distinguishes them from their filthy, smelly historic counterparts. As the fictional Forrest Gump said, ‘Shit happens’ – but sewerage makes sure you’re not stuck with it.
#8 The Scientific Method
At school, science students are often taught the Scientific Method as if there were a single, definitive way in which science can be approached – one which guarantees that the answer you find is scientifically valid. It’s a four-step process:
1. Make observations
2. Formulate a hypothesis
3. Test your hypothesis with experiments
4. Draw conclusions
With luck you can move to a Theory, which is a hypothesis that’s passed its experimental test a few times, and perhaps even arrive at a Law that can be proved by experiment again and again. It’s presented as a foolproof system, and the only proper way to do science.
The great philosopher of science Karl Popper (1902–1994) rejected this approach entirely. Popper, who ironically held the chair of Logic and Scientific Method at the London School of Economics, denied that there was such a thing as the scientific method – at least, not a scientific method in this form.
Popper argued that we can never test a hypothesis to the point of proof. You might, for instance, notice that a few stars are hot and come up with a hypothesis that all stars are hot. You might even find that you observe thousands of stars, and that they all turn out to be hot. But you could never observe every star – or even if you could, you could never be certain that you had – so you can never prove anything. The best approach therefore is to disprove ideas, not to try and prove them. Find one cold star, for instance, and your hot star hypothesis crashes down – so you have learned something: that not all stars are hot.
Popper asserted that this emphasis on falsifying a hypothesis is not only the most practical approach to science; it actually provides the test of whether a hypothesis is scientific or not. If a hypothesis cannot be refuted or proved false, Popper said, it is not scientific: ‘… science,’ Popper wrote, ‘is a history of corrected mistakes.’
A contemporary of Popper, the American physicist and philosopher of science Thomas Kuhn (1922–1996), had equally iconoclastic things to say about the authority of the scientific method and the accuracy of theories. Kuhn wasn’t even as positive as Popper in suggesting that science moved forward by correcting mistakes. Science, Kuhn argued, is not a transition from error to truth, but a series of crises that lead to dramatic changes in assumptions, which he called paradigm shifts. The paradigm is the consensus on which questions to ask and how to answer them, and is determined at any one time by social and cultural factors which may have nothing to do with logic. Paradigms block any challenge to the consensus, and it takes a real crisis to shift them.
Criticisms like Popper’s and Kuhn’s might make you think the scientific method is fatally flawed – and some critics have assumed so, leaping on Kuhn’s ideas in particular to discredit science. In fact, both men were deeply committed to science, and their scrupulous examination is part of the amazing strength of the scientific approach.[1] Science works because it is critical, self-questioning, systematic and continually tested. Only when it becomes complacent or careless does it really fail.
The scientific method really came into its own about four centuries ago, in what some people call the scientific revolution, and since then it has demonstrated again and again its power not only to explain the world around us in a convincing way but to use those explanations to both make accurate predictions and open the way to some impressive technology, from antibiotic drugs to lasers. Every scientist might have his or her own particular way of working, but they are all working within the same basic scientific approach.
The origins of this approach date back to the Ancient Greeks, when Aristotle emphasised the importance of observation – looking at the world as it actually is and trying to see what is going on, rather than relying on your imagination or divine inspiration. But the first clear exposition of a ‘method’ began to emerge in the Islamic world between the eighth and eleventh centuries.
The alchemist Jabir ibn-Hayyan was the great pioneer of experiments, and in particular the importance of accurate measurements, devising sensitive scales to weigh chemicals before and after each stage in an experiment. The polymath ibn al-Haitham (also known as Alhazen) not only made important breakthroughs in the science of optics, among other things, but created the first clear outline of a system for finding scientific knowledge, going from observation to hypothesis to experiment to conclusion. The idea that our knowledge of the world could be developed by systematic investigation was a key breakthrough. It seems so self-evident now, but it was a revolutionary insight.
Gradually the Islamic ideas about science filtered through and inspired thinkers in Western Europe. Oxford University became a focus for new scientific thinking in the thirteenth century under the tutelage of Robert Grosseteste,[2] who emphasised how mathematics could be integrated into ibn al-Haitham’s method. Grosseteste’s student Roger Bacon pointed out how scientific study would not only provide insights into the natural world, but a way to master it, remarkably foreseeing submarines, cars and aircraft.
The great sea-change, though, seems to have occurred in the early seventeenth century. For the first time, thinkers began to question the wisdom of the Ancients such as Aristotle and Galen which had been rediscovered at the beginning of the Renaissance. They began to believe that they might find their own answers by using the power of their own rational brains to study the real world and learn from it.
The English philosopher Francis Bacon (1561–1626) was cited as an inspirational figure by most of the great scientific minds that followed, including Newton. Bacon was scornful of false knowledge arrived at without observation and introduced his own hugely influential version of the method, in which the scientist makes patient[3] and careful observations, forms a theory, then tests it by rigorous experiment. He also emphasised that scientists couldn’t come up with the answers alone; they should form communities to share methods and research and subject their findings to scrutiny. This suggestion is perhaps Bacon’s most important legacy, still seen today in scientific conferences, collaboration on projects and perhaps most crucially in the peer-review process to which findings are subject before they are published in scientific journals.
The second great contributor to the theory of the Method was René Descartes. Descartes’ strength was never accepting things without question. In his Discours de la méthode (Discourse on the Method, 1637), he sets out how he arrived at answers: first, by always taking care to begin only with things he knew for certain; second, by dividing the problem up into parts; third, by proceeding only in the smallest and simplest steps; and fourth, by reviewing everything to ensure nothing was in error or omitted.
While Bacon and Descartes supplied the theory, it was Galileo Galilei who put it all into practice with the first great series of scientific experiments. The famous story of his plummeting cannonballs at Pisa – when he dropped two different cannonballs from the Leaning Tower to show they were both equally affected by gravity – is apocryphal, though one of his students tried it. But more significantly, he conducted a crucial series of experiments rolling balls down slopes to show that gravity accelerates things, and at a constant rate.
From Galileo’s time on, the scientific method was applied in various ways and with various degrees of diligence, but the real breakthrough was the change of attitude. Numerous people began to believe that they could learn by investigating the world, by thinking about their observations for themselves and by communicating their ideas to others for feedback and suggestions. The scientific approach liberated men’s and women’s minds from thinking that they needed to be told answers – by revelation or by the scholars of old. They realised they had the power to find answers for themselves. It tapped directly into natural human curiosity and got many people tremendously excited.
It took a while for the full impact of this ‘revolution’ to sink in, which is why not everyone agrees that it was such a revolution – more an evolution. Throughout the seventeenth and eighteenth centuries, science remained the pastime of a few gentlemen ‘natural philosophers’. But as the Industrial Revolution began to transform the world, so the true meaning of the scientific approach became clear. You didn’t have to be a philosopher to study and learn about the world; you could be a practical scientist, doing practical research and pursuing a career in academic and scientific institutions. The scientific method gave legitimacy to anyone who pursued it diligently. From the end of the eighteenth century onwards there was an explosion of scientific interest, a flood of a new breed of human beings called scientists and an accelerating avalanche of crucial scientific discoveries which have transformed both our world and our understanding of it.
[1] There was actually one more twentieth-century challenge to the scientific method which seems even more fundamentally damaging. That was Werner Heisenberg’s Uncertainty Principle of 1927. Heisenberg demonstrated that the scientist’s most basic method, observation, is deeply flawed. The very act of observing a subatomic particle changes it so that you can never be certain of two different properties, such as position and momentum, at the same time. Yet far from bringing down the science of subatomic particles, it proved to be the very cornerstone of the new science that became quantum mechanics, and has led to some fundamental insights into the nature of matter and energy.
[2] Grosseteste developed the method of induction – reasoning from observation. Peter Watson describes how, anticipating Descartes, Grosseteste advised students to break the phenomenon being studied into its principal parts, then build up knowledge from there. He cited the example of a rainbow, and how it could be seen in many places, not just in the sky, but in the spray from a water wheel or the splashes from oars. From this observation, Theodoric of Freiburg theorised that rainbows were caused by light refracting through individual drops of water.
[3] Patience is key, Bacon argued, and is something the best scientists such as Newton and Darwin showed in spades. Progress is made in small steps, from small details to ‘axioms’: ‘The understanding must not … be supplied with wings,’ Bacon wrote, ‘but rather hung with weights, to keep it from leaping and flying.’
#7 Evolution by Natural Selection
Charles Darwin’s theory of evolution by natural selection is shockingly simple. No two organisms ever come into the world quite alike, Darwin suggested. Occasionally, a slight difference, a special trait, gives a particular organism a crucial edge in the prevailing conditions – a better chance of surviving longer and passing on its characteristics to its progeny. And as this well-endowed organism and its descendants produce more progeny, others less well adapted to the conditions die out – a process vividly dubbed by Darwin’s contemporary Herbert Spencer the ‘survival of the fittest’.
This simple process, Darwin suggested, explains just how all the wonderful, teeming variety of life on Earth has descended from a common ancestor and how all the myriad different species that have ever lived evolved over time as natural variations became accentuated. In short, it explains the history of life on Earth entirely as a basic natural process.
Darwin was by no means the first to come up with the idea of evolution, or even the idea of evolution by natural selection. Over 2,500 years ago, for instance, the Ancient Greek thinker Empedocles suggested that the life we see on Earth today came about because the most fit survived while the others perished. And in ninth-century Baghdad, the Islamic thinker al-Jahiz wrote: ‘Animals engage in a struggle for existence [and] for resources, to avoid being eaten and to breed … Environmental factors influence organisms to develop new characteristics to ensure survival, thus transforming into new species. Animals that survive to breed can pass on their successful characteristics to [their] offspring.’
Indeed, during the late eighteenth and early nineteenth centuries, the discovery of fossil after fossil of dinosaurs and other long-extinct animals built up a weight of evidence that life on Earth had a long history, and that many species had come and gone through time. French natural historians such as Étienne Geoffroy Saint-Hilaire, Georges Cuvier and the Chevalier de Lamarck all developed their own ideas on evolution half a century before Darwin. Even Darwin’s grandfather, Erasmus Darwin, had a theory of evolution.
What made Darwin’s work so groundbreaking was that he provided a comprehensive, well-worked-out theory of how it all happened and, crucially, put the mechanism of natural selection at its heart. Alfred Russel Wallace, working in the Far East, came up with a similar idea, and both men’s theories were introduced to the scientific community at the same meeting in 1858. But by that time Darwin had spent two decades diligently building up a huge weight of evidence and developed his theory so much more completely than Wallace that it is rightly Darwin, not Wallace, to whom most of the credit goes.