The one I have chosen is urbanism. Perhaps that will seem odd; after all, the fact that London was a big place does not directly reflect Lord Melbourne’s revenue flows or the Royal Navy’s command structure. On further reflection, though, I hope the choice will seem less odd. It took astonishing organization to support a city of 3 million people. Someone had to get food and water in and waste products out, provide work, maintain law and order, put out fires, and perform all the other tasks that go on, day in, day out, in every great city.
It is certainly true that some of the world’s biggest cities today are dysfunctional nightmares, riddled with crime, squalor, and disease. But that, of course, has been true of most big cities throughout history. Rome had a million residents in the first century BCE; it also had street gangs that sometimes brought government to a halt and death rates so high that more than a thousand country folk had to migrate into Rome every month just to make up the numbers. Yet for all Rome’s foulness (brilliantly evoked in the 2005 HBO television series Rome), the organization needed to keep the city going was vastly beyond anything any earlier society could have managed—just as running Lagos (population 11 million) or Mumbai (population 19 million), let alone Tokyo (population 35 million), would have been far beyond the Roman Empire’s capabilities.
This is why social scientists regularly use urbanism as a rough guide to organizational capacity. It is not a perfect measure, but it is certainly a useful one. In our case, the size of a society’s largest cities has the extra advantage that we can trace it not only in the official statistics produced in the last few hundred years but also in the archaeological record, allowing us to get an approximate sense of levels of organization all the way back to the Ice Age.
As well as generating physical energy and organizing it, the British of course also had to process and communicate prodigious amounts of information. Scientists and industrialists had to transfer knowledge precisely; gunmakers, shipbuilders, soldiers, and sailors increasingly needed to read written instructions, plans, and maps; letters had to move between Asia and Europe. Nineteenth-century British information technology was crude compared to what we now take for granted (private letters needed three months to get from Guangzhou to London; government dispatches, for some reason, needed four), but it had already advanced far beyond eighteenth-century levels, which, in turn, were well ahead of the seventeenth century. Information processing is critical to social development, and I use it as my third trait.
Last but sadly not least is the capacity to make war. However well the British extracted energy, organized it, and communicated, it was their ability to turn these three traits toward destruction that settled matters in 1840. I grumbled in Chapter 1 about Arthur C. Clarke equating evolution with skill at killing in his science-fiction classic 2001: A Space Odyssey, but an index of social development that did not include military power would be no use at all. As Chairman Mao famously put it, “Every Communist must grasp this truth: ‘Political power grows out of the barrel of a gun.’” Before the 1840s, no society could project military power across the whole planet, and to ask who “ruled” was nonsense. After the 1840s, though, this became perhaps the most important question in the world.
Just as with the UN’s human development index, there is no umpire to say that these traits, rather than some other set, are the ultimate way to measure social development, and again like the UN index, any change to the traits will change the scores. The good news, though, is that none of the alternative traits I have looked at over the last few years changed the scores much, and none changed the overall pattern at all.*
If Eddington had been an artist he might have been an Old Master, representing the world at a level of detail painful to behold. But making an index of social development is more like chainsaw art, carving grizzly bears out of tree trunks. This level of roughness and readiness would doubtless have turned Einstein’s hair even whiter, but different problems call for different margins of error. For the chainsaw artist, the only important question is whether the tree trunk looks like a bear; for the comparative historian, it is whether the index shows the overall shape of the history of social development. That, of course, is something historians will have to judge for themselves, comparing the pattern the index reveals with the details of the historical record.
Provoking historians to do this may in fact be the greatest service an index can perform. There is plenty of scope for debate: different traits and different ways of assigning scores might well work better. But putting numbers on the table forces us to focus on where errors might have crept in and how they can be corrected. It may not be astrophysics, but it is a start.
HOW TO MEASURE?
Now it is time to come up with some numbers. It is easy enough to find figures for the state of the world in 2000 CE (since it is such a nice round number, I use this date as the end point for the index). The United Nations’ various programs publish annual statistical digests that tell us, for instance, that the average American consumes 83.2 million kilocalories of energy per year, compared to 38 million for the average person in Japan; that 79.1 percent of Americans live in cities, as against 66 percent of Japanese; that there are 375 Internet hosts per thousand Americans but only 73 per thousand Japanese; and so on. The International Institute for Strategic Studies’ annual Military Balance tells us, so far as it can be known, how many troops and weapons each country has, what their capabilities are, and how much they cost. We are drowning in numbers. They do not add up to an index, though, until we decide how to organize them.
Sticking to the simple-as-possible program, I set 1,000 points as the maximum social development score attainable in the year 2000 and divide these points equally between my four traits. When Raoul Naroll published the first modern index of social development in 1956 he also gave equal points to his three traits, if only, as he put it, “because no obvious reason appeared for giving one any more weight than another.” That sounds like a counsel of despair, but there is actually a good reason for weighting the traits equally: even if I thought up reasons to weight one trait more heavily than another in calculating social development, there would be no grounds to assume that the same weightings have held good across the fifteen-thousand-plus years under review or have applied equally to East and West.
Having set the maximum possible score for each trait in the year 2000 at 250 points, we come to the trickiest part, deciding how to award points to East and West at each stage of their history. I will not go step by step through every calculation involved (I summarize the data and some of the main complexities in the appendix at the end of this book, and I have posted a fuller account online),* but it might be useful to take a quick look inside the kitchen, as it were, and explain the procedure a bit more fully. (If you don’t think so, you can of course skip to the next section.)
Urbanism is probably the most straightforward trait, although it certainly has its challenges. The first is definitional: Just what do we mean by urbanism? Some social scientists define urbanism as the proportion of the population living in settlements above a certain size (say, ten thousand people); others, as the distribution of people across several ranks of settlements, from cities down to hamlets; others still, as the average size of community within a country. These are all useful approaches, but they are difficult to apply across the whole period we are looking at here, because the nature of the evidence keeps changing. I decided to go with a simpler measure: the size of the largest known settlement in East and West at each moment in time.
Focusing on largest city size does not do away with definitional problems, since we still have to decide how to define the boundaries of cities and how to combine different categories of evidence for numbers within them. It does, though, reduce the uncertainties to a minimum. When I played around with the numbers I found that combining largest city size with other criteria, such as the best guesses at the distribution of people between cities and villages or the average size of cities, hugely increased the difficulties of the task but hardly changed the overall scores at all; so, since the more complicated ways of measuring produced roughly the same results but with a whole lot more guesswork, I decided to stick to simple city sizes.
In 2000 CE, most geographers classified Tokyo as the world’s biggest city, with about 26.7 million residents.* Tokyo, then, scores the full 250 points allotted to organization/urbanism, meaning that for all other calculations it will take 106,800 people (that is, 26.7 million divided by 250) to score 1 point. The biggest Western city in 2000 CE was New York, with 16.7 million people, scoring 156.37 points. The data from a hundred years ago are not as good, but all historians agree that cities were much smaller. In the West, London had about 6.6 million residents (scoring 61.80 points) in 1900 CE, while in the East Tokyo was still the greatest city, but with just 1.75 million people, earning 16.39 points. By the time we get back to 1800 CE, historians have to combine several different kinds of evidence, including records of food supply and tax payments, the physical area covered by cities, the density of housing within them, and anecdotal accounts, but most conclude that Beijing was the world’s biggest city, with perhaps 1.1 million souls (10.30 points). The biggest Western city was again London, with about 861,000 people (8.06 points).
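For readers who like to see the arithmetic laid out, here is a minimal sketch of the urbanism scoring just described, written as a short Python calculation. The populations are the rounded figures quoted above, and the names and labels are simply my own; the point values match the text to within rounding.

```python
# A minimal sketch of the urbanism scoring described above.
# Calibration: Tokyo's 26.7 million residents in 2000 CE = the full 250 points.
TOKYO_2000 = 26_700_000
PEOPLE_PER_POINT = TOKYO_2000 / 250  # 106,800 people per point

def urbanism_points(largest_city_population):
    """Convert the size of a society's largest known city into urbanism points."""
    return largest_city_population / PEOPLE_PER_POINT

# The rounded populations quoted in the text.
examples = [
    ("Tokyo, 2000 CE", 26_700_000),     # 250.00 points
    ("New York, 2000 CE", 16_700_000),  # ~156.37 points
    ("London, 1900 CE", 6_600_000),     # ~61.80 points
    ("Tokyo, 1900 CE", 1_750_000),      # ~16.39 points
    ("Beijing, 1800 CE", 1_100_000),    # ~10.30 points
    ("London, 1800 CE", 861_000),       # ~8.06 points
]

for name, population in examples:
    print(f"{name}: {urbanism_points(population):.2f} points")
```

The same divisor explains the smallest entry on the index: at 106,800 people per point, a settlement of a little over a thousand people is worth 0.01 points.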
The further we push back in time, the broader the margins of error, but for the thousand years leading up to 1700 the biggest cities were clearly Chinese (with Japanese ones often close behind). First Chang’an, then Kaifeng, and later Hangzhou came close to or passed a million residents (around 9 points) between 800 and 1200 CE. Western cities, by contrast, were never more than half that size. A few centuries earlier the situation was reversed: in the first century BCE Rome’s million residents undoubtedly made it the world’s metropolis, while Chang’an in China had probably 500,000 people.
As we move back into prehistory the evidence of course becomes fuzzier and the numbers become smaller, but the combination of systematic archaeological surveys and detailed excavation of smaller areas still gives us a reasonable sense of city sizes. As I mentioned earlier, this is very much chainsaw art. The most commonly accepted estimates might be as much as 10 percent off but are unlikely to be much wider of the mark than that; and since we are applying the same methods of estimation to Eastern and Western sites, the broad trends should be fairly reliable. Scoring 1 point on this system requires 106,800 people, so slightly more than a thousand people will score 0.01 points, the smallest number I felt was worth entering on the index. As we saw in Chapter 2, the biggest Western villages reached this level around 7500 BCE and the biggest Eastern ones around 3500 BCE. Before these dates, West and East alike score zero (you can see tables of the scores in the appendix).
It might be worth taking a moment here to talk about energy capture as well, since it poses very different problems. The simplest way to think about energy capture is in terms of consumption per person, measured in kilocalories per day. Following the same procedure as for urbanism, I start in the year 2000 CE, when the average American burned through some 228,000 kilocalories per day. That figure, certainly the highest in history, gets the West the full complement of 250 points (as I said earlier in the chapter, I am not interested in passing judgment on our capacities to capture energy, build cities, communicate information, and wage war; only in measuring them). The highest Eastern consumption per person in 2000 CE was Japan’s 104,000 kilocalories per day, earning 113.89 points.
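The energy calculation follows exactly the same pattern, sketched below in the same illustrative style. The 104,000 figure is the rounded number quoted in the prose, which is presumably why the computed score lands a shade above the 113.89 given in the text, which rests on the unrounded Japanese figure.

```python
# The same procedure for energy capture, calibrated so that the Western
# (American) figure of 228,000 kilocalories per person per day in 2000 CE
# scores the full 250 points.
WEST_2000_KCAL = 228_000
KCAL_PER_POINT = WEST_2000_KCAL / 250  # 912 kilocalories per day per point

def energy_points(kcal_per_person_per_day):
    """Convert per-person daily energy capture into energy points."""
    return kcal_per_person_per_day / KCAL_PER_POINT

print(energy_points(228_000))  # 250.0 -- West, 2000 CE
print(energy_points(104_000))  # ~114  -- East, 2000 CE (rounded consumption figure)
```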
Official statistics on energy go back only to about 1900 CE in the East and 1800 in the West, but fortunately there are ways to work around that. The human body has some basic physiological needs. It will not work properly unless it gets about 2,000 kilocalories of food per day (rather more if you are tall and/or physically active, rather less if you are not; the current American average of 3,460 kilocalories of food per day is, as supersized waistbands cruelly reveal, well in excess of what we need). If you take in much less than 2,000 kilocalories per day your body will gradually shut down functions—strength, vision, hearing, and so on—until you die. Average food consumption can never have been much below 2,000 kilocalories per person per day for extended periods, making the lowest possible score about 2 points.
In reality, though, the lowest scores have always been above 2 points, because most of the energy humans consume is in nonfood forms. We saw in Chapter 1 that Homo erectus was probably already burning wood for cooking at Zhoukoudian half a million years ago, and Neanderthals were certainly doing so 100,000 years ago, as well as wearing animal skins. Since we know so little about Neanderthal lifestyles our guesses cannot be very precise, but by tapping into nonfood energy sources Neanderthals definitely captured on average another thousand-plus kilocalories per day on top of their food, earning them about 3.25 points altogether. Fully modern humans cooked more than Neanderthals, wore more clothes, and also built houses from wood, leaves, mammoth bones, and skins—all of which, again, were parasitic on the chemical energy that plants had created out of the sun’s electromagnetic energy. Even the technologically simplest twentieth-century-CE hunter-gatherer societies captured at least 3,500 kilocalories per day in food and nonfood sources combined. Given the colder weather, their distant forebears at the end of the Ice Age must have averaged closer to 4,000 kilocalories per day, or at least 4.25 points.
I doubt that any archaeologist would quibble much over these estimates, but there is a huge gap between Ice Age hunters’ 4.25 points and the contemporary gasoline-and electricity-guzzling West’s 250. What happened in between? By pooling their knowledge, archaeologists, historians, anthropologists, and ecologists can give us a pretty good idea.
Back in 1971, the editors of the magazine Scientific American invited the geoscientist Earl Cook to contribute an essay that he called “The Flow of Energy in an Industrial Society.” He included in it a diagram, much reprinted since then, showing best guesses at per-person energy consumption among hunter-gatherers, early agriculturalists (by which he meant the farmers of southwest Asia around 5000 BCE whom we met in Chapter 2), advanced agriculturalists (those of northwest Europe around 1400 CE), industrial folk (western Europeans around 1860), and late-twentieth-century “technological” societies. He divided the scores into four categories: food (including the feed that goes into animals whose meat is eaten), home and commerce, industry and agriculture, and transport (Figure 3.1).
Cook’s guesstimates have stood up remarkably well to nearly forty years of comparison with the results gathered by historians, anthropologists, archaeologists, and economists.* They only provide a starting point, of course, but we can use the detailed evidence surviving from each period of Eastern and Western history to tell us how far the actual societies departed from these parameters. Sometimes we can draw on textual evidence, but in most periods up to the last few hundred years archaeological finds—human and animal bones; houses; agricultural tools; traces of terracing and irrigation; the remains of craftsmen’s workshops and traded goods, and the carts, ships, and roads that bore them—are even more important.
Sometimes help comes from surprising directions. The ice cores that featured so prominently in Chapters 1 and 2 also show that airborne pollution increased sevenfold in the last few centuries BCE, mostly because of Roman mining in Spain, and in the last ten years, studies of sediments from peat bogs and lakes have confirmed this picture. Europeans apparently produced nine or ten times as much copper and silver in the first century CE as in the thirteenth century CE, with all the energy demands that implies—people to dig the mines, and animals to cart away the slag; more of both to build roads and ports, to load and unload ships, and carry metals to cities; watermills to crush the ores; and above all wood, as timber to shore up mineshafts and fuel to feed forges. This independent source of evidence also lets us compare levels of industrial activity in different periods. Not until the eleventh century CE—when Chinese documents say that the relentless demands of ironworkers stripped the mountains around Kaifeng so bare of trees that coal, for the first time in history, became an important power source—did pollution in the ice return to Roman-era levels, and only with the belching smokestacks of nineteenth-century Britain did pollution push seriously beyond Roman-era levels.
Figure 3.1. The Great Chain of Energy in numbers: the geoscientist Earl Cook’s estimates of energy capture per person per day, from the time of Homo habilis to 1970s America
Once again, I want to emphasize that we are doing chainsaw art. For instance, I estimate per-person energy capture at the height of the Roman Empire, in the first century CE, at around 31,000 kilocalories per day. That is well above Cook’s estimate of 26,000 kilocalories for advanced agricultural societies, but archaeology makes it very clear that Romans ate more meat, built more cities, used more and bigger trading ships (and so on, and so on) than Europeans would do again until the eighteenth century. That said, Roman energy capture could certainly have been 5 percent higher or lower than my estimate. For reasons I address in the appendix, though, it was probably not more than 10 percent higher or lower, and definitely not 20 percent. Cook’s framework and the detailed evidence constrain guesstimates pretty tightly, and as with the urbanism scores, the fact that the same person is doing the guessing in all cases, applying the same principles, should mean that the errors are at least consistent.
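To put those margins into the index’s own units, here is a purely illustrative sketch, assuming the 912-kilocalories-per-point calibration from the energy discussion; the 5, 10, and 20 percent bands are the ones just mentioned, not new estimates.

```python
# Illustration only: how percentage errors in the Roman estimate of
# 31,000 kilocalories per person per day would move the energy score,
# using the 912-kilocalories-per-point calibration sketched above.
KCAL_PER_POINT = 228_000 / 250
ROME_KCAL = 31_000  # first-century-CE estimate from the text

for error in (0.05, 0.10, 0.20):
    low = ROME_KCAL * (1 - error) / KCAL_PER_POINT
    high = ROME_KCAL * (1 + error) / KCAL_PER_POINT
    print(f"+/-{error:.0%}: {low:.1f} to {high:.1f} points")
# +/-5%:  32.3 to 35.7 points
# +/-10%: 30.6 to 37.4 points
# +/-20%: 27.2 to 40.8 points
```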
Information technology and war-making raise their own difficulties, discussed briefly in the appendix and more fully on my website, but the same principles apply as with urbanism and energy capture, and probably the same margins of error too. For reasons I discuss in the appendix, the scores would need to be systematically wrong by 15 or even 20 percent to make a real difference to the fundamental pattern of social development, but such big margins of error seem incompatible with the historical evidence. In the end, though, the only way to know for sure is for other historians, perhaps preferring other traits and assigning scores in other ways, to propose their own numbers.