Plagues and Peoples


by William H. McNeill


  These initial limitations were transcended in the ancient Near East by the invention of plowing, not long before 3000 B.C. Plowing allowed effective weed control, year in and year out, so that fields could be cultivated indefinitely. The secret was simple. By substituting animal for human muscles, the plow allowed ancient Near Eastern farmers to cultivate twice the area they needed for cropland, so that when the extra land was fallowed (i.e., plowed during the growing season so as to destroy weeds before seeds had formed), it created a suitably empty ecological niche into which next year’s crop might safely move without being too severely infested by locally formidable weed species.

  It is a testimony to humanity’s animistic propensities that most textbooks still explain how fallowing allows the earth to restore fertility by having a rest. A moment’s thought will convince anyone that whatever processes of geological weathering and consequent chemical change occur in a single season would make no noticeable difference for the following year’s plant growth. To be sure, in the case of “dry farming,” soil kept in a bare fallow can store moisture that would otherwise be dispersed into the air by passage of water from the soil through the roots and leafy parts of plants. In regions where deficient moisture limits crop yields, a year’s fallowing can, therefore, increase fertility by letting subsoil moisture accumulate. Elsewhere, however, where moisture is not the critical limit to plant growth, the great advantage of fallowing is that it allows farmers to keep weeds at bay by interrupting their natural life cycle with the plow.

  Digging (or flooding) would of course achieve similar results; but in most environments human muscles did not alone suffice to break up enough land in a year to allow a family to subsist on the crop that could be harvested from only half of the cultivated area, while the rest was fallowed. Special soils and ecological conditions did allow some exceptions. The two most significant were (1) North China, where friable and fertile loess soil permitted human populations to subsist on crops of millet without the assistance of animal strength hitched to the plow; and (2) the Americas, where the high calorie yield per acre of maize and potatoes as compared with the Old World crops like wheat, barley, and millet, led to similar results even on soils less easily tilled than the loess of China.3

  One must admire the skill with which humankind discovered and exploited the possibilities inherent in remodeling natural landscapes in these radical ways, increasing human food supply many times over, even though it meant permanent enslavement to an unending rhythm of work. To be sure, the plow used animal strength to pull the share through the soil, and the plowman’s life was generally less toilsome than the lot that fell to the rice farmer of East Asia, who used his own muscles for most of the tasks of water and soil engineering required to create and maintain paddy fields. But toil—persistent, unending, and fundamentally at odds with humankind’s propensities as shaped by the hunting experience—was nevertheless the lot of all farming populations. Only so could man the farmer successfully distort natural ecological balances, shorten the food chain, magnify human consumption and multiply human numbers until what had been a relatively rare creature in the balance of nature became the dominant large-bodied species throughout the broad regions of the earth susceptible to agriculture.

  The struggle with weeds (including what we may call weed animals, like weevils, rats, and mice) was conducted with the help of tools, intelligence, and experiment; and though unending, it led to a series of victories for humanity. There was, however, another side to the agricultural distortion of natural ecological balances. Shortening the food chain and multiplying a restricted number of domesticated species of plants and animals also created dense concentrations of potential food for parasites. Since most successful parasites were too small to be seen, for many centuries human intelligence could not cope very effectively with their ravages.

  Prior to the dawn of modern science and the invention of the microscope, therefore, our ancestors’ victories over weeds and rival large-bodied predators, remarkable as they were, met a counterforce in the extended opportunities small-bodied parasitic predators found in the altered landscapes successful farmers created. Hyperinfestation by a single or a very few species is, indeed, a normal response to any abrupt and far-reaching alteration of natural balances in the web of life. Weed species live by exploiting the gaps disasters create in normal ecological systems. Weeds remain rare and inconspicuous amid undisturbed natural vegetation, but are able rapidly to occupy any niche created by destruction of local climax cover. Since few species are equipped to exploit such opportunities efficiently, the result is hyperinfestation of the denuded landscape by a limited number of different kinds of weeds. Yet weeds do not prevail for long in nature. Complex compensatory adjustments soon manifest themselves, and in the absence of fresh “external” disturbances of a far-reaching sort, a more or less stable and variegated flora will re-establish itself, usually looking much like what had been destroyed at the start.

  But as long as human beings continued to expend effort to alter natural landscapes and fit them for agriculture, they prevented re-establishment of natural climax ecosystems, and thereby kept open the door for hyperinfestation.4 As we have seen, when dealing with relatively large-bodied organisms that humans could see and manipulate, observation and experiment soon allowed early farmers to keep weeds (as well as animal pests like mice) under control. But human intelligence remained for thousands of years only fumblingly effective in dealing with disease-causing micro-organisms. As a result, the ravages of disease among crops, herds, and peoples played a significant part in human affairs throughout historic time. In fact, the effort to understand what happened, in a way that humans could not do before modern medical discoveries made clear some of the important patterns of disease propagation, is the raison d’être of this book.

  So far so good. But when one seeks to descend from this level of generalization and ask what sorts of disease arose or extended their sway in what parts of the world and at what times and with what consequences for human life and culture, uncertainty blocks any adequate answer. Even if one excludes diseases affecting crops and domestic animals, exact information is lacking wherewith to create a history of human infections.

  It is easy to see that settling down to prolonged or permanent occupancy of a single village site involved new risks of parasitic invasion. Increased contact with human feces as they accumulated in proximity to living quarters, for instance, could allow a wide variety of intestinal parasites to move safely from host to host. By contrast, a hunting band, perpetually on the move with only a brief sojourn in any location, would risk little from this kind of infectious cycle. We should expect that human populations living in sedentary communities were therefore far more thickly infested with worms and similar parasites than their hunting predecessors or contemporaries in the same climatic zones. Other parasitic organisms must have found it easy to move from host to host via contaminated water supplies. This, too, was far more likely to happen when human communities remained in one location permanently and had to rely on the same water sources for all household needs year in and year out.

  All the same, the small village communities characteristic of earliest agriculture may not always have fallen prey to particularly heavy parasitic invasion. Near Eastern slash-and-burn cultivators moved from place to place several times in a lifetime; Chinese millet farmers and Amerindian cultivators of maize, beans, and potatoes were scattered rather thinly and lived in small hamlets during pre-civilized times. Infections and infestations of various sorts presumably established themselves in these communities, and, although the parasite population must have differed from place to place, within each village or hamlet nearly everyone probably acquired about the same assortment of parasites in youth. Such, at any rate, is the case today among primitive cultivators.5 Yet such infections cannot have been a very heavy biological burden, since they failed to inhibit human population growth of unexampled magnitude. Within only a few hundred years, in all the historically significant regions where valuable food crops were successfully domesticated, human population density became ten to twenty times greater than hunting densities had ever been in the same areas.6

  Insofar as early agriculture depended on irrigation, as was the case in Mesopotamia and Egypt as well as in the Indus River valley and in the Peruvian coastal region, more elaborate social controls than those ordinarily needed in a simple, more or less isolated village were required. Planning of canals and dikes, cooperation in their maintenance, and above all, allocation of irrigation water among competing users, all invited or required some sort of authoritative leadership. Cities and civilization resulted, characterized by far wider co-ordination of effort and specialization of skills than anything village life permitted.

  But irrigation farming, especially in relatively warm climates, came near to recreating the favorable conditions for the transmission of disease parasites that prevailed in tropical rain forests whence humanity’s remote ancestors had presumably emerged. Abundant moisture—even more abundant than that commonly available in rain forest environments—facilitated transfer of parasites from host to host. Where suitably warm and shallow water, in which potential human hosts constantly waded about, provided a satisfactory transfer medium, parasites did not need resistant cysts, or other life forms that could withstand dry conditions for lengthy periods of time.

  Ancient forms of parasitism may have differed slightly from those of today, but organic evolution moves very slowly when measured by human and historical standards. A mere five thousand years ago, therefore, parasitic forms of life exploiting the special conditions created by irrigation agriculture were probably almost identical with those that still make life difficult for modern irrigation and rice paddy farmers.

  A good deal is known about these parasites. The most important of them is the blood fluke that causes schistosomiasis, a nasty, debilitating disease, affecting perhaps as many as 100 million people today. The fluke’s life cycle involves mollusks and men as alternate hosts; and the organism moves from one to the other through water, in tiny free-swimming forms.7 The infection is sometimes fatal to snails (the commonest mollusk host), but among chronically exposed human populations it peaks in childhood and persists in less acute form thereafter. As in the case of malaria, the parasitic life cycle is remarkably elaborate. The fluke has two distinct free-swimming forms that seek their respective hosts, mollusk or man as the case may be, only to undertake extraordinary migrations within the host’s body after initial penetration. This complexity, as well as the chronic character of the disease it produces in its human hosts, suggests that a lengthy evolution lies behind the modern blood fluke’s behavior. The parasitic pattern, like that of malaria, may have originated in African or Asian rain forests; but the modern distribution of the disease, being very broad, does not offer any firm basis for deciding when and where it may have spread to the regions of the world where it now flourishes.8 Ancient Egyptian irrigators suffered from the infection as early as 1200 B.C., and probably long before then.9 Whether ancient Sumer and Babylonia were similarly infected cannot be said for sure, though contacts between the two river valleys would make such a condition probable.10 In distant China, too, a recently discovered and unusually well-preserved corpse buried in the second century B.C. carried a complement of blood flukes and worms, even though the actual cause of death was a heart attack.11 In view of modern experience of how swiftly the infection builds up in irrigated landscapes where human cultivators spend long hours wading in the shallows, it seems probable that ancient irrigation and schistosomiasis were closely linked throughout the Old World from very early times.12

  Whatever the ancient distribution of schistosomiasis and similar infections may have been, one can be sure that wherever they became widespread they tended to create a listless and debilitated peasantry, handicapped both for sustained work in the fields and digging irrigation channels, and for the no less muscularly demanding task of resisting military attack or throwing off alien political domination and economic exploitation. Lassitude and chronic malaise, in other words, of the kind induced by blood fluke and similar parasitic infections, conduce to successful invasion by the only kind of large-bodied predators human beings have to fear: their own kind, armed and organized for war and conquest.13 Although historians are unaccustomed to thinking of state building, tax collection, and booty raids in such a context, this sort of mutual support between micro- and macroparasitism is, assuredly, a normal ecological phenomenon.

  How important parasitic infection of agricultural field workers may have been in facilitating the erection of the social hierarchies of early river valley civilizations cannot be estimated very plausibly. But it seems reasonable to suspect that the despotic governments characteristic of societies dependent on irrigation agriculture may have owed something to the debilitating diseases that afflicted field workers who kept their feet wet much of the time, as well as to the technical requirements of water management and control which have hitherto been used to explain the phenomenon.14 The plagues of Egypt, in short, may have been connected with the power of Pharaoh in ways the ancient Hebrews never thought of and modern historians have never considered.

  As long as their invisibility prevented parasites from being recognized, human intelligence was quite literally blindfolded in trying to cope with the manifestation of infectious disease. Yet men did sometimes work out dietary and sanitary codes that may have reduced the risk of infection. The most familiar case is the Jewish and Moslem prohibition of pork. This appears inexplicable until one realizes that hogs were scavengers in Near Eastern villages, quite capable of eating human feces and other “unclean” material. If eaten without the most thorough cooking, their flesh was easily capable of transferring a number of parasites to human beings, as modern encounters with trichinosis attest. Nonetheless, the ancient prohibition of pork presumably rested rather on an intuitive horror of the hogs’ behavior than on any sort of trial and error; and any benefit to human health that may have resulted from observing the taboo cannot be detected from available records.

  Similar sentiments lay behind the expulsion of lepers from ordinary society.15 This was another ancient Jewish rule that must have reduced exposure to disease transmitted by skin-to-skin contact. Washing, whether in water or with sand, plays a prominent part in Moslem as well as Hindu ritual; and that, too, may sometimes have had the effect of checking the spread of infections.

  On the other hand, ceremonial bathing shared by thousands of pilgrims gathered to celebrate some holy festival offers human parasites a specially favorable chance to find new hosts. In Yemen, for example, ablution pools attached to a mosque were found to harbor snails infected with schistosomiasis; and in India the propagation of cholera was (and is) largely a function of religious pilgrimage.16 Traditional rules even when sanctified by religion and immemorial practice were not, therefore, always effective in checking the propagation of diseases; and practices that actually conduced to their spread could and did become just as holy as other rules that had positive health value.17

  It was, of course, not merely worms and other multicelled parasites that found conditions created by agriculture propitious for their spread among humankind. Protozoan, bacterial, and viral infections also had an expanded field for their propagation as flocks, crops, and human populations all multiplied. Effects were characteristically indirect, unforeseen and unforeseeable; and save in rare instances it is impossible to reconstruct all the circumstances that may have allowed a new disease pattern to assert itself.

  There are, however, some exceptions. In western Africa, for instance, when agriculture began to spread into rain forest environments, slash-and-burn methods of cultivation clearly put new strains on older ecological balances. An unexpected result was to give malaria a new, epidemic intensity. What seems to have happened is this: clearings multiplied breeding places for a kind of mosquito, Anopheles gambiae, that feeds by preference on human blood. Indeed, Anopheles gambiae can properly be described as a weed species that proliferates enormously in the gashes human agriculture creates in the African rain forest. With the advance of agriculture, it supplants other mosquito species accustomed to feeding on creatures other than man. As a result, the man-mosquito malarial cycle attains an unexampled intensity, affecting practically every human being who ventures into these forest clearings.18

  African cultivators were nevertheless able to persist in their effort to tame the rain forest for agriculture; not, however, without genetic adaptation whereby the frequency of a gene that produces sickle-shaped red corpuscles in heterozygous individuals increased markedly. Such cells are less hospitable to the malarial plasmodium than normal red blood cells. Consequently, the debilitating effects of malarial infection are reduced in individuals who have this kind of red corpuscle.

  But the cost of such protection was very high. Individuals who inherit the sickling gene from both parents die young. Resulting heavy child mortality is further increased by the fact that those born entirely without the sickling gene are liable to lethal malarial infection. Indeed, in the most intensively malarial regions of West Africa, half the infants born among populations bearing the sickle-cell trait are biologically vulnerable. Since the agricultural penetration of the rain forest is still in process, the contemporary distributions of malaria, Anopheles gambiae, and the sickling trait permit a plausible reconstruction of the unusually drastic consequences the alteration of older ecological patterns entailed, and continues to entail, in that environment.19
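  The arithmetic behind that “half the infants” figure can be made explicit with a minimal sketch, assuming both parents are carriers, each with one normal allele (here labeled A) and one sickling allele (S); the allele labels and variable names below are illustrative conveniences, not drawn from the text.

    # Illustrative sketch: Mendelian arithmetic behind the "half the infants
    # are biologically vulnerable" figure, assuming two heterozygous (AS) parents.
    from itertools import product
    from collections import Counter

    parent1 = ("A", "S")  # A = normal allele, S = sickling allele
    parent2 = ("A", "S")

    # Each child inherits one allele from each parent, all combinations equally likely.
    offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(offspring.values())

    for genotype, count in sorted(offspring.items()):
        print(f"{genotype}: {count / total:.0%}")

    # AA (25%): no sickling gene, fully exposed to lethal malaria
    # AS (50%): carriers, whose red corpuscles resist the plasmodium
    # SS (25%): sickle-cell disease, usually fatal in childhood
    # Roughly half of all births (AA plus SS) are thus "biologically vulnerable."

  Of the four equally likely genotypes, only the heterozygous half enjoys protection; the remaining half either lacks any defense against malaria or dies young of the sickling disease itself, which is the sense in which half the infants are vulnerable.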

  In central and eastern Africa, events in the nineteenth and twentieth centuries connected with ill-conceived efforts by European colonial administrators to alter traditional patterns of herding and cultivation also illustrate the unexpected side effects that sometimes arise from agricultural expansions into new regions. These efforts, in fact, precipitated veritable epidemics of sleeping sickness in parts of Uganda, the Belgian Congo, Tanganyika, Rhodesia, and Nigeria; and the end result, as colonial regimes came to an end, was a land more thickly infested with death-dealing tsetse flies than before government policy set out to utilize what looked like good agricultural land more effectively.20

 
