The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor


by David S. Landes


  The crux of disagreement in this instance has been what has been presented by some as an unrevolutionary (“evolutionary”) revolution. However impressive the growth of certain branches of production, the overall performance of the British economy (or British industry) during the century 1760-1860 that emerges from some recent numerical exercises has appeared modest: a few percent per year for industry; even less for aggregate product. And if one deflates these data for growth of population (so, income or product per head), they reduce to 1 or 2 percent a year.17 Given the margin of error intrinsic to this kind of statistical manipulation, that could be something. It could also be nothing.
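
  The deflation itself is simple arithmetic: if aggregate product grows at a rate g and population at a rate n, then for small rates product per head grows at approximately

\[ g_{\text{per head}} \approx g - n , \]

so an aggregate rate of two or three percent a year, set against British population growth of roughly one percent a year over that century, leaves the 1 or 2 percent per head of the recent estimates.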

  But why believe the estimates? Because they are more recent? Because the authors assure us of their reliability? The methods employed are less than convincing. One starts with the aggregate construct (figment) and then shoehorns the component branches to fit. One recent exercise found that after adding up British productivity gains in a few major branches—cotton, iron, transport, agriculture—no room was left for further gains in the other branches: other textiles, pottery, paper, hardware, machine building, clocks and watches. What to do? Simple. The author decided that most British industry “experienced low levels of labor productivity and slow productivity growth—it is possible that there was virtually no advance during 1780-1860.”18 This is history cart before horse, results before data, imagination before experience. It is also wrong.

  What is more, these estimates, based as they are on assumptions of homogeneity over time—iron is iron, cotton is cotton—inevitably underestimate the gain implicit in quality improvements and new products. How can one measure the significance of a new kind of steel (crucible steel) that makes possible superior timekeepers and better files for finishing and adjusting machine parts if one is simply counting tons of steel? How appreciate the production of newspapers that sell for a penny instead of a shilling thanks to rotary power presses? How measure the value of iron ships that last longer than wooden vessels and hold considerably more cargo? How count the output of light if one calculates in terms of lamps rather than the light they give off? A recent attempt to quantify the downward bias of the aggregate statistics on the basis of the price of lumens of light suggests that in that instance the difference between real and estimated gains over two hundred years is of the order of 1,000 to 1.19
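
  What a thousandfold understatement over two centuries implies at an annual rate is easy to work out:

\[ 1000^{1/200} \approx 1.035 , \]

that is, a gap of roughly 3.5 percent a year between the true rate of gain and the measured one, compounding year after year.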

  In the meantime, the new, quantitative economic historians (“cliometricians”) have triumphantly announced the demolition of doctrine received. One economic historian has called in every direction for abandonment of the misnomer “industrial revolution,” while others have begun to write histories of the period without using the dread name—a considerable inconvenience for both authors and students.20 Some, working on the border between economic and other kinds of history or simply outside the field, have leaped to the conclusion that everyone has misread the British story. Britain, they would have us believe, never was an industrial nation (whatever that means); the most important economic developments of the eighteenth century took place in agriculture and finance, while industry’s role, much exaggerated, was in fact subordinate.21 And some have sought to argue that Britain changed little during these supposedly revolutionary years (there went a century of historiography down the drain), while others, acknowledging that growth was in fact more rapid, nevertheless stressed continuity over change. They wrote of “trend growth,” or “trend acceleration,” and asserted that there was no “kink” in the factitious line that traced the increase in national product or income. And when some scholars refused to adopt this new dispensation, one historian dismissed them as “a dead horse that is not altogether willing to lie down.”22 Who says the ivory tower of scholarship is a quiet place?

  The Advantage of Going Round and Round

  Rotary motion’s great advantage over reciprocating motion lies in its energetic efficiency: it does not require the moving part to change direction with each stroke; it continues round and round. (It has of course its own constraints, arising largely from centrifugal force, which is subject to the same laws of motion.) Everything is a function of mass and velocity: work slowly enough with light equipment, and reciprocating motion will do the job, though at a cost. Step up to big pieces and higher speeds, and reciprocating motion becomes unworkable.
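
  The point about mass and velocity can be put in a rough formula. A piston of mass m, driven by a crank of radius r turning at angular speed ω, must be stopped and reversed twice in every revolution; the peak inertial force is on the order of

\[ F \approx m\,\omega^{2} r , \]

and because it alternates in direction it shakes the frame and pounds the bearings ever harder as machines grow bigger and faster. A balanced rotor carries centripetal loads of the same form, but they are steady and largely cancel one another, which is why turbines can be pushed to sizes and speeds that no reciprocating engine will tolerate.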

  Nothing illustrates the principle better than the shift from reciprocating to rotary steam engines in steamships. Both merchant marines and navies were pressing designers and builders for ever larger and faster vessels. For Britain, the world’s leading naval power, the definitive decision to go over to the new technology came with the building of Dreadnought, the first of the big-gun battleships. This was in 1905. The Royal Navy wanted a capital ship that could make 21 knots, a speed impossible with reciprocating engines. Although earlier vessels had been designed for 18 or 19 knots, they could do this only for short periods; eight hours at even 14 knots, and the engine bearings would start heating up and breaking down. A hard run could mean ten days in port to readjust—not a recipe for combat readiness.

  Some of the naval officers were afraid to take chances with the new technology. It was one thing to use turbines on destroyers, but on the Navy’s largest, most powerful ship!? What if the innovators were wrong? Philip Watts, Director of Naval Construction, settled the issue by pointing to the cost of old ways. Fit reciprocating engines, he said, and the Dreadnought would be out of date in five years.

  The result more than justified his hopes. The ship’s captain, Reginald Bacon, who had previously commanded the Irresistible (the Royal Navy likes hyperbole), marveled at the difference:

  [The turbines] were noiseless. In fact, I have frequently visited the engine room of the Dreadnought when at sea steaming 17 knots and have been unable to tell whether the engines were revolving or not. During a full speed run, the difference between the engine room of the Dreadnought and that of the Irresistible was extraordinary. In the Dreadnought, there was no noise, no steam was visible, no water or oil splashing about, the officers and men were clean; in fact, the ship to all appearances might have been in harbor and the turbines stopped. In the Irresistible, the noise was deafening. It was impossible to make a remark plainly audible and telephones were useless. The deck plates were greasy with oil and water so that it was difficult to walk without slipping. Some gland [valve] was certain to be blowing a little which made the atmosphere murky with steam. One or more hoses would be playing on a bearing which threatened trouble. Men constantly working around the engine would be feeling the bearings to see if they were running cool or showed signs of heating; and the officers would be seen with their coats buttoned up to their throats and perhaps in oilskins, black in the face, and with their clothes wet with oil and water.23

  The next step would be liquid fuel, which burned hotter, created higher pressures, and drove shafts and propellers faster. The older coal bins took up too much space, and the stokers ate huge amounts of bulky food—human engines also need fuel. As coal stocks fell, more men had to be called in to shovel from more distant bunkers to those closer to the engines: hundreds of men never saw the fires they fed. In contrast, refueling with oil meant simply attaching hoses and a few hours of pumping, often at sea; with coal, the ship had to put into port for days.

  Incidentally, much of this improvement would not be captured by the conventional measures of output and productivity. These would sum the cost of the new equipment, but not the change in the quality of work.

  14

  Why Europe? Why Then?

  If we were to prophesy that in the year 1930 a population of fifty million, better fed, clad, and lodged than the English of our time, will cover these islands, that Sussex and Huntingdonshire will be wealthier than the wealthiest parts of the West Riding of Yorkshire now are…that machines constructed on principles yet undiscovered will be in every house…many people would think us insane.

  —MACAULAY, “Southey’s Colloquies on Society” (1830)1

  Why Industrial Revolution there and then? The question is really twofold. First, why and how did any country break through the crust of habit and conventional knowledge to this new mode of production? After all, history shows other examples of mechanization and use of inanimate power without producing an industrial revolution. One thinks of Sung China (hemp spinning, ironmaking), medieval Europe (water- and windmill technologies), of early modern Italy (silk throwing, shipbuilding), of the Holland of the “Golden Age.” Why now, finally, in the eighteenth century?

  Second, why did Britain do it and not some other nation?

  The two questions are one. The answer to each needs the other. That is the way of history.

  Turning to the first, I would stress buildup—the accumulation of knowledge and knowhow; and breakthrough—reaching and passing thresholds. We have already noted the interruption of Islamic and Chinese intellectual and technological advance, not only the cessation of improvement but the institutionalization of the stoppage. In Europe, just the other way: we have continuing accumulation. To be sure, in Europe as elsewhere, science and technology had their ups and downs, areas of strength and weakness, centers shifting with the accidents of politics and personal genius. But if I had to single out the critical, distinctively European sources of success, I would emphasize three considerations:

  (1) the growing autonomy of intellectual inquiry;

  (2) the development of unity in disunity in the form of a common, implicitly adversarial method, that is, the creation of a language of proof recognized, used, and understood across national and cultural boundaries; and

  (3) the invention of invention, that is, the routinization of research and its diffusion.

  Autonomy: The fight for intellectual autonomy went back to medieval conflicts over the validity and authority of tradition. Europe’s dominant view was that of the Roman Church—a conception of nature defined by holy scripture, as reconciled with, rather than modified by, the wisdom of the ancients. Much of this found definition in Scholasticism, a system of philosophy (including natural philosophy) that fostered a sense of omnicompetence and authority.

  Into this closed world, new ideas necessarily came as an insolence and a potential subversion—as they did in Islam. In Europe, however, acceptance was eased by practical usefulness and protected by rulers who sought to gain by novelty an advantage over rivals. It was not an accident, then, that Europe came to cultivate a vogue for the new and a sense of progress—a belief that, contrary to the nostalgia of antiquity for an earlier grace (Paradise Lost), the Golden Age (utopia) actually lay ahead; and that people were now better off, smarter, more capable than before. As Fra Giordano put it in a sermon in Pisa in 1306 (we should all be remembered as long): “But not all [the arts] have been found; we shall never see an end of finding them…and new ones are being found all the time.”2

  Of course, older attitudes hung on. (A law of historical motion holds that all innovations of thought and practice elicit an opposite if not always equal reaction.) In Europe, however, the reach of the Church was limited by the competing pretensions of secular authorities (Caesar vs. God) and by smoldering, gathering fires of religious dissent from below. These heresies may not have been enlightened in matters intellectual and scientific, but they undermined the uniqueness of dogma and, so doing, implicitly promoted novelty.

  Most shattering of authority was the widening of personal experience. The ancients, for example, thought no one could live in the tropics: too hot. Portuguese navigators soon showed the error of such preconceptions. Forget the ancients, they boasted; “we found the contrary.” Garcia d’Orta, son of converso parents and himself a loyal but of course secret Jew, learned medicine and natural philosophy in Salamanca and Lisbon, then sailed to Goa in 1534, where he served as physician to the Portuguese viceroys. In Europe, intimidated by his teachers, he never dared to question the authority of the ancient Greeks and Romans. Now, in the nonacademic environment of Portuguese India, he felt free to open his eyes. “For me,” he wrote, “the testimony of an eye-witness is worth more than that of all the physicians and all the fathers of medicine who wrote on false information” and further, “you can get more knowledge now from the Portuguese in one day than was known to the Romans after a hundred years.”3

  Method: Seeing alone was not enough. One must understand and give nonmagical explanations for natural phenomena. No credence could be given to things unseen. No room here for unicorns, basilisks, and salamanders. Where Aristotle thought to explain phenomena by the “essential” nature of things (heavenly bodies travel in circles; terrestrial bodies move up or down), the new philosophy proposed the converse: nature was not in things; things were (and moved) in nature. Early on, moreover, these searchers came to see mathematics as immensely valuable for specifying observations and formulating results. Thus Roger Bacon at Oxford in the thirteenth century: “All categories depend on a knowledge of quantity, concerning which mathematics treats, and therefore the whole power of logic depends on mathematics.”4 This marriage of observation and precise description, in turn, made possible replication and verification. Nothing so effectively undermined authority. It mattered little who said what, but what was said; not perception but reality. Do I see what you say you saw?

  Such an approach opened the way to purposeful experiment. Instead of waiting to see something happen, make it happen. This required an intellectual leap, and some have argued that it was the renewal and dissemination of magical beliefs (even Isaac Newton believed in the possibility of alchemy and the transmutation of matter) that led the scientific community to see nature as something to be acted upon as well as observed.5 “In striking contrast to the natural philosopher,” writes one historian, “the magician manipulated nature.”6

  Well, at least he tried. I am skeptical, however, of this effort to conflate personal confusions with larger causation. The leap from observation to experiment, from passive to active, was hard enough, and the temptations of magic, this “world of profit and delight, of power, of honor, of omnipotence,” were diversion and obstacle. If anything, the world of magic was a parody of reality, a shrinking residual of ignorance, a kind of intellectual antimatter. Magic’s occasional successes were serendipitous by-products of hocus-pocus. Its practitioners were easily seen as crazies, if not as agents of the devil, in part because of their frequently eccentric manner and occasionally criminal behavior.* Such practices went back to the dawn of time; they are still with us and always will be, because, like people who play the lottery, we want to believe. That they revived and flourished in the rush of new knowledge, of secrets uncovered, of mysteries revealed, should come as no surprise. Magic was more response than source, and insofar as it played a role, it was less as stimulant than as allergenic.7

  Note that for some, this is cause for regret, as at a self-imposed impoverishment: “…the new quantitative and mechanistic approach eventually established a metaphysics which left no room for essences, animism, hope, or purpose in nature, thus making magic something ‘unreal,’ or supernatural in the modern sense.”8 Not to feel bad: the road to truth and progress passed there. As David Gans, an early seventeenth-century popularizer of natural science, put it, one knows that magic and divining are not science because their practitioners do not argue with one another. Without controversy, no serious pursuit of knowledge and truth.9

  This powerful combination of perception with measurement, verification, and mathematized deduction—this new method—was the key to knowing. Its practical successes were the assurance that it would be protected and encouraged whatever the consequences. Nothing like it developed anywhere else.10

  How to experiment was another matter. One first had to invent research strategies and instruments of observation and measurement, and almost four centuries would elapse before the method bore fruit in the spectacular advances of the seventeenth century. Not that knowledge stood still. The new approach found early application in astronomy and navigation, mechanics and warfare, optics and surveying—all of them practical matters. But it was not until the late sixteenth century, with Galileo Galilei, that experiment became a system. This entailed not only repeated and repeatable observation, but deliberate simplification as a window on the complex. Want to find the relations between time, speed, and distance covered of falling objects? Slow them by rolling them down an inclined plane.
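
  The simplification works because the law being sought is a square law. In free fall the distance covered grows as the square of the elapsed time,

\[ d = \tfrac{1}{2}\, a\, t^{2} , \]

but at the full acceleration of gravity the motion is over before the clocks of the day could catch it. On a gentle incline the same square law holds while the effective acceleration shrinks (to g sin θ for a smooth slope inclined at angle θ, and less still for a rolling ball), so time, speed, and distance can actually be measured.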

  Scientists had to see better and could do so once the telescope and microscope were invented (c. 1600), opening new worlds comparable for wonder and power to the earlier geographical discoveries. They needed to measure more precisely, because the smallest shift of a pointer could make all the difference. So Pedro Nunez, professor of astronomy and mathematics in the University of Coimbra (Portugal), invented in the early sixteenth century the nonius (from his latinized name), to give navigational and astronomical readings to a fraction of a degree. This was later improved by the vernier scale (Pierre Vernier, 1580-1637), and this in turn was followed by the invention of the micrometer (Gascoigne, 1639, but long ignored; and Adrien Auzout, 1666), which used fine wires for reading and a screw (rather than a slide) to achieve close control. The result was measures to the tenth and less of a millimeter that substantially enhanced astronomical accuracy.11 (Note that just learning to make precision screws was a major achievement; also that the usefulness of these instruments depended partly on eyeglasses and magnifying lenses.)
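
  The principle behind such scales is easily stated: a vernier carries, say, ten divisions in the space of nine divisions of the fixed scale, so the eye need only judge which pair of marks lines up, and the reading gains a factor of ten in precision; a rule graduated in millimeters becomes readable to a tenth of a millimeter without any finer engraving.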

  The same pursuit of precision marked the development of time measurement. Astronomers and physicists needed to time events to the minute and second, and Christian Huygens gave that to them with the invention of the pendulum clock in 1657 and the balance spring in 1675. Scientists also needed to calculate better and faster, and here John Napier’s logarithms were as important in their day as the invention of the abacus in an earlier time, or of calculators and computers later.12 And they needed more powerful tools of mathematical analysis, which they got from Rene Descartes’s analytic geometry and, even more, from the new calculus of Isaac Newton and Gottfried Wilhelm von Leibniz. These new maths contributed immensely to experiment and analysis.
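
  What logarithms did for calculation can be put in a single identity:

\[ \log(xy) = \log x + \log y , \]

so that a long multiplication or division collapses into an addition or subtraction and a few look-ups in the tables; it was this saving of labor that made Napier’s invention so valuable to astronomers and navigators.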

 
