Creating the Twentieth Century

by Vaclav Smil


  Consequently, no comparisons of economic and social indicators of the late 1860s with those of the immediate pre-WWI years should be taken as representative measures of advances attributable solely to the great technical saltation of the age. Inevitably, most of the economic gains of the 1870s had their investment and infrastructural roots in years preceding the beginning of this period, while most of the innovations introduced after 1900 made their full socioeconomic mark only after WWI. That is why in this section I proceed along both quantitative and qualitative lines.

  This book’s primary concern is to detail the extent and the lasting consequences of the unprecedented number of fundamental pre-WWI technical advances. There is no adequate means of quantifying that process, but the history of patenting in the country that came to dominate this modernization process offers a valuable proxy account of these accomplishments. The first numbered U.S. patent (for traction wheels) was issued in 1836, and the subsequent steady growth brought the total of successful applications granted to U.S. residents to almost 31,000 by the end of 1860. After a three-year dip in the early 1860s came a steep ascent, from 4,638 patents in 1864 to 12,301 patents in 1867, and then the annual grants reached a new plateau of high sustained inventiveness at 12,000–13,000 cases a year, a fact that provides additional support for my timing of the beginning of the Age of Synergy (USPTO 2002; figure 6.10).

  FIGURE 6.10. U.S. patents issued between 1836 and 1914. Notice the unprecedented rise in the annual rate of patenting that took place during the mid-1860s and the second wave of acceleration that began in 1881. Plotted from data in USPTO (2002).

  Rapid growth of patenting recommenced by 1881, and soon afterward it formed a new, somewhat more uneven, plateau (mostly 19,000–23,000 grants per year) that lasted until the end of the 1890s. U.S. Patent 500,000, for a combined flush-tank and manhole, was issued in June 1893. The annual rate of 30,000 grants was surpassed for the first time in 1903; U.S. Patent 1,000,000 was issued in August 1911 (to Francis Holton for his vehicle tire), and before the beginning of WWI the total surpassed 1,100,000. Annual grants for the period 1900–1914 averaged more than 32,000, or more than three times the mean of the late 1860s, a convincing sign of intensifying inventive activity.

  I do not wish to undercut the conclusion I have just drawn, but I must reiterate (see chapter 1) that this simple quantitative focus may be somewhat misleading. The era’s flood of unprecedented inventiveness also included a large number of not just trivial but outright ridiculous patents that should never have been granted (Brown and Jeffcott 1932). Among the choicest examples of the latter category are mad combinations: of match-safe, pincushion, and trap (U.S. Patent 439,467 in 1890) and (yes, this is an exact citation) of grocer’s package, grater, slicer, and a mouse and fly trap (U.S. Patent 586,025 in 1897). Chewing-gum locket, tapeworm trap, device for producing dimples, and electrical bedbug exterminator (“electricity will be sent through the bodies of the bugs, which will either kill them or startle them, so that they will leave the bedstead,” according to U.S. Patent 616,049 of February 7, 1898) are among my other favorites.

  But there is also no doubt that many technical advances of the era had fairly prompt and profound economic effects that were reflected in some impressive growth and productivity gains. Aggregate national accounts are the favorite indicators of economic growth, but such figures have obvious limitations of coverage and comparability. Consequently, they should be seen merely as indicators of basic trends, not as accurate reflections of all economic activity. Standard series recalculated in constant monies show that between 1870 and 1913 the U.S. gross domestic product grew roughly 5.3 times and that the multiples were 3.3 for Germany, 2.2 for the United Kingdom, and 2.0 for France (Maddison 1995). This growth translated, respectively, to 3.9%, 2.8%, 1.9%, and 1.6% a year. Population growth—already fairly slow in Europe but rapid in the United States (mainly due to large immigration)—made per capita rates much more similar, with annual means of 1.8% for the United States, 1.6% for Germany, 1.5% for France, and just 1.0% for the United Kingdom.
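
  The annual rates follow from compounding the growth multiples over the 43 years between 1870 and 1913. A minimal sketch of that arithmetic (mine, not Maddison’s underlying series, so small rounding differences against the quoted rates are possible):

      # Compound annual growth rate implied by a growth multiple over a period.
      def annual_rate(multiple, years=43):      # 1870-1913 spans 43 years
          return (multiple ** (1 / years) - 1) * 100

      for country, m in [("USA", 5.3), ("Germany", 3.3),
                         ("UK", 2.2), ("France", 2.0)]:
          print(f"{country}: {annual_rate(m):.1f}% per year")
      # -> ~4.0, 2.8, 1.9, and 1.6% a year (the text's 3.9% for the USA
      #    reflects rounding of the 5.3 multiple)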

  Faster growth recorded by countries that started to industrialize aggressively only after 1850 is also shown by comparing the gains of gross domestic product per hour worked. Between 1870 and 1913, this indicator averaged 1.2% a year in the United Kingdom, while the German rate was 1.8% and the Japanese and U.S. averages reached 1.9% (Broadberry 1992). Rising productivity was accompanied by shorter work hours: their annual total in Western economies began to decline after 1860, from nearly 3,000 hours to about 2,500 by 1913, and by the 1970s they were below 1,800 in all affluent countries (Maddison 1991). This trend was strongly influenced by declining employment in farming, but just before WWI agriculture still contributed about a third of gross domestic product in most Western countries and employed more people than did the services sector. And the term “service” called to mind the still very common household help rather than the array of activities that now account for the bulk of the Western economic product.

  Average American wages rose rather slowly during the decades between the 1860s and WWI (BLS 1934), but their relatively modest progress must be seen against the falling cost of living. After a period of pronounced inflation, the American index of general price levels began to fall in 1867; three decades of deflation then lowered it by about 45% by 1896, and renewed inflation drove it up by 40% by 1913 (NBER 2003). French industrial wages nearly doubled between 1865 and 1913, and German wages grew 2.6-fold, but British earnings went up by only 40% (Mitchell 1998). The rising economic tides of the two pre-WWI generations lifted all boats—but even in the most affluent Western countries, average disposable incomes were still very low when measured by the standards of the late 20th century. Real incomes averaged no more than 10–15% of today’s levels, and despite the falling cost of food, typical expenditures on feeding a family still claimed a large share (around half) of average disposable urban income.

  But none of these indicators conveys adequately the epochal nature of the post-1860s developments. This is done best by focusing on the two trends that became both the key drivers and the most characteristic markers of the Age of Synergy. Most fundamentally, for the first time in human history the age was marked by the emergence of high-energy societies whose functioning, be it on mundane or sophisticated levels, became increasingly dependent on incessant supplies of fossil fuels and on rising flows of electricity. Even more important, this process entailed a number of key qualitative shifts, and more than a century later it has still not run its full course even in the countries that pioneered it. The same is true of the other fundamental trend that distinguishes the new era: mechanized mass production, which has resulted in mass consumption and in growing global interdependence.

  Mass production of industrial goods and energy-intensive agricultures that yield surpluses of food have brought unprecedented improvements to the overall quality of life, whether they are judged by such crass measures as personal possessions or by such basic existential indicators as morbidity and mortality. An increasing portion of humanity has been able to live in societies where a large share, even most, of the effort and time is allocated to providing a wealth of services and filling leisure hours rather than to producing food and goods. Both the rising energy needs and the mass production provided strong stimuli for the emergence of intensifying global interdependence, and this resulted in both positive feedbacks and negative socioeconomic and environmental effects.

  High-Energy Societies

  Thermodynamic imperatives and historical evidence are clear: rising levels of energy consumption do not guarantee better economic performance and higher quality of life—gross mismanagement of Russia’s enormous energy wealth is perhaps the most obvious illustration of the fact—but they are the most fundamental precondition of such achievements. As long as human-controlled energy flows could secure only the basic material necessities of life, there was no possibility of reliable food surpluses, larger-scale industrial production, mass consumption, prolonged education opportunities, high levels of personal mobility, and increased time for leisure. A high correlation between energy use and economic performance is the norm as individual countries go through successive stages of development, and the link applies to broader social achievements as well.

  New sources of primary energy and new prime movers were essential to initiate this great transition. The shift began inauspiciously in the United Kingdom during the 17th century, and accelerated during the 18th century with the rising use of coal, invention of metallurgical coke for iron smelting, and James Watt’s radical improvement of Newcomen’s inefficient steam engine (Smil 1994). Even so, by 1860 coal production remained limited as the United Kingdom was the only major economy that was predominantly energized by that fossil fuel. Traditional biomass fuels continued to supply about 80% of the world’s primary energy, and by 1865 the United States still derived more than 80% of its energy needs from wood and charcoal (Schurr and Netschert 1960).

  Typical combustion efficiencies of household fireplaces and simple stoves were, respectively, below 5% and 15%. Steam engines, which were diffusing rapidly both in stationary industrial applications and in land and sea transportation, usually converted less than 5% of coal’s chemical energy into reciprocating motion, and small water turbines were the only new mechanical prime movers that were fairly efficient. Although English per capita consumption of coal approached 3 t/year during the late 1860s (Humphrey and Stanislaw 1979), and the U.S. per capita supply of wood and coal reached nearly 4 t of coal equivalent during the same time (Schurr and Netschert 1960), less than 10% of these relatively large flows was converted into space and cooking heat, light, and motion.

  The Age of Synergy changed all of that, and the United States was the trendsetter. Expansion of the country’s industrial production and the growth of cities demanded more coal, and in turn, industrial advances provided better means to extract more of it more productively. Internal combustion engines created a potentially huge market for liquid fuels, and newly introduced electricity generation was ready to use both coals and hydrocarbons (as well as water power) in order to satisfy a rapidly rising demand for the most convenient form of energy. Deviation-amplifying feedbacks of these developments resulted in an unprecedented increase of primary energy consumption, the category that includes all fossil fuels, hydroelectricity, and all biomass energies. Its U.S. total rose more than fivefold during the two pre-WWI generations, but the country’s rapid population growth, from about 36 million people in 1865 to just more than 97 million in 1913, reduced this to less than a twofold (1.8 times) increase in per capita terms, a rise that prorates to annual growth of merely 1.2%.
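
  The per capita figure is simply the total multiple deflated by population growth and then annualized over the 48 years between 1865 and 1913. A quick check of the arithmetic (a sketch, taking the “more than fivefold” rise as roughly 5-fold):

      total = 5.0                 # assumed ~fivefold rise in primary energy
      population = 97 / 36        # ~2.7-fold population growth, 1865-1913
      per_capita = total / population              # ~1.9, "less than twofold"
      annual = (1.8 ** (1 / 48) - 1) * 100         # the quoted 1.8x multiple
      print(f"{per_capita:.1f}x per capita, {annual:.1f}% a year")  # ~1.9x, ~1.2%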

  This was a significant but hardly spectacular rise—and a misleading one to cite, because the simple quantitative contrast hides the great qualitative gains that characterized energy use during the Age of Synergy. What matters most is not the total amount of available energy but the useful share that actually provides desired energy services. Substantial post-1870 improvements in this share came from a combination of better performance of traditional conversions and the introduction of new prime movers and new energy sources. A few sectoral comparisons reveal the magnitude of the gains that accompanied this epochal energy transition.

  Higher efficiencies in household heating and cooking required no stunning inventions, merely better stove designs using inexpensive steel and the large-scale replacement of wood by coal. Heat recirculation and the tight structures of the new coal, or multifuel, stoves of the early 20th century commonly raised efficiencies 40–50% above the designs of the 1860s. Typical efficiency of new large stationary steam engines rose from 6–10% during the 1860s to 12–15% after 1900, a gain of 50% or more, and when small steam engines were replaced by electric motors, the overall efficiency gain was typically more than fourfold.

  Because of transmission (shafting and belting) losses, only about 40% of the power produced by a small steam engine (having 4% efficiency) would do useful work (Hunter and Bryant 1991), and another 10% of available power would be wasted in accidental stoppages. Useful mechanical energy was thus only about 1.4% (0.04 × 0.4 × 0.9) of coal’s energy content. Despite the relatively poor performance of early electricity generation (efficiencies of no more than 10% for a new plant built in 1913) and 10% transmission losses, a medium-sized motor (85% efficient) whose shaft was directly connected to drive a machine had an overall energy efficiency of nearly 8% (0.1 × 0.9 × 0.85). Coal-generated electricity for a medium-sized motor in the early 1910s thus supplied at least five times as much useful energy as did the burning of the same amount of fuel in a small steam engine of the 1860s.
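
  The chain multiplications are easy to verify (a sketch; the factors are those quoted in the paragraph):

      steam = 0.04 * 0.4 * 0.9        # engine x shafting/belting x uptime ~ 1.4%
      electric = 0.10 * 0.9 * 0.85    # plant x transmission x motor ~ 7.7%
      print(f"steam {steam:.1%}, electric {electric:.1%}, "
            f"gain {electric / steam:.1f}x")   # ~5.3x: "at least five times"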

  Installing internal combustion engines in place of small steam engines would have produced at least two to three times as much useful energy from the same amount of fuel, and post-1910 efficiencies of steam turbines (the best ones surpassed 25% by 1913) were easily three times as high as those of the steam engines of the 1860s. Higher energy efficiencies also made possible the enormous expansion of American ironmaking (nearly 30-fold between 1865 and 1913). By 1900 coke was finally dominant, and its production conserved two-thirds of the energy present in the charged coking coal (Porter 1924). The shift from the charcoal of the 1860s to the coke of 1913 brought a 50% gain in energy efficiency, and better designs and heat management nearly halved the typical energy intensity of blast furnaces, from about 3 kg of coal equivalent per kilogram of pig iron in the 1860s to about 1.6 kg by 1913 (Smil 1994).
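
  Multiplying the two gains shows how they compound (a sketch of the arithmetic, using only the figures just quoted):

      coke_shift = 1 / 1.5      # a 50% efficiency gain cuts fuel use by a third
      furnace = 1.6 / 3.0       # blast-furnace intensity nearly halved
      print(f"remaining energy cost per unit of pig iron: "
            f"{coke_shift * furnace:.0%}")     # ~36%, i.e. a cut of ~two-thirds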

  This means that the overall energy costs of American pig iron production were reduced by about two-thirds. Finally, a key comparison illustrating the impressive efficiency gains in lighting: candles converted just 0.01% of paraffin’s chemical energy into light, and illumination by coal gas (average yields of around 400 m3/t of coal, and typical luminosity of the gas at about 200 lm) turned no more than 0.05% of coal’s energy into light. By 1913 tungsten filaments in inert gas converted no less than 2% of electricity into light, and with 10% generation efficiency and 10% transmission losses, the overall efficiency of the new incandescent electric lighting reached 0.18% (0.1 × 0.9 × 0.02), still a dismally low rate but one nearly four times higher than that of the gas lighting of the 1860s!
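
  The lighting chain works the same way (a sketch; values as quoted above):

      candle = 0.0001                   # 0.01% of paraffin's energy to light
      gas = 0.0005                      # 0.05% for coal-gas lighting
      electric = 0.10 * 0.9 * 0.02      # generation x transmission x lamp
      print(f"electric {electric:.2%} vs. gas {gas:.2%}: "
            f"{electric / gas:.1f}x")   # 0.18% vs. 0.05%, ~3.6x: "nearly four times"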

  Information available about pre-WWI sectoral energy consumption is not detailed enough to yield an accurate weighted average of the overall efficiency gain. But my very conservative calculations, using the best available disaggregations of final electricity use and the composition of prime movers, show that there was at least a twofold improvement in energy conversion in the U.S. economy between 1867 and 1913. America’s effective supply of commercial energy thus rose by an order of magnitude (at least 11-fold) during the two pre-WWI generations, and average per capita consumption of useful commercial energy roughly quadrupled.
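
  The 11-fold figure follows from compounding the rise in primary supply with the efficiency gain, and the per capita quadrupling from deflating it by population growth (a sketch; the 5.5-fold primary multiple is my assumption, consistent with the “more than fivefold” rise cited earlier):

      useful = 5.5 * 2.0                 # primary supply x efficiency gain ~ 11x
      per_capita = useful / (97 / 36)    # deflated by ~2.7x population growth
      print(f"{useful:.0f}x useful energy, "
            f"{per_capita:.1f}x per capita")    # 11x total, ~4.1x per capita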

  Such efficiency gains were unprecedented in history, and such rates of useful energy consumption provided the foundation for the country’s incipient affluence and for its global economic dominance. In 1870 the United States consumed about 15% of the world’s primary commercial energy and the country’s output accounted for roughly 10% of the world’s economic product; by 1913 the respective shares were about 45% and 20% (Maddison 1995; UNO 1956; Schurr and Netschert 1960). This means that the average energy intensity of U.S. economic output rose during that period, an expected trend given the enormous investment in urban, industrial, and transportation infrastructures. Similar trends could be seen in the energy intensities of the Canadian and German economies.
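
  The rise in relative energy intensity can be read directly off those world shares (a sketch): a country’s intensity relative to the world average is its share of world energy divided by its share of world product.

      intensity_1870 = 0.15 / 0.10      # 1.5x the world average
      intensity_1913 = 0.45 / 0.20      # 2.25x the world average
      print(f"intensity rose {intensity_1913 / intensity_1870:.1f}x")  # ~1.5x, +50%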

  No less important, this transition was coupled with qualitative improvements of the energy supply. As noted in chapter 1, commercial energies began supplying more than half of the world’s energy use sometime during the early 1890s; for the United States that milestone was reached during the early 1880s, and by 1914 less than 10% of the country’s primary energy came from wood, while about 15% of America’s fossil energies came from crude oil and natural gas. The United States pioneered the transition from coal to hydrocarbons, driven both by the rapid diffusion of a new prime mover (power installed in internal combustion engines surpassed that in all other prime movers before 1920) and by the higher quality and greater flexibility of liquid fuels. Crude oil’s energy density is nearly twice that of good steam coal (42 vs. 22 GJ/t), and the fuel and its refined products are easily transported, stored, and used in any conceivable conversion, including flying.

  At the beginning of the 20th century, the oil resources of the pioneering fields of Pennsylvania (1859), California (1861), the Caspian Sea (Baku 1873; figure 6.11), and Sumatra (1885) were augmented by major new discoveries in Texas and in the Middle East (Perrodon 1981). On January 10, 1901, the Spindletop well southwest of Beaumont gave the first sign of Texas’s oil production potential; at the end of the 20th century, the state still produced a fifth of America’s oil (and more came from its offshore fields in the Gulf of Mexico). The first giant oilfield in the Middle East was discovered on May 25, 1908, at Masjid-i-Suleiman in Iran; 90 years later the region was producing 30% of the world’s crude oil and held 65% of all petroleum reserves (BP 2003).

  FIGURE 6.11. Wooden structures of oil wells in Baku, one of the principal centers of early crude oil production. Reproduced from The Illustrated London News, June 19, 1886. More than a century later, the still considerable untapped oil reserves of the Caspian Sea are, once again, a center of international attention.

 
