Still the Iron Age


by Vaclav Smil


  Chapter 2

  Rise of Modern Ferrous Metallurgy, 1700–1850

  Coke, Blast Furnaces, and Expensive Steel

  Abstract

  During the first decades of the eighteenth century standard metallurgical setups (blast furnaces located near iron ore deposits and close to wooded areas with streams), procedures (charcoal-fueled smelting producing pig iron, most of which was converted to wrought iron in forges), and products (cast iron, wrought iron, small shares of steel) differed little from the practices that prevailed for most of the seventeenth century. Specifically, there was nothing unique about the practices of the British iron industry except for the fact that the country had to rely more on imports of Swedish iron. In many countries this pattern of pig iron production persisted for another century—but by 1800 the British iron industry was the world’s undisputed leader thanks to a combination of technical advances that made it more productive and more efficient and brought an unprecedented expansion of aggregate output. The first half of the nineteenth century brought fewer major innovations, but the key technical advance of the period, the introduction and widespread adoption of hot blast, had tremendous impacts on both the efficiency and the productivity of blast furnaces.

  Keywords

  Pig iron production; transition to coke; larger furnaces and hot blast; wrought iron; puddling process

  During the first decades of the eighteenth century standard metallurgical setups (blast furnaces located near iron ore deposits and close to wooded areas with streams), procedures (charcoal-fueled smelting producing pig iron, most of which was converted to wrought iron in forges), and products (cast iron, wrought iron, small shares of steel) differed little from the practices that prevailed for most of the seventeenth century (Fig. 2.1). Specifically, there was nothing unique about the practices of the British iron industry except for the fact that the country had to rely more on imports of Swedish iron. In many countries this pattern of pig iron production persisted for another century—but by 1800 the British iron industry was the world’s undisputed leader thanks to a combination of technical advances that made it more productive and more efficient and brought an unprecedented expansion of aggregate output. The first half of the nineteenth century brought fewer major innovations, but the key technical advance of the period, the introduction and widespread adoption of hot blast, had tremendous impacts on both the efficiency and the productivity of blast furnaces.

  Figure 2.1 Steps in charcoalmaking illustrated in L’Encyclopédie ou Dictionnaire Raisonné des Sciences, des Arts et des Métiers (1751–1780).

  Evans, Jackson, and Rydén (2002) correctly note that the term iron industry is anachronistic when describing the activities of the eighteenth century: the proper English term was iron trade, a chain of activities stretching from smelting to the manufacturing of iron items and involving labor ranging from forge masters and slitting mill proprietors to international merchants and artisanal workers who turned the metal into an increasing variety of products. Only a small share of pig iron was used for direct casting to produce cannons, cannonballs, and shot as well as cooking pots; most of the pig iron was converted by forging into bar iron that was either turned into nails and spikes or fashioned by smiths into a wide range of tools and implements, including such ubiquitous objects as hinges, locks, springs, knives, sickles, and scythes.

  European and British Ironmaking before 1750

  We have no reliable Chinese or Indian production data for the eighteenth century, but European aggregates are good enough to state with certainty that Sweden and Russia were the leading producers of pig iron, that those two nations were also the dominant exporters of bar iron, and that England and Wales were its leading importers. Competitively priced Swedish exports went initially through Danzig, after 1620 they were destined mostly for Dutch ports, and from the middle of the seventeenth century England became their leading buyer (Evans, Jackson, & Rydén, 2002). English imports reached nearly 14,000 t in 1700, when they amounted to about 80% of all bar iron brought to England, rose to just over 20,000 t by 1735, and then until the 1750s were close to the total domestic output of roughly 20,000 t of iron a year (Harris, 1988). Total Swedish exports averaged 42,500 t/year during the 1740s and rose to 48,000 t/year during the 1790s, still more than the iron output of England and Wales.

  But by that time Russia had replaced Sweden as the principal exporter, supplying nearly two-thirds of all imported bar iron during the 1780s, with Swedish shipments retaining the high-quality market in Sheffield (Rydén & Ågren, 1993). During the second half of the seventeenth century Russia was an importer of Swedish iron, but this dependence was rapidly eliminated by the construction of large ironworks in the middle Urals region (with Nizhny Tagil as the center) during the reign of Peter I. Small-scale iron production by peasants had been present in the region for decades, but between 1701 and 1730, 33 large ironworks (13 of them state-owned) were completed, laying the foundations for Russia’s emergence as the world’s largest producer of pig iron later in the century (Minenko et al., 1993).

  The best iron came from the works owned by the Demidov family. Their ironmaking began with Nikita Demidov (1656–1725) in Tula. It then moved to the Urals, where it was greatly expanded by his son Akinfiy Nikitich Demidov (1678–1745). The works were then inherited by Akinfiy’s son, Nikita Akinfyevich Demidov (1724–1789), and their brand of cast iron (Staryi sobol, Old Sable) was preferred by English importers (Hudson, 1986). Cast iron output of the Urals region rose from less than 10,000 t in 1725 to nearly 123,000 t in 1800, and Russia’s total pig iron output during the last decade of the eighteenth century was just over 200,000 t a year, at least 20% ahead of UK production (King, 2005). Aggregate production data for non-European countries are just estimates, but it appears that in 1750 both Chinese and Indian cast iron production was similar in volume to Russian smelting, and that worldwide production of liquid iron was on the order of 800,000 t (Pacey, 1992).

  Substantial exports of Russian bar iron to the United Kingdom started before 1720 and peaked before the century’s end: in 1794 Russia sold 63,600 t of bar iron abroad (Minenko et al., 1993). The best reconstruction of the iron trade in early modern England and Wales shows three great reversals (King, 2005). In 1500 domestic output was only about 20% of total consumption; it surpassed imports of bar iron, mainly from Spain, by 1550, and a century later it accounted for more than 80% of total use. But by 1690 imports, overwhelmingly from Sweden, surpassed domestic output, and until the time they peaked (at nearly 46,000 t in 1770) they supplied between half and two-thirds of total consumption. Imports from Sweden dominated until 1760, and imports of Russian iron from the Urals (first moved by boats on rivers and canals to Sankt Peterburg) peaked during the 1770s, when England covered two-thirds of its iron demand with foreign metal (King, 2005). The fame of Sheffield steel products rested to a large extent on bar iron from the Urals.

  Reconstruction of English pig iron production shows nearly two centuries of stagnating or only very slowly rising output: annual rates approached 10,000 t before the end of the sixteenth century, were only marginally higher a century later, and were no higher than 25,000 t by 1750. Only the removal of the charcoal ceiling on English pig iron production brought a rapid decline of imports (from nearly 50% of consumption in 1795 to less than 10% by 1810) and a massive expansion of iron output (at 2.5 Mt in 1850 it was 100 times higher than in 1750). And there were two other factors besides the switch from charcoal to coke (first used in 1709, with extensive conversion only after 1750): the switch from water to steam power, beginning in 1776 when the first steam engines patented by James Watt (1736–1819) began commercial operation (Fig. 2.2), and the adoption of coal-fired refining of the metal, beginning in 1784 when Henry Cort (1741–1800) received his patent for the puddling and rolling process.

  Figure 2.2 James Watt’s steam engine. Drawing from John Farey’s A Treatise on the Steam Engine (1827).

  Reliance on two renewable flows of energy—on wood-based charcoal and stream flows—imposed fundamental restrictions on the capacities and productivities of early eighteenth-century blast furnace operations (Fig. 2.3). Although the efficiency of iron smelting kept on improving, overall capacities of charcoal-based metallurgy remained inherently limited by access to wood, and capacities of individual furnaces were limited by charcoal’s structural properties. Water flows were the other key restriction: annual smelting campaigns in Britain were limited to about 30 weeks between October and May because summer water flows were commonly inadequate to power the bellows. Moreover, even with full stream flows the maximum power of commonly deployed early eighteenth-century wheels (with diameters up to 7 m in 1700 and 12 m by 1750, and with power ratings of up to 7 kW, an equivalent of more than nine horses, by the mid-eighteenth century) could not provide an optimum blast for large charcoal-fueled furnaces, and the use of coke required an even stronger blast.
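
  To put the cited wheel ratings into perspective, here is a minimal arithmetic check (a sketch in Python; it assumes the modern definition of 1 horsepower = 746 W, and the campaign-energy figure assumes, hypothetically, continuous operation of the bellows drive):

```python
# A minimal check of the waterwheel figures above: a sketch assuming the
# modern definition of 1 horsepower = 746 W (draft horses sustain somewhat
# less, so "more than nine horses" is, if anything, conservative).
wheel_power_w = 7_000                      # mid-eighteenth-century wheel rating cited above

print(f"{wheel_power_w / 746:.1f} horse-equivalents")        # ~9.4

# Energy delivered over a 30-week October-May campaign, assuming
# (hypothetically) continuous operation of the bellows drive.
campaign_hours = 30 * 7 * 24               # 5040 h
print(f"~{wheel_power_w / 1000 * campaign_hours:,.0f} kWh per campaign")  # ~35,000 kWh
```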

  Figure 2.3 Cross-section of an eighteenth-century blast furnace. Engraving from L’Encyclopédie ou Dictionnaire Raisonné des Sciences, des Arts et des Métiers (1751–1780).

  During the eighteenth century ore smelting faced no wood limits in richly forested Sweden or Russia, and hence their charcoal-fueled iron output kept on rising, but the availability of suitable wood was a limiting factor in the further expansion of the English iron industry. Hammersley (1973) estimated that the maximum countrywide harvest of wood for charcoal would have been on the order of 1 Mt/year, but actual demand never surpassed that rate because English and Welsh ironmasters were producing less metal annually during the first four decades of the eighteenth century than they did between 1600 and 1640 (King, 2005).

  By 1720 the annual output of 60 British furnaces reached 17,000 t of pig iron and (at about 40 kg of wood per kg of metal) it required about 680,000 t of charcoaling wood. An additional 150,000 t of wood were needed to forge iron bars, and 830,000 t of wood coming from coppicing would have required nearly 1,700 km² of trees, the equivalent of a square with sides of almost 41 km. Such a demand could have been supported in perpetuity by properly managed plantings and, indeed, English charcoal prices remained steady during the first half of the eighteenth century. But, as I have just shown, that was possible only because domestic charcoal-fueled smelting supplied less than half of total metal consumption, as imports from Sweden and Russia became more prominent.
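
  For readers who want to retrace this arithmetic, a minimal sketch follows (the implied sustainable coppice yield of roughly 5 t/ha a year is my inference from the chapter’s own numbers, not a figure stated in the text):

```python
# A sketch retracing the wood-demand arithmetic for 1720; the coppice yield
# on the last line is inferred from the chapter's own numbers, not cited.
import math

pig_iron_t   = 17_000    # annual output of 60 British furnaces (cited)
wood_ratio   = 40        # kg of charcoaling wood per kg of metal (cited)
forge_wood_t = 150_000   # additional wood needed to forge bar iron (cited)

smelt_wood_t = pig_iron_t * wood_ratio       # 680,000 t
total_wood_t = smelt_wood_t + forge_wood_t   # 830,000 t

area_km2 = 1_700                             # coppiced area cited in the text
print(f"square side: {math.sqrt(area_km2):.0f} km")                         # ~41 km
print(f"implied yield: {total_wood_t / (area_km2 * 100):.1f} t/ha a year")  # ~4.9
```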

  Charcoal is an excellent fuel: its energy content is almost identical to that of coke, and hence it generates the heat required for blast furnace reactions; its combustion also releases CO, the gas that reduces iron oxides to hot metal. But besides generating heat and reducing gas, charcoal and coke also have key physical functions in blast furnace operation: to support the burden column and to create the permeability that allows the ascent of heat and reducing gases and the downward flow of slag and metal; fundamentally, a blast furnace is a counter-current reactor. But commonly used metallurgical charcoal cannot support heavy burdens because it is a rather friable material that cannot maintain open spaces under heavier loads of ore and limestone and would eventually be crushed to dust, making iron smelting impossible.
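
  The reduction chemistry invoked here is standard textbook material; as a summary (not a scheme given in this chapter), carbon burns at the tuyeres and the resulting CO strips oxygen from the descending ore in steps:

```latex
% Standard blast furnace chemistry (a textbook summary, not from this chapter):
% carbon burns to CO at the tuyeres, and CO reduces the ore stepwise as the
% burden descends against the ascending gas stream.
\begin{align*}
2\,\mathrm{C} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{CO}\\
3\,\mathrm{Fe_2O_3} + \mathrm{CO} &\longrightarrow 2\,\mathrm{Fe_3O_4} + \mathrm{CO_2}\\
\mathrm{Fe_3O_4} + \mathrm{CO} &\longrightarrow 3\,\mathrm{FeO} + \mathrm{CO_2}\\
\mathrm{FeO} + \mathrm{CO} &\longrightarrow \mathrm{Fe} + \mathrm{CO_2}
\end{align*}
```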

  There are many modern methods used to measure the strength of coke at different temperatures (Yamazaki, 2012), but for the most relevant comparison with charcoal we have to look at the compressive strength of the two fuels: good charcoal made from solid wood has a compressive strength of about 4 MPa, compared to 15 MPa for typical metallurgical coke at 1000°C, with the value declining to about 12 MPa at 1600°C (Emmerich & Luengo, 1996; Haapakangas et al., 2011). This difference explains why the height of charcoal-fueled blast furnaces was no more than about 8 m, with annual metal output averaging only about 300 t (and exceptionally about 700 t) per furnace (Sexton, 1897). Consequently, the only way to increase aggregate output was to build more furnaces.
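
  A crude way to see the consequence of this strength gap is to assume, purely for illustration, that the tolerable stack height scales linearly with the fuel’s compressive strength; real furnace design involves many more constraints, so this is only an order-of-magnitude sketch:

```python
# Purely illustrative scaling: assume (simplistically) that tolerable stack
# height grows linearly with the fuel's compressive strength. Real furnaces
# face many other constraints; this only indicates the order of magnitude.
charcoal_strength_mpa = 4    # good solid-wood charcoal (cited)
coke_strength_mpa     = 15   # typical metallurgical coke at 1000°C (cited)
charcoal_stack_m      = 8    # practical height limit of charcoal furnaces (cited)

coke_stack_m = charcoal_stack_m * coke_strength_mpa / charcoal_strength_mpa
print(f"~{coke_stack_m:.0f} m")   # ~30 m, the order of height reached by later coke furnaces
```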

  British Transition to Coke

  Adoption of metallurgical coke for iron smelting was definitely one of the greatest technical innovations of the modern era: it severed the dependence on wood, opened the way toward a huge growth of furnace capacities and a multiplication of annual outputs, and freed smelting locations from proximity to streams able to power furnace bellows. There were several reasons why the replacement of charcoal by coke in English and Welsh furnaces was a rather protracted affair. Initially it was the easy British access to affordable imports of Baltic iron (although Russian smelting was done largely in the Urals, the shipments came via Sankt Peterburg), and the first commercial uses of the fuel indicated that coke-based smelting was not financially attractive (Harris, 1988; Hyde, 1977).

  Coke was first used in England during the early 1640s for drying malt (a task that could not be done with coal, as its combustion produced copious particulate and sulfur emissions), and unsuccessful attempts at its use (and also of coal and peat) in metal smelting took place during the latter half of the seventeenth century, but it was only in 1709 that Abraham Darby (1678–1717) became the lone pioneer of iron ore smelting with coke. Hyde (1977) offered a convincing explanation of why English ironmasters of the first half of the eighteenth century did not follow Darby’s example (his two furnaces in Coalbrookdale used coke exclusively after 1720, and one at Willey used only coke from 1733) before the early 1750s.

  Although some 25 charcoal-fueled furnaces were closed between 1720 and 1755, the aggregate output of charcoal-smelted iron rose from nearly 19,000 to almost 25,000 t during those 35 years. The reason was neither any secrecy surrounding Darby’s innovation nor an inferior quality of coke-smelted iron, but significantly higher operating costs of coke-fueled furnaces combined with no major difference in the capital cost of new furnaces. Hyde (1977) calculated that the operating costs of the two processes may have become equal by the late 1730s, but because of the large amount of coke consumed the overall costs favored charcoal furnaces until the early 1750s.

  Darby and his successors were able to make coke-based smelting profitable “in spite of higher costs of the new process because they received higher than average revenues from a new by-product of coke pig iron—thin-walled castings” (Hyde, 1977, 40). This technique, patented in 1707, before Darby began his coke smelting, benefited from the higher fluidity of Si-rich coke-smelted iron, which could be used to produce much thinner pots (with half as much mass as those made of charcoal-smelted iron) and with fewer defects. Moreover, Hyde (1977) also concluded that making bar iron from coke pig iron was more expensive than making it from charcoal cast iron because the former liquid metal contained more silicon.

  King (2011) revisited Hyde’s (1977) explanations, and his detailed examination of the Coalbrookdale business records (extant in four account books) confirmed the conclusion regarding the costs of pig iron smelted with coke but found that the same argument did not apply to the production of bar iron. The account books show enormous coke consumption in the Coalbrookdale furnaces during the 1720s and its gradual decline during the 1730s. But the accounts, and comparisons with other forges, show that the poor performance of the Coalbrookdale enterprise was not due to inherent problems with coke-smelted pig iron but rather to the demonstrable fact that it was a small and inefficiently run operation.

  The delay in widespread coke adoption was thus largely a matter of bar iron prices: “Whatever technical difficulties existed in the use of coke pig iron in forges in the early 1720s, these were evidently overcome by the end of that decade, but the depressed state of the iron trade discouraged the introduction to the market of coke-smelted forge pig iron, until the industry benefited from an economic upturn in the 1750s. That upturn can in part be attributed to the Swedish limitation on their iron production, which began a few years earlier” (King, 2011, 154). English producers responded almost immediately by building new coke-fueled furnaces after the mid-1750s: nearly 30 coke-based furnaces were built between 1750 and 1770, and their share of pig iron output rose from just 10% to 46% (King, 2005).

  This was an epochal change: from dependence on a resource that was renewable but already in short supply in many regions, and whose maximum realistic exploitation could not support the future expansion of iron production, to dependence on a nonrenewable fuel that could be produced inexpensively from abundant coal deposits and whose output could be scaled up to meet any foreseeable expansion of the iron industry. And the substitution removed the pressure on continental forests: Madureira (2012) calculated that in 1820 some 52% of Belgium’s forested area was used to produce metallurgical charcoal, and that even in much larger and much more forested France and Sweden the shares were still about 15% by 1840.

  The impossibility of long-term reliance on charcoal is easily illustrated with relevant calculations for the exceptionally wood-rich United States. US nationwide iron output statistics began in 1810, when the smelting of 49,000 t of pig iron consumed (assuming an average rate of 5 kg of charcoal, or at least 20 kg of wood, per kg of hot metal) about 1 Mt of wood. Even if all that wood had come from natural old-growth hardwood forests storing around 250 t/ha (Brown, Schroder, & Birdsey, 1997), and even if all above-ground phytomass were used in charcoaling, an area of nearly 4,000 ha (a square with a side of about 6.3 km) would have had to be cleared every year to sustain that level of production. Rich US forests could support an even higher rate, and by 1840 all US iron was still smelted with charcoal. But a subsequent rapid switch meant that coke energized nearly 90% of iron production by 1880, and further increases in iron output could not have been based on charcoal: in 1910, with iron output at 25 Mt, and even with much reduced charges of 1.2 kg of charcoal and 5 kg of wood per kg of hot metal, the country would have required 125 Mt of wood a year.
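
  The same arithmetic can be retraced directly (a sketch using only the rates cited above; the 1910 figure is the counterfactual charge from the text):

```python
# A sketch retracing the US charcoal arithmetic for 1810 and the 1910
# counterfactual, using only rates cited in the text.
import math

wood_1810_t = 49_000 * 20                # >= 20 kg of wood per kg of hot metal: ~1 Mt
area_ha     = wood_1810_t / 250          # old-growth stock of ~250 t/ha (cited)
side_km     = math.sqrt(area_ha / 100)   # 100 ha per km2
print(f"{wood_1810_t:,} t of wood, {area_ha:,.0f} ha, square side {side_km:.1f} km")

wood_1910_mt = 25 * 5                    # 25 Mt of iron at 5 kg of wood per kg
print(f"{wood_1910_mt} Mt of wood a year")   # 125 Mt
```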

 
