Still the Iron Age

by Vaclav Smil


  Huntsman did not patent his process, and it was soon copied by other producers. Although the production of crucible steel, led by Sheffield enterprises, expanded during the latter half of the eighteenth and the first half of the nineteenth century, the resulting output was still used for relatively small items of artisanal provenience: as before, for expensive hand weapons, and increasingly for razors, cutlery, watch springs, and metal-cutting tools whose quality and dependability justified higher prices. That began to change rapidly once Bessemer introduced his method of large-scale steelmaking.

  Chapter 3

  Iron and Steel Before WW I, 1850–1914

  The Age of Affordable Steel

  Abstract

  The four key steps that raised the output and improved the efficiency of coke-fueled blast furnaces and resulted in a design that dominated for generations to come—introduction of hot blast, capping of the furnace top, freeing of the hearth to emplace more tuyères, and redesign of typical furnace contours—were all in place by 1850. But the second half of the nineteenth century saw gradual but continuing gains in performance as well as the first momentous passing of technical and market leadership from one country to another, as the United States surpassed the United Kingdom in the production of pig iron and in innovative improvements in ferrous metallurgy, and as Germany’s iron industry developed rapidly and its output also surpassed (in 1892) the British level.

  Keywords

  Blast furnaces; Bessemer steel; open hearth furnaces; regenerative/basic steelmaking; new markets for steel (in agriculture; energy industry; transportation; construction); reinforced concrete

  The four key steps that raised the output and improved the efficiency of coke-fueled blast furnaces and resulted in a design that dominated for generations to come—the introduction of hot blast, capping of the furnace top, freeing of the hearth to emplace more tuyères, and redesign of typical furnace contours—were all in place by 1850. The second half of the nineteenth century saw gradual but continuing gains in performance as well as the first momentous passing of technical and market leadership from one country to another as the United States surpassed the United Kingdom in the production of pig iron and in innovative improvements in ferrous metallurgy and as Germany’s iron industry developed rapidly and its output also surpassed (in 1892) the British level (Hogan, 1971; McCloskey, 1973; Paskoff, 1989; Wengenroth, 1994).

  Rising demand and an improving ability to meet it with larger and more efficient operations led to the doubling of the world’s pig iron output between 1850 and 1870 (to 10 Mt/year); a relative slowdown during the next three decades brought the total to nearly 30 Mt (28.3 Mt) in 1900, and then another acceleration of growth pushed the worldwide total to 79 Mt by 1913 (Kelly & Matos, 2014). But, for the first time in history, this rising production of pig iron was destined primarily neither for the casting of objects (ranging from pots to cannons) nor for conversion to wrought iron by laborious puddling but for making steel, first in Bessemer converters and later in open-hearth furnaces.

  Steel, the most desirable group of ferrous alloys, traditionally available only in limited quantities and reserved for a small range of special uses, came to be produced on a large scale and at affordable cost. Iron had been used by traditional societies for millennia, but only this inexpensive steel turned the metal into a ubiquitous material that could be used in much larger quantities for many old applications and that rapidly found new markets. Of these, railways and shipping were the first beneficiaries of affordable steel, but within a few decades steel also became an important building material, and before WW I it also played indispensable roles in the rise of new energy industries (oil and gas, electricity generation) and new modes of transportation (internal combustion engines, cars, and diesel-powered ships).

  Blast Furnaces

  The most important advance of the 1850s was the introduction of regenerative brick air-heating stoves by Edward Alfred Cowper (1819–1893), whose numerous inventions and improvements also included better manufacture of candles, the writing telegraph, and the modern bicycle wheel (wire-spoke suspension with a rubber tire). His 1857 patent provided for heating of air under pressure in a brick-lined regenerator, and the heat source could be a separate fireplace or gas directed from the blast furnace (Hartman, 1980). The first stove was installed in 1859, and Cowper made design improvements for the next three decades. In 1865, when he presented his invention to the annual meeting of the British Association for the Advancement of Science, he noted that a pair of his hot-blast stoves had worked very satisfactorily, resulting in 20% higher output and higher metal quality while saving up to 250 kg of coke per tonne of iron when the blast was at 620 °C (Cowper, 1866).
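
  A rough check of that coke saving, assuming an energy density of metallurgical coke of about 29 MJ/kg (a standard value, not one given in the text):

  \[
  250\ \text{kg coke/t} \times 29\ \text{MJ/kg} \approx 7.3\ \text{GJ per tonne of iron,}
  \]

  a nontrivial share of the roughly 50–100 GJ/t that smelting required in the second half of the century (see the British data later in this chapter).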

  Cowper’s was not a primary invention but rather a rewarding application of the regenerative principle invented by Carl Wilhelm Siemens (see the next section). Regenerators were

  enclosed in an iron casing lined with firebrick, and provided with valves to allow of the passage of ignited gas through the stove to heat it, and valves to allow the entrance of the cold blast and exit of the hot blast. The stoves are heated by the combustion of gas obtained from gas producers … but recent experiments have been made, with the view of separating the gas from the top of the blast-furnace, from the dust it commonly contains, so that such gas may be conveniently used …. (Cowper, 1866, 177)

  Cowper found that cleaning of the blast furnace gas was not difficult and that burning this CO-rich stream (containing 3.4–3.7 MJ/m³) to produce heat in regenerative stoves provided a further boost to overall smelting efficiency. This practice soon became universal as tall columnar structures of Cowper’s stoves, rivalling their adjacent blast furnaces in height, became an integral part of all iron-smelting operations. This innovation helped England’s ironmasters to maintain their technical leadership as they operated the world’s most advanced blast furnaces, concentrated in the country’s northeast (Yorkshire, Durham, Northumberland) and charged with the Cleveland iron ores discovered in 1851 (Allen, 1981).
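
  The stated heating value also implies how dilute this gas was. Assuming pure CO at about 12.6 MJ/m³ (a standard figure, not from the text):

  \[
  \frac{3.4\ \text{to}\ 3.7\ \text{MJ/m}^3}{12.6\ \text{MJ/m}^3} \approx 0.27\text{ to }0.29,
  \]

  that is, roughly 27–29% CO by volume, with the balance mostly nitrogen and CO₂: a dilute fuel, but free and perfectly adequate for firing the stoves.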

  These furnaces combined all the latest technical advances (including caps, waste gas reuse, and vertical blowing engines), and their height and hot blast made a few of them undisputed world record holders. The tallest Cleveland furnaces were significantly taller than those elsewhere: the new ones built in the late 1870s were 25.5 m tall, compared to no more than 18 m in other parts of the United Kingdom and less than 20 m in Germany, while Lucy, America’s tallest blast furnace, built in 1872, reached 22.5 m. The Cleveland furnaces, relying on regenerative brick stoves, operated with exceptionally high blast temperatures, from at least 540 °C to as much as 760 °C. During the late 1870s and the early 1880s technical leadership passed to Pennsylvanian smelters led by Carnegie’s Edgar Thomson Works (established in 1872 in North Braddock on the Monongahela southeast of Pittsburgh), whose hearths were more than 50% larger than those of typical British blast furnaces and whose operating pressures were twice as high.

  Lucy, blown-in in 1878, had an internal volume of 431 m³ and a daily output just over 100 t; the Edgar Thomson A furnace (originally a charcoal-fueled furnace from Michigan, relocated in 1879 to Pennsylvania) was smaller (just 180 m³), but it operated with a record blast rate of 420 m³ a minute and used less than 1 tonne of coke per tonne of hot metal (King, 1948). In 1882 Edgar Thomson D was 24 m tall, its stack:height ratio was 0.38, and its volume surpassed 600 m³; during the 1880s blast rates were commonly close to or above 800 m³ a minute, and by 1889 the daily output of the Edgar Thomson F surpassed 300 t while coke consumption fell to less than 800 kg/t. Before WW I America’s largest furnaces continued to grow taller while their contours began to approach fairly closely the shape of a cylinder (Boylston, 1936; Fig. 3.1). The largest prewar furnace, South Works No. 9, blown-in in 1909, was 29.8 m tall with a 12.4 m stack (stack:height ratio of 0.41), but the diameter of its bosh was only 29% larger than that of its hearth.
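
  The stack:height ratios quoted here are simply the stack length divided by the overall furnace height; for South Works No. 9 the cited dimensions are consistent:

  \[
  \frac{12.4\ \text{m}}{29.8\ \text{m}} \approx 0.416,
  \]

  that is, the quoted ratio of about 0.41.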

  Figure 3.1 New large American blast furnace from the cover of Scientific American of March 8, 1902.

  But thanks to America’s resource abundance—first, plenty of wood for charcoaling and then plenty of Pennsylvania anthracite—its ironmakers lagged behind Britain in fueling their furnaces with coke. Until the late 1850s (decades after British smelting had converted to coke) they relied primarily on charcoal, but the introduction of hot blast allowed the use of anthracite, whose best Pennsylvanian kinds were nearly pure carbon. By the early 1860s 60% of US iron was smelted in anthracite-fueled furnaces, and although their share began to decline as iron production moved westward out of Pennsylvania, it was still just over 40% in 1880 before decreasing to 12% by 1900 (Hogan, 1971). Coke was dominant from 1875 onward, but before 1900 about 95% of it was produced in closed beehive ovens: circular (up to 4 m in diameter) domed (2.1 m tall) brick structures. They discharged distillation and flue gases through a central chimney, and the heat required for pyrolysis was supplied by partial combustion of the coal itself, an inefficient process that wasted about 45% of the charged fuel.

  By-product coking ovens owe their name to the capture of gases released during the coking process. They were first introduced in Europe, where Carlos Otto and Albert Hüssener offered their design in 1881, refuting the prevailing opinion that by-product recovery would produce coke of inferior quality (Hoffmann, 1953). Just a year later Otto further improved the design by adopting Gustav Hoffmann’s preheating of air with exhaust gases, and from 1884 onward Otto-Hoffmann regenerative by-product ovens lined with silica bricks, in which the chemicals and energy in waste gases are recovered while coke yields are increased, became the mainstay of modern coking (Porter, 1924). Semet-Solvay, Koppers, and Kuroda became the leading commercial producers of these ovens (Fig. 3.2). The prismatic chambers were initially about half a meter wide, about 2 m tall, and up to 10 m long, with up to several hundred of them arranged in batteries surrounded by heating and gas pipes and sealed from the air.

  Figure 3.2 Section through an early-twentieth-century Semet-Solvay by-product coke oven showing a pusher machine loading hot coke into a car. VS archive.

  Their coke yield (as the share of charged coal) is higher than that of beehive ovens (commonly by 10–15%), and they work with a variety of bituminous coals. After the completion of the coking process (lasting roughly 1 day) the red-hot fuel is pushed out mechanically and transported to a quenching tower to be cooled with water. The recovered products have a variety of industrial uses: tar, ammonia, benzol, and toluol in chemical processes; CO-rich gases as fuel; and ammonia (in the form of ammonium sulfate, (NH₄)₂SO₄) also as a fertilizer. America’s first by-product coking ovens were installed only in 1895, more than a decade after their European debut, at the Cambria Steel Company in Johnstown, PA, and by 1900 there were only three other blast furnace plants with by-product coking (Gold et al., 1984). Subsequent adoption of by-product coking by US ironmakers was rather slow: it accounted for 30% of all coke just before WW I and for more than 50% by the end of WW I, and even by 1960 the country still had more than 40 plants with some 7500 beehive ovens, accounting for 5% of total coking capacity.
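
  The fertilizer value of the recovered ammonia is easy to quantify: ammonium sulfate has a molar mass of 132 g/mol, of which nitrogen contributes

  \[
  \frac{2 \times 14}{132} \approx 21\%\ \text{by mass,}
  \]

  which made it one of the first industrially produced nitrogen fertilizers (a worked figure, not one given in the text).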

  In some countries the conversion to coke was very gradual, and charcoal use persevered for decades. In Europe, Sweden was the only major iron producer that did not switch to coke; by 1850 a quarter of the country’s wood harvest was turned into charcoal (Arpi, 1953). Charcoal remained dominant in 1900, but as the efficiency of smelting improved, the best Swedish furnaces needed as little as 0.77 kg of the fuel per kg of hot metal, only about half of the contemporary average (Greenwood, 1907). When Japan built its first modern blast furnace in 1881, it, too, was fueled with charcoal, much like the traditional tatara.

  Reliable output statistics show how rapidly American output pulled ahead of British production after the late 1880s. In 1850 America produced 572,000 t of pig iron (ten times the total in 1810), 834,000 t just before the Civil War, and, after rapid expansion during the 1870s, 4.3 Mt in 1880 (Hogan, 1971). In that year the British output was still twice as large (8.7 Mt), but then it stagnated during the 1880s while US production kept on rising. By 1889 it was 91% of the UK level, and in 1890, after growing 21% in a single year, it pulled 16% ahead of the British output (10.3 vs. 8.9 Mt); it kept this primacy for the next 80 years, until it was surpassed by Soviet production in 1971.

  America’s primacy came from the confluence of all the necessary production factors. First was abundant fuel: plenty of wood for charcoaling, then Pennsylvania’s metallurgical anthracite for direct use in blast furnaces, and high-quality bituminous coal (in half a dozen states) suitable for coking. Second was the abundant supply of rich iron ores from the Lake Superior region, first from the Marquette Range (mined since 1846), then from the Menominee, Gogebic, Vermilion, and Cuyuna ranges, and (starting in 1892) from the still-exploited Mesabi Range in Minnesota. Third was the inexpensive transport of these ores through the Great Lakes to some of the industry’s principal concentrations in the Midwest and East. Fourth was a ready supply of immigrant labor (first from Britain and Germany, after 1880 mostly from Eastern Europe) willing to work in iron and steel mills. And the fifth factor was the competitive drive (some would say the willingness to expand ruthlessly) of the industry’s leading entrepreneurs, including Andrew Carnegie (1835–1919) and Charles Schwab (1862–1939).

  By the year 1900, with 13.2 Mt, US pig iron smelting was 54% higher than the British output, which in turn was only about 5% ahead of German production (whose total also includes Luxembourg). During the last two decades of the nineteenth century German pig iron production more than tripled, becoming more than three times that of France and four times that of Russia (Campbell, 1907). German ironmaking was heavily concentrated in the Ruhr Valley, where the pioneering firms of Gutehoffnungshütte (GHH) and Friedrich Krupp (1787–1826) were joined in 1867 by the company established by August Thyssen (1842–1926; Wengenroth, 1994). Together with IG Farben (the maker of Zyklon B), the last two companies later became the best-known industrial symbols of German militarism, as their steel output enabled Germany to launch two aggressive wars just 25 years apart. The two companies merged in 1999, and ThyssenKrupp remains Europe’s leading producer of iron and steel (ThyssenKrupp, 2015).

  As already noted, global pig iron production kept on expanding during the first 13 years of the twentieth century: US output more than doubled, from 13.2 Mt in 1900 to 28.1 Mt, and German output (including that of Luxembourg) rose to 19.3 Mt, nearly twice the British production of 10.7 Mt. But regardless of their aggregate output, at the beginning of the twentieth century all the top iron-producing nations shared the basic production processes and arrangements, and hence they were able to smelt the metal with only a fraction of the energy inputs required earlier in the century. British historical statistics show that the combination of larger blast furnaces, higher blast temperatures, and more efficient conversion of coal to coke resulted in a steadily declining energy intensity of pig iron smelting, with typical rates falling from nearly 300 GJ/t in 1800 to less than 100 GJ/t by 1850 and to 50 GJ/t by 1900 (Heal, 1975). The nineteenth century was full of technical advances and efficiency gains, but few of them equaled the performance of ferrous metallurgy.
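
  The furnace records and the aggregate British statistics are broadly consistent. Assuming coke at about 29 MJ/kg and beehive coking that wasted about 45% of the charged fuel (the energy density is an assumed standard value; the loss rate is cited above), the best 1889 performance of less than 800 kg of coke per tonne implies

  \[
  0.8\ \text{t} \times 29\ \text{GJ/t} \approx 23\ \text{GJ/t at the furnace}, \qquad \frac{23\ \text{GJ/t}}{1 - 0.45} \approx 42\ \text{GJ/t of charged coal,}
  \]

  close to the roughly 50 GJ/t cited for 1900, as typical furnaces were less efficient than the record-setting Edgar Thomson units.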

  Inexpensive Steel: Bessemer Converters and Open Hearths

  By the middle of the nineteenth century steel was a well-known alloy in increasing demand, but a leading English metallurgist accurately described its restricted reach when he wrote that in 1850 “steel was known in commerce in comparatively very limited quantities; and a short time anterior to that period its use was chiefly confined to those purposes, such as engineering tools and cutlery, for which high prices could be paid without inconvenience to the customer” (Bell, 1884, 435–436). Wrought iron was used for all more massive applications, and its often inferior quality was attested to by the analysis of a wrought iron disc from the hull of the USS Monitor, the warship launched in 1862 and the prototype of future ironclads: Boesenberg (2006) found it to be a low-carbon, high-phosphorus ferrite with nearly 5% silicate slag and of overall mediocre quality. The first process that allowed inexpensive large-scale production of steel was invented independently on two continents, but only one name has become attached to it, that of Henry Bessemer (1813–1898), an English engineer and businessman (Fig. 3.3).

  Figure 3.3 Henry Bessemer. Corbis.

  Bessemer Steel

  Bessemer announced his steelmaking process in August 1856 in a paper read before the British Association in Cheltenham, and it was patented in the same year (Birch, 1968). William Kelly (1811–1888), the owner of a Kentucky ironworks, had experimented with blowing air through molten iron since 1847, eventually achieving partial success, but he filed his patent only after he heard about Bessemer’s innovation (Hogan, 1971). In 1857 the courts affirmed his priority within the United States (US patent 17,628), but the invention never acquired a hyphenated name (such as the Hall-Héroult process of aluminum smelting or the Haber-Bosch synthesis of ammonia, to note just the two most famous instances). The principle of the process patented by Bessemer and Kelly—a curious reader should consult Hogan (1971), who compares the key sentences of the two patent applications to see their shared reasoning—is easily stated.

  Molten pig iron is poured into a large pear-shaped converter lined with siliceous refractory material; the hot iron is then blasted with cold air introduced through tuyères at the converter’s bottom, and the ensuing oxidation removes carbon and reduces the content of other impurities present in the pig iron (Fig. 3.4). The cold-air blowing takes between 15 and 30 min, releasing flames and smoke from the converter’s top; the converter is then tilted and the molten steel is poured into ladles. This process converts pig iron into steel without using any additional fuel: counterintuitively, blowing cold air through the molten metal does not cool it, because the airflow oxidizes silicon and carbon in exothermic reactions that raise the temperature of the melt. But it was soon realized that the process works as intended only when using pig iron that is nearly phosphorus-free, as was the metal made from the Blaenavon iron ore used by Bessemer in his work; otherwise the process leaves both phosphorus and sulfur in the decarburized metal.
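
  The heat balance of the blow can be sketched with the two principal oxidation reactions (the standard enthalpies are approximate textbook values, not figures from this chapter):

  \[
  \mathrm{Si} + \mathrm{O_2} \rightarrow \mathrm{SiO_2} \qquad \Delta H \approx -910\ \text{kJ/mol}
  \]

  \[
  \mathrm{C} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{CO} \qquad \Delta H \approx -110\ \text{kJ/mol}
  \]

  Silicon oxidation is by far the larger heat source per mole, which is why the silicon content of the charged pig iron largely determined whether a blow reached steelmaking temperatures.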

 
