Still the Iron Age


by Vaclav Smil


  Figure 3.4 Bessemer’s converters. Corbis.

  An obvious (but expensive and impractical) solution was to use only phosphorus-free iron ores, but two better remedies were discovered by British metallurgists. Robert Forester Mushet (1811–1891) improved the quality of Bessemer steel, and hence made the process widely adoptable, by adding small quantities of spiegel iron (an alloy containing about 15% manganese) to the decarburized iron, a process he had patented in 1856. As the name implies (der Spiegel is “mirror” in German), this is a bright crystalline form of pig iron with about 8% Mn and 5% C, and because manganese has a high affinity for oxygen the addition partially deoxidizes the metal and removes the impurities in slag. Removal of phosphorus from pig iron was a more difficult challenge, solved only in the late 1870s by Sidney Gilchrist Thomas (1850–1885) and his cousin Percy Carlyle Gilchrist (1851–1935).

  The key to their success was to line the converter with hard and durable blocks of limestone, seal the joints with a combination of burned limestone or dolomite and tar, and add lime to the charge. Reaction of the basic lining and lime with acidic phosphorus oxides removed them from the molten iron into slag (Almond, 1981). The process was patented in 1878 and 1879 and made it possible to use Bessemer converters for pig iron made from poor (high-P) ores common in continental Europe. Andrew Carnegie, the founder of America’s largest nineteenth-century steel corporation and an early licensee of the process, remarked that

  these two young men, Thomas and Gilchrist of Blaenavon, did more for Britain’s greatness than all the Kings and Queens put together. Moses struck the rock and brought forth water. They struck the useless phosphoric ore and transformed it into steel … a far greater miracle. (Blaenavon World Heritage Site, 2015)

  A use was soon found for the copious phosphoric slag produced by the process: German engineers discovered that when ground to a fine powder it makes an excellent mineral fertilizer. Because of these complications it was only in the late 1870s that the Bessemer process surpassed wrought iron puddling in the United Kingdom, but the subsequent diffusion was rapid, and by 1890 converter steel claimed 80% of British and 86% of American production (Birch, 1968; Hogan, 1971). Bessemer steel not only conquered all the old markets for wrought iron but also made it possible to produce new castings and forgings (using large steam hammers), including those for offensive and defensive weapons, guns, and armor plate. The industry developed first in the United Kingdom, whose practices were soon adopted, improved, and expanded in Germany by Alfred Krupp (1812–1887), who visited Sheffield steel works in 1839, took over his father’s company, and concentrated its production first on high-quality steel for railroad uses and, after 1859, increasingly on heavy artillery (ThyssenKrupp, 2014).

  Open hearths

  But the conquest of the steel market by the Bessemer process was short-lived. The open-hearth furnace, a better alternative, became available during the late 1860s. Its basic version was rapidly adopted during the 1880s, and it remained the world’s leading steelmaking process until the 1970s. As in the case of the decarburization of pig iron in tilting converters, two inventors brought the process to commercial maturity, and, unlike in the former case, both of their names are sometimes used in a hyphenated form (Siemens-Martin), although most of the time the technique is called simply open-hearth steelmaking (and only the latter inventor’s name is usually used in France and Russia). Carl Wilhelm Siemens (1823–1883; Charles William after he acquired British citizenship in 1859) was one of a quartet of inventive German brothers whose work brought fundamental advances to several branches of modern engineering. His design of the open-hearth furnace (on which he collaborated with his brother Friedrich) became a success thanks to its heat efficiency (Jeans, 1884; Riedel, 1994).

  The Siemens furnace is a simple, shallow, saucer-shaped, brick-lined hearth, and it was charged first with cold, and later with hot, pig iron with small additions of iron ore or steel scrap. Its distinguishing feature was the channeling of hot exhaust gases through a regenerator, a chamber filled with a honeycomb mass of bricks that absorbed much of the waste heat. Once the bricks were sufficiently heated, the furnace gas was diverted to another regenerator while the fresh air needed for the furnace operation was preheated by passing through the hot chamber; once the incoming air had extracted most of the stored heat, its flow was switched to the other, freshly heated regenerator, and this alternation minimized overall heat waste. The gaseous fuel used to heat the furnace was subjected to the same process, and hence the regenerative furnace needed four chambers for alternating operations that saved up to 70% of fuel.

  The first heat-saving furnace was installed in the Birmingham glassworks in 1861 when the Siemens brothers patented the process, and by 1867 Siemens’ experiments showed that this high-temperature treatment (with 1600–1700 °C in the open hearth) would remove any impurities from the charged metal. But meanwhile Emile Martin (1824–1915), a French metallurgist, not only filed his patent in 1865 but also produced the first batches of high-quality steel. As a result, in November 1866 Siemens and Martin agreed to share the proprietary rights, and the process was first used by several British steelworks in 1869. Early open-hearth furnaces had acid hearths, and basic furnace linings were introduced in 1879 and 1880 in France (Creusot and Terrenoire plants), giving the process its generic name: regenerative/basic steelmaking.

  Unlike in the Bessemer converter with its exothermic process, the heat released during open-hearth processing was insufficient for smelting, which required an additional source of energy (coke, fuel oil, blast furnace gas, or natural gas). An open-hearth heat usually lasted about 12 h, and the hot metal was poured into a massive ladle and then into individual molds to make steel ingots. During the late 1880s open-hearth furnaces began to diffuse in the United States, and in 1890 Benjamin Talbot (1864–1947), a Yorkshireman working in Pennsylvania, introduced a tilting furnace; with alternate tapping of slag and steel, the converting process could be continuous rather than proceeding in batches. Diffusion and maturation of the technique were accompanied by increasing unit capacities: by 1900 the US steel industry had furnaces of about 30 m2, by 1914 the maximum size was nearly twice as large, and by 1944 they surpassed 80 m2, with individual heat masses rising from 40 t in 1900 to 200 t during WW II (King, 1948). The largest US open-hearth furnaces installed during the 1950s had capacities of 600 t, 550 t, and 500 t, up to 40 times the typical heat mass of Bessemer converters (Berry, Ritt, & Greissel, 1999).

  Open-hearth furnaces remained dominant for much longer than the Bessemer process: US data show that Bessemer production accounted for more than half of US steelmaking between 1870 and 1907 (peaking just short of 90% around 1880), while open hearths produced more than half of all US steel for 60 years, between 1907 and 1967, rising from just 9% of the total in 1880 to 73% by 1914 and peaking only in 1959 at about 88% (Temin, 1964). Open hearths were rapidly displaced by a combination of basic oxygen furnaces and electric arc furnaces, but a great deal of the steel they made—be it in New York’s skyscrapers built before 1960, in the massive dams of Roosevelt’s New Deal of the 1930s, or in the first stretches of the US interstates completed during Eisenhower’s presidency of the 1950s—is still with us.

  Basic oxygen furnaces were introduced commercially only during the 1950s (see the fifth chapter for details), but the electric arc furnace is the only steelmaking innovation from the pre–WW I period that is still with us. Electric arc furnaces that melt scrap metal now produce about 60% of American and almost 30% of the world’s steel (again, see the details in the fifth chapter). William Siemens experimented with electric arc furnaces during the 1870s and patented his designs between 1878 and 1879, and Paul Héroult (1863–1914), one of the two inventors of aluminum smelting, operated the first commercial units producing high-quality steel from recycled metal at the very beginning of the twentieth century (Toulouevski & Zinurov, 2010). Héroult’s furnaces had large carbon electrodes inserted into the furnace through its roof, a configuration that is still standard in modern designs. Before WW I there were more than 100 electric arc furnaces operating in Europe (particularly in Germany) and in North America, and wartime demand for steel, combined with declining prices of electricity, pushed their number to more than 1000 by 1920 (Boylston, 1936).

  Steel output

  Availability of processes for producing inexpensive steel from pig iron resulted in an increasing share of the primary metal being converted to steel. Right after the end of the Civil War only slightly more than 1% of US pig iron production was converted to steel; by 1880 the share was almost 30%, a decade later it reached nearly 50%, by 1900 it stood at 74%, and the conversion reached 100% by 1906 (Hogan, 1971; Kelly & Matos, 2014). And because of the increasing recycling of scrap metal (first in open-hearth furnaces, then in electric arc furnaces) American steel production began to surpass the country’s pig iron output: by 1913 the former was about 12% higher than the latter. Similar shifts took place in all other major steel-producing countries, in the United Kingdom, Germany, France, and Russia.

  Worldwide production of pig iron had sextupled between 1850 and 1900, from 5 Mt to more than 30 Mt, but steel output rose two orders of magnitude, from a few hundred thousand tonnes in 1850 to 28 Mt by 1900.

  While historical statistics of some basic economic inputs are only approximate, post-1870 data on steel, on the global level and for all major producers, are quite reliable and portray the relentless pace of pre–WW I expansion. Worldwide steel output rose from just half a million tonnes in 1870 to 28 Mt by 1900 and to about 76 Mt by 1913, averaging an annual compound growth rate of nearly 12%. Between the end of the Civil War and 1913, US steel production increased more than 1500 times, from only about 20,000 t to nearly 31 Mt (an annual growth rate of almost 16%). British steel output went from 0.72 Mt in 1875 to 7.8 Mt in 1913 (nearly an 11-fold increase), and the respective totals for Germany were about 0.5 and 17.8 Mt and for Russia 0.4 and 4.2 Mt, while Japan’s 1913 steel production remained below 50,000 t.
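  These growth rates can be checked with a simple compound-growth calculation. The short Python sketch below uses the output figures quoted above and assumes 1870 and 1865 as the respective base years (the endpoints are approximations), so its results only approximate the rates cited in the text.

    # Rough check of the compound annual growth rates quoted above.
    # Output figures are taken from the text; the base years (1870 for the
    # world, 1865 for the United States) are assumptions, so the results
    # are only approximations of the cited rates.

    def cagr(start_mt: float, end_mt: float, years: int) -> float:
        """Compound annual growth rate between two output levels (in Mt)."""
        return (end_mt / start_mt) ** (1 / years) - 1

    # Worldwide steel output: ~0.5 Mt (1870) to ~76 Mt (1913)
    print(f"World, 1870-1913: {cagr(0.5, 76.0, 1913 - 1870):.1%}")   # ~12%

    # US steel output: ~0.02 Mt (1865) to ~31 Mt (1913)
    print(f"US, 1865-1913:    {cagr(0.02, 31.0, 1913 - 1865):.1%}")  # ~16%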

  US steel production surpassed the British total in 1887, three years before it also became the world’s largest pig iron producer, and two years before it began to lead the world in iron ore extraction. By 1900 the United States was producing a third of the world’s steel, a share 50% higher than Germany’s and double the British share. America’s expanding steel mills—concentrated in Pennsylvania (Pittsburgh and its vicinity), Ohio (Cleveland), Indiana (Gary), and Illinois (Chicago)—were the country’s largest, well-capitalized, and highly competitive industrial enterprises. In 1869 the capitalization of American steel mills averaged less than $160,000; 30 years later it had nearly tripled (Temin, 1964). And while during the 1860s British mills could produce bars and rails about 50% cheaper than the US mills, by the beginning of the twentieth century US productivity was nearly 80% higher and the United States became a vigorous exporter of metal goods to the United Kingdom (Allen, 1979).

  Not surprisingly, this expansion was accompanied by many organizational changes, including the growth of well-established enterprises, the rise of new steelmaking firms, and mergers of major companies to form conglomerates of unprecedented scale. In the United States, Carnegie Steel, established in 1892 by Andrew Carnegie, a Scottish emigrant to the United States, had its beginnings in steel works in Braddock, Pennsylvania, and other neighboring mills (Bowman, 1989). By the late 1880s Carnegie’s enterprise had become the world’s largest producer of pig iron, coke, and steel, and its works constituted the largest component of US Steel, established in New York on April 1, 1901, through a merger of 10 of America’s largest steel companies, producing 67% of the country’s and 29% of the world’s steel (Apelt, 2001).

  The company’s first president was Charles M. Schwab, but he left in 1904 to become the first president of Bethlehem Steel Corporation, which began as the Saucona Iron Company in 1857 and was renamed Bethlehem Steel in 1899 (Hessen, 1975; Fig. 3.5). Metal made by America’s second largest steel company eventually went into such iconic structures as New York’s Chrysler Building, San Francisco’s Golden Gate Bridge, and the Grand Coulee and Hoover dams, as well as into WW II Liberty ships and postwar nuclear reactors, but the company ceased operating in the Lehigh Valley in 1995 and went bankrupt in 2001 in one of the most important signs of US deindustrialization (Warren, 2008).

  Figure 3.5 Bethlehem Steel works in Bethlehem, PA. Corbis.

  The availability of inexpensive steel led to the emergence of an increasing variety of alloys destined for specific markets. Their development started with Robert Mushet’s experiments, conducted after he was freed from his financial difficulties by Bessemer, who paid his debts and gave him a generous annual allowance for the rest of his life (Osborn, 1952). Mushet released the first specialty alloy, soon to be known as Robert Mushet’s Special Steel (RMS), in 1868. Parts and tools made from this alloy, produced with the addition of tungsten, did not need any quenching to be hardened, and yet they were better than the standard hardened and tempered carbon steel.

  An early, and enduring, application of this alloy was the production of ball bearings, first to supply the increased demand that resulted from the rapid spread of bicycles during the 1880s, later to meet the needs of the new automobile industry and of toolmaking. Small additions of manganese resulted in brittle alloys, but in 1882 Robert Abbott Hadfield (1858–1940), after adding about 13% Mn to steel, produced a hard, wear-resistant, nonmagnetic alloy suitable for tools and bearings. Tool steels (and permanent magnets) were made by alloying with 3–4% molybdenum, crankshaft and armor plating steel by adding 2–3% nickel, springs by alloying with 0.8–2% silicon, and deep-hardening steels by adding small amounts of niobium and vanadium (Bryson, 2005).

  Another important alloy, patented by the American metallurgist Albert Marsh (1877–1944) in 1905 and commercially available since 1909, contained no iron. It was produced by combining 80% nickel and 20% chromium, and it was turned into high-resistance wires for durable heating elements, first for the newly popular electric toasters (NAE, 2000). And in 1912 Harry Brearley (1871–1948), a Sheffield metallurgist, succeeded in producing stainless steel containing 12.7% chromium and 0.24% carbon (Cobb, 2010). At the same time US metallurgists developed nonhardenable ferritic stainless steels containing 14–16% Cr and up to 0.15% C. Brearley’s alloy initially met with only lukewarm acceptance, and it was only in 1914 that a Sheffield company used it to make the first small set of cheese knives. The metal is now ubiquitous in kitchenware, food handling, engine and machinery parts, and architecture. The steel has excellent corrosion resistance but is not actually stainless; it does corrode, and Ryan et al. (2002) explained how the process usually begins at sites in the metal where there are sulfide impurities.

  New Markets for Steel

  All of these technical advances made the four decades preceding WW I the first period in history when steel became affordable and readily available, when it could substitute for a multitude of more expensive or less durable wrought iron or wooden products, and when it could be used on a large scale in new applications ranging from farm implements to ocean-going ships and from rails to skyscrapers. As is often the case with material substitutions, the process took a while. Perhaps the most notable example, already noted in the preceding chapter, is the Eiffel Tower: one of the world’s most iconic structures, it was completed in 1889 (nearly three decades after the first commercialization of Bessemer steel) and built with wrought iron.

  Before I turn to steel’s use in the largest pre–WW I markets—in transportation (railways and ships) and construction (buildings, bridges)—I will survey several categories of its other important and expanding uses. Obviously, a key new market for steel was the iron and steel industry itself, engaged in unprecedented efforts to extract more iron ore, to reduce the oxides in new and larger blast furnaces, and to turn cast iron into steel and semi-finished products in new mills. Similarly, the new age of machines required a large-scale expansion of machines, tools, and replacement parts used for penetrative (drilling, boring, punching), shape-forming (turning, milling, planing), and finishing (grinding, polishing) operations (Smil, 2005). The performance of many of these tools was limited as long as they were made from plain carbon steel or Mushet’s self-hardening steel.

  A key advance in tool quality resulted from experiments conducted from 1898 onward by Frederick Winslow Taylor (1856–1915) and J. Maunsel White at Bethlehem Steel (Taylor, 1907). Their heat treatment of tools doubled or even tripled the previous metal-cutting speeds, and, starting in 1903, these new high-speed steels allowed Charles Norton (1851–1942) and James Heald (1846–1931) to design superior new grinding machines. They were used for the first time on a large scale by Henry Ford (1863–1947) to produce many of the metal components for his pioneering Model T (launched in 1908), and their subsequent improved versions were indispensable for the expanding post–WW I car and airplane industries.

  Steel in agriculture

  One of the earliest impacts of cheap steel, and one with momentous economic and nation-building consequences, was the metal’s use in agriculture, especially the mass production of moldboard plows made with soft-center steel (Fig. 3.6). The moldboard plow is an ancient Chinese invention that became common in Europe only during the early Middle Ages. The process for commercial moldboard production was patented by John Lane Jr. (1824–1897) in 1868, and it rapidly displaced the previous method of fashioning metal plows from cast iron or from old saws, a method introduced by Lane’s father (John Sr., 1792–1857) in 1833 and by John Deere (1804–1886), an American blacksmith and the founder of his eponymous company, in 1837. Plows made of three layers of steel (hard outer layers, soft center) were better at absorbing shocks when cutting through heavy and stony soils.

 
