Creating the Twentieth Century


by Vaclav Smil


  The idea of open-hearth melting itself was not new; it was the heat economy introduced into the process by Siemens’s regenerative furnace that made the commercial difference. The furnace was very simple: a rectangular brick-lined chamber with a wide, saucer-shaped and shallow hearth whose one end was used to charge pig iron as well as a small quantity of iron ore or steel scrap, and to remove the finished steel (Riedel 1994). Unlike in ordinary furnaces, where much of the heat generated by fuel combustion escaped with hot gases through a chimney, Siemens’s furnace led the hot gases first through a regenerator, a chamber stacked with a honeycomb mass of bricks (chequers) that absorbed a large share of the outgoing heat.

  FIGURE 4.4. William Siemens (reproduced from Scientific American, December 22, 1883) and a section through his open-hearth steelmaking furnace. Gas and air are forced through chambers C and E, ascend separately through G, ignite in D, and melt the metal in hearth (H); hot combustion gases are led through F to preheat chambers E’ and C’. Once these are hot, they begin receiving gas and air flows and the entire operation is reversed as combustion gases leave through G to preheat C and E. Reproduced from Byrn (1900).

  As soon as the bricks were sufficiently heated, the hot gases were diverted into another regenerating chamber, while the air required for combustion was preheated by passing through the first, heated, chamber. After its temperature declined to a predetermined level, the air flow was reversed, and this alternating operation guaranteed the maximum recovery of waste heat. Moreover, the gaseous fuel used to heat the open-hearth furnace (usually produced by incomplete combustion of coal) was also led into a regenerative furnace that then required four brick-stacked chambers (figure 4.4). This energy-conserving innovation (fuel savings amounted to as much as 70%) was first used in 1861 at glassworks in Birmingham.

  In 1867, after two years of experiments, Siemens was satisfied that the high temperatures (between 1,600°C and 1,700°C) generated by this process would easily remove any impurities from mixtures of wrought iron scrap and cast iron charged into the furnaces. As Bell (1884:426) put it, “the application of this invention to such a purpose … is so obvious, that its aid was speedily brought into requisition in what is now generally known as the Siemens-Martin or open-hearth process.” The double name is due to the fact that although the Siemens brothers did their first tests in 1857, and then patented the process in 1861, it took some time to perfect the technique, and a French metallurgist, Emile Martin (1824-1915), succeeded in doing so first and filed definitive patents in the summer of 1865. Concurrent trials by Siemens were also promising, and by November 1866 Siemens and Martin agreed to share the rights: the Siemens-Martin furnace was born, and the new process was first commercialized in 1869 by three British steelworks.

  Unlike in the Bessemer converter, where the blowing process was over in less than half an hour, open-hearth furnaces commonly needed half a day to finish the purification of the metal. The furnace was tapped by running a crowbar through the clay stopper and pouring the hot metal into a giant ladle, which was then lifted by a crane and its contents poured into molds to form the desired ingots. Experiments with open-hearth furnaces lined with basic refractories began during the early 1880s, and between 1886 and 1890 the process spread to U.S. steelmaking (Almond 1981). Moreover, in 1890 Benjamin Talbot (1864-1947) devised a tilting furnace that made it possible to tap slag and steel alternately and hence to turn basic open-hearth steelmaking from a batch into a virtually continuous process.

  As in so many other cases of technical advances, the initial design was kept largely intact, but the typical size and average productivities grew. During the late 1890s, the largest plant of the U.S. Steel Corporation had open hearths with areas of about 30 m²; by 1914 the size was up to 55 m², and during WWII it reached almost 85 m² (King 1948). Heat sizes increased from just more than 40 t during the late 1890s to 200 t after 1940. At the beginning of the 20th century, an observer of an open-hearth operation at the Carnegie Steel Co.’s Homestead plant in Pennsylvania (Bridge 1903:149) noted that this way of steelmaking had

  none of the picturesque aspects of the Bessemer converter. The most interesting thing about it to a layman is to see, through colored glasses, how the steel boils and bubbles as if it were so much milk…but the gentle boiling of steel for hours without any fireworks or poetry, in a huge shed empty of workmen as church on weekdays, is not a very interesting sight.

  But it was this placid, slow-working, improved, tiltable, basic Siemens-Martin furnace that came to dominate American steelmaking during the first two-thirds of the 20th century. The steel that built most of New York’s skyline, that went to reinforce the country’s largest hydroelectric plants that were built between the 1930s and 1960s, steel that protected aircraft carriers at Midway in 1942 and made the armor of Patton’s Third Army tanks as they raced toward Paris in 1944, steel that is embedded in long stretches of the U.S. interstate highways and large airport runways—virtually all of that ferrous alloy came from open hearths. In the United States, its share rose from just 9% of all steel production in 1880 to 73% by 1914, and it peaked at about 90% in 1940; similarly, in the United Kingdom its share peaked just above 80% in 1960 (figure 4.5). These peaks were followed by rapid declines of the technique’s importance as the basic oxygen furnace became the dominant means of modern steelmaking during the last third of the 20th century, with electric arc furnaces not far behind.

  FIGURE 4.5. Shares of leading methods of steelmaking in the United Kingdom (1880-1980) and the United States (1860-2000). Plotted from data in Almond (1981) for the United Kingdom and in Campbell (1907) and USBC (1975) for the United States.

  The last smelting innovation that was introduced before 1914 was the electric arc furnace. William Siemens built the first experimental furnaces with electrodes placed at the top and the bottom of a crucible or opposed horizontally (Anonymous 1913c). Paul Héroult commercialized the process in 1902 in order to produce high-quality steel from scrap metal. These furnaces operate on the same principles as the arc light: the current passing between large carbon electrodes in the roof melts the scrap, and the metal is produced in batches. Other electric arc furnace designs were introduced shortly afterward, and by 1910 there were more than 100 units of different designs operating worldwide, with Germany far ahead of the United States (Anonymous 1913c). The subsequent combination of decreased costs of electricity and increased demand for steel during WWI transformed electric arc furnaces into major producers of high-quality metal, with nearly 1,000 of them at work by 1920 (Boylston 1936). Their rise to dominance in modern steelmaking will be described in the companion volume.

  Steel in Modern Society

  All of these metallurgical innovations meant that the last quarter of the 19th century was the first time in history when steel could be produced not only in unprecedented quantities, but also to satisfy a number of specific demands and to be available in the large batches needed to make very large parts. Global steel output rose from just half a million tons in 1870 to 28 Mt by 1900 and to about 70 Mt by 1913. Between 1867 and 1913, U.S. pig iron smelting rose 21-fold while steel production increased from about 20,000 t to nearly 31 Mt, a more than 1,500-fold rise (figure 4.6). Consequently, steel’s share in the final output of the metal rose precipitously. In 1868 about 1.3% of the U.S. pig iron was converted to steel; by 1880 the share was slightly less than a third, and it reached about 40% by 1890, almost 75% by 1900, and 100% by 1914 (figure 4.6; Hogan 1971; Kelly and Fenton 2003).
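  The growth multiples quoted above are easy to verify with simple arithmetic. The sketch below is a back-of-the-envelope check (my own, not from the cited sources): it converts the quoted endpoint figures into fold increases and the implied compound annual growth rates.

```python
def fold_increase(start, end):
    """How many times larger the end value is than the start value."""
    return end / start

def implied_annual_growth(start, end, years):
    """Compound annual growth rate consistent with the two endpoints."""
    return (end / start) ** (1.0 / years) - 1.0

# U.S. steel output: about 20,000 t in 1867 to nearly 31 Mt in 1913
us_fold = fold_increase(20_000, 31_000_000)                      # 1,550-fold
us_rate = implied_annual_growth(20_000, 31_000_000, 1913 - 1867)

# Global steel output: 0.5 Mt in 1870 to about 70 Mt in 1913
world_rate = implied_annual_growth(0.5, 70, 1913 - 1870)

print(f"U.S. rise: {us_fold:,.0f}-fold ({us_rate:.1%} per year)")
print(f"Global rise: {world_rate:.1%} per year")
```

  The U.S. endpoints yield a 1,550-fold rise, consistent with the text’s “more than 1,500-fold,” and imply sustained growth above 17% a year for nearly half a century; the global series implies about 12% a year.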

  As the charging of scrap metal increased with the use of open-hearth and, later, electric and basic oxygen furnaces, steel production began surpassing the pig iron output: by 1950 the share was 150%, and by the year 2000 U.S. steel production was 2.1 times greater than the country’s pig iron smelting (Kelly and Fenton 2003). Inexpensive steel began to sustain industrial societies in countless ways. Initially, nearly all steel produced by the early open-hearth furnaces was rolled into rails. But soon the metal’s final uses became much more diversified as it filled many niches that were previously occupied by cast and wrought iron and, much more important, as it found entirely new markets with the rise of many new industries that were created during the two pre-WWI generations.

  Substitutions that required increasing amounts of steel included energy conversions (boilers, steam engines) and land transportation (locomotives, rolling stock); after 1877, when Lloyd’s Register of Shipping accepted steel as an insurable material, the metal also rapidly conquered the shipping market. Steel substitutions also took place in the production of agricultural implements and machinery, in the textile and food industries, in industrial machines and tools, and in the building of bridges. Undoubtedly the most remarkable, elegant, and daring use of steel in bridge building was the spanning of Scotland’s Firth of Forth to carry the two tracks of the North British Railway (see frontispiece to this chapter).

  FIGURE 4.6. U.S. steel production and steel/pig iron shares, 1867-1913. Plotted and calculated from data in Temin (1964) and Kelly and Fenton (2003).

  This pioneering design by John Fowler and Benjamin Baker was built between 1883 and 1890, and it required 55,000 t of steel for its 104-m-tall towers and massive cantilevered arms: each of its two main spans extends 521 m, and the bridge’s total length is 2,483 m (Hammond 1964). The structure—both reviled (William Morris called it “the supremest specimen of all ugliness”) and admired for its size and form—has been in continuous use since its opening. Although a clear engineering success, this expensive cantilever was not the model for other similarly sized or larger bridges. Today’s longest steel bridges use the metal much more sparingly by hanging the transport surfaces on steel cables: the central span of Japan’s Akashi Kaikyo, which links Honshu and Shikoku and was the world’s longest suspension bridge by 2000, extends 1,991 m.

  New markets that were created by the activities that came into existence during the last third of the 19th century, and that had enormous demands for steel, included most prominently the electrical industry with its heavy generating machinery, and the oil and gas industry dependent on drilling pipes, well casings, pipelines, and the complex equipment of refineries. An innovation that was particularly important for the future of the oil and gas industry was the introduction of seamless steel pipes. The pierce rolling process for their production was invented in 1885 by Reinhard and Max Mannesmann at their father’s file factory in Remscheid (Mannesmannröhren-Werke AG 2003). Several years later they added pilger rolling, which reduces the diameter and wall thickness while increasing the tube length. The combination of these two techniques, known as the Mannesmann process, produces all modern seamless pipes.

  The first carriagelike cars had wooden bodies, but a large-scale switch to sheet steel was made with the onset of mass automobile production that took place before 1910. Steel consumed in U.S. automaking went from a few thousand tons in 1900 to more than 70,000 t in 1910, and then to 1 Mt by 1920 (Hogan 1971). Expanding production of motorized agricultural machinery (tractors, combines) created another new market for that versatile alloy. Beginning in the 1880s, significant shares of steel were also used to make weapons and heavy armaments for use on both land and sea, including heavily armored battleships.

  The pre-WWI period also saw the emergence of many new markets for specialty steels, whose introduction coincides almost perfectly with the two generations investigated in this book (Law 1914). Reenter Bessemer, this time in the role of a personal savior rather than an inventor. After Mushet lost his patent rights for perfecting the Bessemer process, his mounting debts and poor health led his 16-year-old daughter Mary to make a bold decision: in 1866 she traveled to London and confronted Bessemer in his home (Osborn 1952). We will never know what combination of guilt and charity led Bessemer to pay Mushet’s entire debt (£377 14s 10d) by a personal check and later to grant him an allowance of £300 a year for the rest of his life. Mushet could thus return to his metallurgical experiments, and in 1868 he produced the first special steel alloy by adding a small amount of tungsten during the melt.

  Mushet called the alloy self-hardening steel because the tools made from it did not require any quenching in order to harden. This precursor of modern tool steels soon became known commercially as RMS (Robert Mushet’s Special Steel), and it was made, without revealing the process through patenting, by a local company and later, in Sheffield, under the close supervision of Mushet and his sons. This alloy was superior to hardened and tempered carbon steel, the previous standard choice for metal-working tools. One of its most notable new applications was in the mass production of steel ball bearings, which were first put into the newly introduced safety bicycles and later into an expanding variety of machines; eventually (in 1907) the Svenska Kullager Fabriken (SKF) introduced the modern method of ball-bearing production.

  Manganese—an excellent deoxidizer of Bessemer steels when added in minor quantities (less than 1%)—made brittle alloys when used in large amounts. But in 1882, when Robert Abbott Hadfield (1858-1940) added more of the metal (about 13%) to steel than did any previous metallurgist, he obtained a hard but highly wear-resistant, and also nonmagnetic, alloy that found its most important use in toolmaking. During the following decades, metallurgists began alloying with several other metals. Between 3% and 4% molybdenum was added for tool steels and for permanent magnets; as little as 2-3% nickel sufficed to make the best steel for cranks and shafts and, following James Riley’s 1889 studies, also for the armor plating of naval ships. Between 0.8% and 2% silicon was used for the manufacture of springs, and niobium and vanadium were added in small amounts to make deep-hardening steels (Zapffe 1948; Smith 1967).

  The nickel-chromium alloy Nichrome, patented by Albert Marsh in 1905, provided the best solution for the heating element of electric toasters: dozens of designs of that small household appliance appeared on the U.S. market in 1909, just two months after the alloy became available (NAE 2000; see figure 2.25). Finally, in August 1913 Harry Brearley (1871-1948) produced the first batch of what is now an extended group of stainless steels by adding 12.7% chromium, an innovation that was almost immediately used by Sheffield cutlers. Some modern corrosion-resistant steels contain as much as 26% chromium and more than 6% nickel (Bolton 1989), and their uses range from household utensils to acid-resistant reaction vessels in chemical industries. The introduction of new alloys was accompanied by fundamental advances in understanding their microcrystalline structure and their behavior under extreme stress. The discovery of x-rays (in 1895) and the introduction of x-ray diffraction (in 1912) added valuable new techniques for their analysis.

  During the 20th century, steel’s annual worldwide production rose 30-fold to nearly 850 Mt (Kelly and Fenton 2003). Annual per capita consumption of finished steel products is a good surrogate measure of economic advancement. The global average in 2002 was just more than 130 kg, and the rate ranged from highs of between 250 and 600 kg for the affluent Western economies to about 160 kg in China, less than 40 kg in India, and just a few kilograms in the poorest countries of sub-Saharan Africa (IISI 2003). Exceptionally high steel consumption rates for South Korea and Taiwan (around 1 t per capita) are anomalies that do not reflect actual domestic consumption but rather the extensive use of the metal in building large cargo ships for export.
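  These per capita and aggregate figures can be cross-checked against each other. The sketch below is a rough consistency check; the world population value is my assumption (roughly 6.2 billion people in 2002), not a number given in the text.

```python
# Assumed, not from the text: world population in 2002
world_population_2002 = 6.2e9
per_capita_finished_steel_kg = 130   # global average quoted above

# 1 Mt = 1e9 kg, so dividing total kilograms by 1e9 yields megatonnes
implied_consumption_mt = (
    world_population_2002 * per_capita_finished_steel_kg / 1e9
)
print(f"Implied global consumption: about {implied_consumption_mt:.0f} Mt")
```

  The product comes to roughly 800 Mt of finished steel products, sitting plausibly below the nearly 850 Mt of crude steel output cited at the start of the paragraph, since finished products weigh less than the crude steel from which they are rolled and forged.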

  I would be remiss if I were to leave this segment, devoted to the most useful of all alloys, without describing in some detail its most important use in a hidden and, literally, supporting role. Although normally painted, the steel that forms sleek car bodies, massive offshore drilling rigs, oversize earth-moving machines, or elegant kitchen appliances is constantly visible, and the naked steel of myriad household utensils and industrial tools is touched every day by billions of hands. But one of the world’s largest uses of steel, and one whose origins also date to the eventful pre-WWI era, is normally hidden from us and is visible only in unfinished products: steel in buildings, where it either forms the structure’s skeleton alone, bearing its enormous load, or reinforces concrete in monolithic assemblies.
  Steel in Construction

  Buildings of unprecedented height were made possible by the combination of two different advances: the production of high-tensile structural steel, and the use of steel reinforcement in concrete. Structural steel in the form of I beams, which had to be riveted together from a number of smaller pieces, began to form the skeletons of the world’s first skyscrapers (less than 20 stories tall) during the 1880s. Its advantages are obvious. When tall buildings are built with solid masonry, their foundations must be substantial, and the load-bearing walls in the lower stories must be very thick. Thinner walls were made possible by combining cast iron (good in compression) with masonry, and later cage construction used iron frames to support the floors. The 10-story (42 m tall) Home Insurance Building in Chicago, designed by William Le Baron Jenney (1832-1907) and finished in 1885 (the Field Building now occupies its site), was the first structure with a load-carrying frame of steel columns and beams. The total mass of this structural steel was only a third of the mass of a masonry building, and the design resulted in increased floor space and larger windows (figure 4.7).

  As electric elevators became available (America’s first was used in Baltimore in 1887, and the first Otis installation was made in New York in 1889), and as central heating, electric plumbing pumps, and telephones made it more convenient to build taller structures, skyscrapers soon followed, both in Chicago and in New York. The early projects included such memorable, and no longer standing, Manhattan structures as the World Building (20 stories, 94 m, in 1890) and the Singer Building (47 stories, 187 m, in 1908). Henry Grey’s (1849-1913) invention (in 1897) of the universal beam mill made it possible to roll sturdy H beams. In the United States they were made for the first time at Bethlehem Steel’s new mill at Saucon, Pennsylvania, in 1907, and their availability made the second generation of skyscrapers possible.

 
