
Still the Iron Age


by Vaclav Smil


  Results of these pioneering studies were summarized by Boustead and Hancock (1979), and the assessments generated during the subsequent flourishing of energy analysis during the 1980s can be seen in volumes by Brown, Hamel, and Hedman (1996), Jensen et al. (1998), and Smil, Nachman, and Long (1983). By the mid-1980s, oil prices had declined, and they then remained relatively low and stable for two decades; this meant that, contrary to early expectations, energy analysis (although it continued to be practiced by some students of energy systems) became neither an essential tool of energy studies nor a major adjunct of economic appraisals. As one of its early pioneers, I have always found it useful, revealing, and highly instructive, but I have also always been aware of its limitations.

  Two basic approaches have been used to assess the energy costs of products or entire industrial sectors: quantifications based on input–output tables and process analyses. The first option is obviously a variant of commonly used econometric analysis, relying on a sectoral matrix of economic activity in order to extract the values of energy inputs and then to convert them into energy equivalents by using representative energy prices. Such a sectoral analysis embraces heterogeneous categories rather than specific products, but it is clearly more suitable for the relatively homogeneous iron and steel industry than for consumer electronics, with its huge array of diverse products.

  In order to find the energy required to make a specific product (often called embodied energy), it is necessary to perform a process analysis that identifies the sequence of operations required to produce a particular item, traces all important material and direct energy inputs, and finds the values of indirect energy flows attributable to raw materials or finished products entering the process sequence. Process analyses are valuable heuristic and managerial tools, and the gained insights may be used not only to reduce energy requirements but also to rationalize material flows. The choice of system boundaries determines the outcome of process analyses.

  In many cases, limiting them to direct energy inputs used in the final stage of a specific industrial process may yield satisfactory results. To use a relevant primary ironmaking example, we do not need to account for the energy cost of a BF itself (mostly the energy used in smelting the needed steel and producing the refractory materials) in order to account for the energy cost of the pig iron it produces. That furnace, with two relinings, could be reducing iron ore for more than half a century, and prorating the energy cost of its construction over the more than 100 Mt of pig iron it will produce during its decades of operation would result in negligibly small additional values that would also be much smaller than the errors associated with even the best accounting for large direct energy inputs. But in other instances, truncation errors arising from the imposition of arbitrary analytical boundaries may be relatively large.

  In the case of ironmaking, nontrivial higher-order inputs that might be omitted from simple process analyses include the energy costs of mining coal, iron ore, and limestone and the preparation and transportation costs of raw materials. When Lenzen and Dey (2000) looked at the energy used by the Australian steel industry, they discovered that lower-order needs were just 19 GJ/t but that the total requirement was 40 GJ/t, which means that correcting for the truncation error (the omission of higher-order energy contributions) doubled the overall specific rate. Similarly, Lenzen and Treloar’s (2002) input–output analysis of energy embodied in a four-story Swedish apartment building ended up with a rate twice as large as that established by process analysis by Börjesson and Gustavsson (2000), and the greatest discrepancies concerned structural steel (nearly 17 GJ/t vs. about 6 GJ/t) and plywood (roughly 9 GJ/t vs. 3 GJ/t).
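The scale of the truncation error reported by Lenzen and Dey can be verified with trivial arithmetic; this sketch simply restates their two published rates:

```python
# Truncation error in the Australian steel study (Lenzen & Dey, 2000):
# lower-order (process-level) inputs vs. the full input-output total.
lower_order = 19.0   # GJ/t, lower-order energy needs
total = 40.0         # GJ/t, total input-output requirement

higher_order = total - lower_order   # energy omitted by truncation
ratio = total / lower_order          # how much the full total exceeds it

print(f"omitted higher-order inputs: {higher_order:.0f} GJ/t")
print(f"total / lower-order ratio: {ratio:.2f}")  # ~2.1, i.e., roughly a doubling
```

The omitted higher-order flows (21 GJ/t) are thus slightly larger than the directly traced inputs themselves.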

  Recent EU rates show how much difference the treatment of a single second-order input can make: the sequence of BF, BOF and bloom, slab, and billet mill processing is about 55% more energy costly (20.7 GJ/t vs. 13.3 GJ/t) when coke plant energy is included, and the rate rises to more than 25 GJ/t when the same energy flows are expressed in primary terms, that is, when accounting for fuel energy lost in the generation of fossil-fueled electricity. And my final example of uncertainties inherent in energy analysis concerns different qualities of final products. As I will illustrate, standard energy analyses of modern crude steel show rates around 20 GJ/t, but Johnson et al. (2008) put the total energy cost of austenitic stainless steel (a variety that has been in increasing demand) at the beginning of the twenty-first century at 53 GJ/t for the standard process (including a small amount of stainless scrap), and at 79 GJ/t for production solely from virgin materials, with nearly half of that total going for the extraction and preparation of FeCr, FeNi, and Ni (the steel has 18% Cr and 8% Ni).
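The effect of including or excluding the coke plant in the EU boundary example is easily checked; the two rates below are those quoted from the BF–BOF–rolling sequence:

```python
# Effect of a single boundary choice (EU BF-BOF-rolling sequence):
# including vs. excluding coke plant energy.
with_coke_plant = 20.7     # GJ/t of crude steel, coke plant included
without_coke_plant = 13.3  # GJ/t of crude steel, coke plant excluded

increase = (with_coke_plant / without_coke_plant - 1) * 100
print(f"including coke plant energy raises the rate by {increase:.0f}%")  # ~56%
```

The exact ratio is 55.6%, consistent with the "about 55%" in the text.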

  These problems of boundary choice and quality disparities are an inherent complication in the preparation of process energy accounts, and they are a source of common uncertainties when comparing the increasingly common (but still relatively rare) studies of energy costs of leading materials. Consequently, there can be no single correct value, but as long as the compared studies use the same, or similar, analytical boundaries and conversions, they offer valuable insights into secular efficiency gains. That is why I will not offer detailed surveys of key studies and their (often misleadingly) precise calculations of energy costs but simply present rounded rates and ranges in order to trace long-term historical trends in the use of fuels and electricity in the production of iron and steel, at both the national and process levels.

  A comprehensive energy analysis requires tracing at least the direct energy inputs, including all fuels and electricity, and preferably both direct and indirect energy requirements, particularly for those processes whose material inputs require considerable energy investment and where electricity is a large or dominant form of purchased (or in-plant generated) energy. While comprehensive accounting is necessary to produce realistic estimates of total energy costs, close attention must be paid to dominant inputs, where accounting errors may easily be larger than the totals supplied by minor forms of energy used in a specific process: in ironmaking this means, obviously, coming up with accurate assessments of the energy costs of coke production and of other fuels used in BFs.

  In mass terms, these fuels (dominated by coal-derived coke and also including coal dust, natural gas, and fuel oil) are the second largest input in the production of pig iron: as already noted, typical requirements for producing a tonne of the metal in a BF are 1400 kg of iron ore, 800 kg of coal (indirectly for coking, directly for injection), 300 kg of limestone, and 120 kg of recycled metal (WSA, 2012b). Hydrocarbons have a distinctly secondary position, but direct reduction of iron using inexpensive natural gas should be gaining in importance. Electricity (be it fossil-fuel generated, nuclear, or hydro) is a comparatively minor energy input in iron ore reduction in BFs, but it is indispensable for energizing EAF-based steelmaking and for operating continuous casting and rolling processes. And given the volumes of hot gases and water generated by ironmaking and steelmaking, it is also important to account for energy values of waste streams available for heat recovery.
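The typical BF charge quoted from WSA (2012b) can be tallied directly; this is just the cited mass balance restated per tonne of hot metal:

```python
# Typical BF charge per tonne of pig iron (WSA, 2012b), in kg
charge = {
    "iron ore": 1400,
    "coal (indirectly for coking, directly for injection)": 800,
    "limestone": 300,
    "recycled metal": 120,
}

total_input = sum(charge.values())
print(f"total charge: {total_input} kg per tonne of hot metal")  # 2620 kg
```

Some 2.6 t of materials are thus charged for every tonne of pig iron tapped, with the balance leaving as slag, top gas, and moisture.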

  In aggregate monetary terms, energy use in steelmaking ranges between 20% and 40% of the final cost of steel production; for example, using long-term prices, Nucor puts the cost of energy for operating a BF at 22% of pig iron costs (Nucor, 2014), while a Japanese integrated steelmaker (with its own coking and sintering plants using imported coal and iron ore) spends 35% of its total (and about 38% of its variable) cost on energy. Obviously, these relatively high energy costs would have been a rewarding target for reduction even if the industry had not been affected by rising prices of coal, crude oil, natural gas, and electricity; the post-1973 increases (as well as unpredictable fluctuations) in energy costs only strengthened the quest for lower energy intensity of iron and steel production, resulting in some impressive fuel and electricity savings.

  In surveying these gains, one should always specify the national origins (there are appreciable differences among leading steel-producing countries), make it clear which energy rate is calculated, quoted, or estimated and to what year it applies, and note whether the cited rates are national averages, typical performances in the industry, or the best performances of the most modern operations, and whether they refer to the entire steelmaking process or only to its specific parts; unfortunately, all too often these details are explained only partially, or left entirely to assumption, leaving a reader with rates that may not be comparable.

  The most common difference is between the accounts that use only direct energy and those expressing the costs in terms of primary energy (including energy losses in generating electricity and converting fuels). This will make the greatest difference in the case of processes heavily dependent on electricity: in Europe, recent direct energy use by an EAF is 2.5 GJ/t of steel, primary energy of that input is about 6.2 GJ/t, and the two rates for energy used by a hot strip mill are, respectively, 1.7 and 2.4 GJ/t (Pardo, Moya, & Vatopoulos, 2012). In the case of energy use by BFs, the most common accounting difference arises from imposing analytical boundaries: some analyses include the energy cost of cokemaking, but most of them omit it.

  Energy Cost of Steelmaking

  Because the iron and steel industry has always been a rather energy-intensive enterprise, with a continuing interest in managing and reducing energy inputs, we have fairly accurate accounts, including detailed retrospective appraisals, that allow us to trace the sector’s energy consumption trends for the entire twentieth century and, in particularly rich detail, for the past few decades (Dartnell, 1978; De Beer, Worrell, & Blok, 1998; Hasanbeigi et al., 2014; Heal, 1975; Leckie, Millar, & Medley, 1982; Smithson & Sheridan, 1975; Worrell et al., 2010). I will start with the energy costs of pig iron smelting in BFs and then proceed to electricity expenditures for BOFs, EAFs, and rolling before summing up the process totals. But before reviewing these rates, I will first introduce the minimum energy requirements of common steelmaking processes, summarized by Fruehan et al. (2000), and compare them with the best existing practices. Contrasting these two rates makes it possible to appreciate how closely the minima have been approached through continuing technical advances aimed at maximizing the energy efficiency of key steelmaking processes.

  Inherently high energy requirements for reducing iron oxides and producing liquid iron in BFs dominate the overall energy needs of integrated steelmaking. In the US steel industry, with its high share of secondary steelmaking, about 40% of all energy goes into ironmaking (including sintering and cokemaking), nearly 20% into BOF and EAF steelmaking, and the remainder into casting, rolling, reheating, and other operations (AISI, 2014). In India, where primary metal smelting dominates, about 70% of the sector’s energy goes for ironmaking (BF 45%, coking 15%, and sintering 9%), 9% for steelmaking, 12% for rolling, and 10% for other tasks (Samajdar, 2012).

  Iron ore (Fe2O3) reduction requires at least 8.6 GJ/t, and the absolute minimum for producing pig iron in a BF (5% C, tap temperature 1450 °C) is 9.8 GJ/t of hot metal; a more realistic case must include the energy needed for the formation of slag and for a partial reduction of SiO2 and MnO (hot metal containing 0.5% Si and 0.5% Mn), as well as the effect of ash in metallurgical coke: the slag effect increases the minimum requirement to 10.27 GJ/t, and the slag and coke ash effects together result in a slightly higher rate of 10.42 GJ/t. In contrast, Worrell et al. (2008) put the best commercial performance for BF operation at 12.2 GJ/t (12.4 GJ/t in primary energy terms), and Worrell et al. (2010) offer the range of 11.5–12.1 GJ/t.
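The increments behind these successive minima can be separated with simple subtraction; the three rates are those of Fruehan et al. (2000):

```python
# Build-up of the theoretical minimum for BF hot metal (Fruehan et al., 2000), GJ/t
base = 9.8                 # pig iron alone (5% C, tapped at 1450 deg C)
with_slag = 10.27          # + slag formation and partial SiO2/MnO reduction
with_slag_and_ash = 10.42  # + effect of ash in metallurgical coke

print(f"slag effect:     +{with_slag - base:.2f} GJ/t")
print(f"coke ash effect: +{with_slag_and_ash - with_slag:.2f} GJ/t")
```

Slag formation thus adds about 0.47 GJ/t and coke ash only a further 0.15 GJ/t to the absolute minimum.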

  As for the preparation of inputs, the absolute theoretical minimum for ore agglomeration is 1.2 GJ/t of output, that is, 1.6 GJ/t of steel, while Fruehan et al. (2000) put actual demand at 1.5–1.7 GJ/t of output and 2.1–2.4 GJ/t of steel. Worrell et al. (2008) estimated the best actual rate at 1.9 GJ/t (2.2 GJ/t in terms of primary energy), Worrell et al. (2010) quoted the range of 1.62–1.85 GJ/t, and according to Outotec (2015b), the world leader in iron ore beneficiation, the process needs 350 MJ of heat per tonne of pellets for magnetite ores and 1.5 GJ/t for limonites, plus, depending on the ore and plant capacity, an additional 25–35 kWh per tonne for mixing, balling, and induration, for totals between 0.6 and 1.9 GJ/t of pellets.
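The Outotec component figures can be reconciled with the quoted 0.6–1.9 GJ/t totals if the electricity for mixing, balling, and induration is counted in primary terms; the ~35% generation efficiency used below is my assumption for illustration, not a figure from the source:

```python
# Pelletizing energy per tonne of pellets (component figures from Outotec, 2015b),
# with electricity converted to primary energy at an ASSUMED 35% generation efficiency.
KWH_TO_MJ = 3.6
EFF = 0.35  # assumed fossil-fuel generation efficiency (not stated in the source)

def total_mj(heat_mj, kwh):
    """Heat input plus electricity expressed in primary-energy megajoules."""
    return heat_mj + kwh * KWH_TO_MJ / EFF

magnetite = total_mj(350, 25)   # low end: magnetite ore, 25 kWh/t
limonite = total_mj(1500, 35)   # high end: limonite ore, 35 kWh/t
print(f"{magnetite/1000:.1f}-{limonite/1000:.1f} GJ/t of pellets")  # ~0.6-1.9 GJ/t
```

Under that assumption the components reproduce the quoted range almost exactly (0.6 and 1.9 GJ/t).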

  Coke output in modern plants amounts to about 0.77 t per tonne of coal input; the remainder consists of captured volatiles used either as fuels or as chemical feedstocks. Captured coke oven gas has a relatively high energy density, containing 11.8–14.5 GJ/t of coke (or 4–5 GJ/t of produced steel). After taking this valuable energy output into account, the minimum net energy required for cokemaking is about 2 GJ/t, or 0.8 GJ/t of steel (Fruehan et al., 2000), while actual recent performances range between 5.4 and 6.2 GJ/t of coke, that is, 2.2–4.6 GJ/t of steel.
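The per-tonne-of-steel equivalence of the coke gas energy is consistent with a coke rate of roughly 0.35 t of coke per tonne of steel; that rate is my inference for illustration, not a figure given in the source:

```python
# Coke oven gas energy per tonne of steel, ASSUMING a coke rate of about
# 0.35 t of coke per tonne of steel (inferred, not stated in the source).
coke_gas_low, coke_gas_high = 11.8, 14.5  # GJ per tonne of coke
coke_rate = 0.35                          # t coke per t steel (assumed)

low = coke_gas_low * coke_rate
high = coke_gas_high * coke_rate
print(f"{low:.1f}-{high:.1f} GJ/t of steel")  # ~4.1-5.1, matching the quoted 4-5 GJ/t
```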

  Reconstructions of overall past energy requirements show that at the beginning of the twentieth century the direct energy needed for BF smelting (all but a tiny share of it as metallurgical coke, but excluding the energy cost of coking) was between 55 and 60 GJ/t of pig iron, and by 1950 that range had been reduced to 35–45 GJ/t. By the early 1970s, common Western performance of BF ironmaking was about 30 GJ/t, and the best rates were no better than 25 GJ/t, but then the OPEC-engineered oil price rise of 1973–1974 and its second round in 1979–1980 led to accelerated progress in energy savings. By the end of the twentieth century, the net specific energy requirement of state-of-the-art BFs was no more than 15 GJ/t and as little as 13 GJ/t. That was as much as 50% less than in 1975 and, even more remarkably, as little as 25% above the minimum energy inputs needed to produce pig iron by coke-fueled smelting of iron ore, while common performances were still 40–45% above the energetic minimum.
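These relative gaps can be checked against the 10.42 GJ/t theoretical minimum (slag and coke ash included) cited earlier:

```python
# Late-twentieth-century BF rates relative to the theoretical minimum
# of 10.42 GJ/t (slag and coke ash effects included).
minimum = 10.42
best_low = 13.0   # GJ/t, the best state-of-the-art BFs

print(f"best practice: {(best_low / minimum - 1) * 100:.0f}% above the minimum")  # ~25%

# Common performances at 40-45% above the minimum:
common_low, common_high = minimum * 1.40, minimum * 1.45
print(f"common performance: {common_low:.1f}-{common_high:.1f} GJ/t")  # ~14.6-15.1 GJ/t
```

The 13 GJ/t best rate is thus almost exactly 25% above the energetic minimum, while common performance still fell in the 14.6–15.1 GJ/t range.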

  As already explained in the previous chapter, these impressive gains in the production of pig iron were due to the combination of many technical fixes, and the principal savings attributable to specific improvements are as follows (IETD, 2015; USEPA, 2012). Dry quenching of coke may save more than 0.25 GJ/t, recovery of sintering heat saves 0.5 GJ/t, and the capture and combustion of top gases may reduce total energy use by up to 0.9 GJ/t of hot metal. Increased coal injection saves about 3.75 GJ/t of injected fuel; every tonne of injected coal displaces 0.85–0.95 t of coke, and the fuel savings are nearly 0.8 GJ/t of hot metal. Increased hot blast temperatures save up to 0.5 GJ/t, and heat recuperation from hot blast stoves cuts demand by up to 0.3 GJ/t. Higher BF top pressures reduce coke rates and allow more efficient electricity generation by recovery turbines, yielding as much as 60 kWh/t of hot metal. And improved controls of the hot stove process may save up to 0.04 GJ/t.
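The quoted per-tonne-of-hot-metal saving from coal injection implies an injection rate close to 200 kg of coal per tonne of hot metal; that rate is inferred from the two quoted figures, not stated in the sources:

```python
# Coal injection savings: 3.75 GJ saved per tonne of injected coal, with a
# quoted saving of nearly 0.8 GJ/t of hot metal. The injection rate below is
# INFERRED from those two figures, not given in the source.
saving_per_t_coal = 3.75                      # GJ per tonne of injected coal
saving_per_t_hot_metal = 0.8                  # GJ per tonne of hot metal

injection_rate = saving_per_t_hot_metal / saving_per_t_coal  # t coal / t hot metal
print(f"implied injection rate: {injection_rate * 1000:.0f} kg/t of hot metal")  # ~213 kg/t
```

That implied rate is well within the range of modern pulverized coal injection practice.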

  Steelmaking does not present such large opportunities for energy savings in absolute terms, but relative reductions of fuel and electricity requirements have been no less impressive than in ironmaking, with much of the reduced energy intensity due to the displacement of OHFs by BOFs in integrated enterprises and by EAFs in mini-mills. Steelmaking in a BOF, using hot pig iron and scrap, involves a highly exothermic oxidation of carbon, silicon, and other elements, and hence the process is a net source of energy even after taking into account the roughly 600 MJ of electricity needed to make the oxygen used in producing a tonne of hot metal. Compared to OHFs (which needed about 4 GJ/t), the overall saving is thus more than 3 GJ/t, and the final energy cost of BOF steel is essentially the cost of the charged hot pig iron. Depending on the amount of scrap melted per tonne of hot metal (typically between 30 and 40 kg) and on its specific composition (assuming 5% C and 0.5% Si, the presence of coke ash, and 20–30% FeO in the slag), the energy cost of crude BOF steel would be no less than 7.85 and up to 8.21 GJ/t (Fruehan et al., 2000).

  Theoretical minima to produce steel by melting scrap in EAFs vary only slightly with the composition of the charge metal and the share of FeO in slag, between 1.29 and 1.32 GJ/t, but because large volumes of air (up to 100 m3/t) can enter the furnace (mainly through its door), the heating of entrained N2 raises the total demand to 1.58 GJ/t (Fruehan et al., 2000). In contrast, recently cited averages for large-scale production (capacities of 100 t/heat, tap-to-tap times of 40 min) have ranged from about 375 to 565 kWh/t (Ghenda, 2014), which translates to between 3.8 and 5.8 GJ/t in terms of primary energy. Worrell et al. (2010) use the US mean of 4.5 GJ/t, and that is also the approximate average cited by Emi (2015), compared to the less energy-intensive melting of scrap in a BOF, which needs only about 3.9 GJ/t.
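The conversion from the cited electricity demand to primary energy can be sketched as follows; the ~35% generation efficiency is my assumption for illustration, chosen to show how the quoted primary-energy range arises:

```python
# Converting EAF electricity demand (Ghenda, 2014) to primary energy,
# ASSUMING a 35% fossil-fuel generation efficiency (my assumption).
KWH_TO_GJ = 0.0036
EFF = 0.35  # assumed generation efficiency

for kwh in (375, 565):
    direct = kwh * KWH_TO_GJ       # direct (final) energy
    primary = direct / EFF         # primary energy equivalent
    print(f"{kwh} kWh/t = {direct:.2f} GJ/t direct, ~{primary:.1f} GJ/t primary")
```

This yields roughly 3.9 and 5.8 GJ/t, close to the 3.8–5.8 GJ/t range quoted in the text.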

  The electricity demand of large EAFs presents a challenge for the reliability of supply and the stability of grids, even with the most efficient designs. SIMETAL’s Ultimate EAF requires only 340 kWh/t of steel (its melting power is 125–130 MW), which means that in 1 day (with 48 heats of 120 t) it needs 1.95 GWh of electricity, or, using the average annual household electricity consumption of 10.9 MWh (USEIA, 2015), as much as a city with 65,000 households (i.e., with roughly 165,000 people). In areas with a number of these extraordinarily electricity-intensive devices, additional investment may be needed to prevent delivery problems and to assure the reliability of supply for other consumers. As already noted, the two effective steps toward reducing EAF energy requirements are the charging of hot pig iron and the preheating of scrap.
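The household equivalence for the Ultimate EAF follows directly from the cited figures:

```python
# Daily electricity demand of SIMETAL's Ultimate EAF and its household equivalent
kwh_per_t = 340                # specific electricity demand, kWh/t of steel
heats_per_day = 48
t_per_heat = 120
household_mwh_per_year = 10.9  # average annual US household consumption (USEIA, 2015)

daily_gwh = kwh_per_t * heats_per_day * t_per_heat / 1e6
households = daily_gwh * 365 * 1000 / household_mwh_per_year
print(f"daily demand: {daily_gwh:.2f} GWh")        # ~1.96 GWh
print(f"household equivalent: {households:,.0f}")  # roughly 65,000, as quoted
```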

  The world’s best practices in casting and rolling are as follows: continuous casting and hot rolling, 1.9 (2.5) GJ/t, and cold rolling and finishing, 1.5 (2.3) GJ/t, with primary energy equivalents in parentheses (Worrell et al., 2008). Replacing the traditional rolling of semifinished products from ingots (requiring 1.2 GJ/t) by continuous casting (whose intensity is just 300 MJ/t) saves almost 1 GJ/t. There is, obviously, a substantial difference between the energy requirements of cold rolling and of hot rolling, which needs reheating of the cast metal. For flat carbon steel slabs, the difference is 50-fold (17 MJ/t vs. 850 MJ/t); for stainless steel slabs it is about 17-fold, about 50 versus nearly 900 MJ/t (Fruehan et al., 2000). The world’s best practices now require (in primary energy terms) 2.2 GJ/t for hot-rolling strip steel, 2.4 GJ/t for hot-rolling bars, and 2.9 GJ/t for hot-rolling wires (Worrell et al., 2008). Thin slab casting requires about 1 GJ/t, but strip casting consumes only 100–400 MJ/t.

 
