Still the Iron Age

by Vaclav Smil


  Making specialty steel is more energy intensive. Production of the most common variety of stainless steel (18-8, with 18% Cr and 8% Ni) using an EAF (charged with 350 kg of steel and 400 kg of stainless scrap) and an argon oxygen decarburization (AOD) sequence requires at least 1.21 GJ/t. All of these values refer only to the direct input of electricity and exclude losses in generating and transmitting electricity, as well as all second-order inputs, including the energy cost of the furnace itself and of its replacement electrodes and refractories. Actual electricity (direct energy) use in modern EAFs is about 2.5 GJ/t; even with a high average conversion efficiency of 40%, that means 6.25 GJ/t of primary energy wherever the electricity is generated by the combustion of fossil fuels in central power stations.
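  The direct-to-primary conversion in the last sentence is simple arithmetic; here is a minimal sketch using the values given above (2.5 GJ/t of electricity and a 40% generation efficiency):

```python
# Convert an EAF's direct electricity use into primary energy, assuming
# the electricity comes from fossil-fueled stations of a given efficiency.

def primary_energy(direct_gj_per_t: float, efficiency: float) -> float:
    """Primary energy (GJ/t) implied by direct electricity use (GJ/t)."""
    return direct_gj_per_t / efficiency

# The values cited above: 2.5 GJ/t of electricity at 40% average efficiency.
print(primary_energy(2.5, 0.40))  # 6.25 GJ/t of primary energy
```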

  Energy savings resulting from the adoption of new processes and from gradual improvements of old practices have eventually added up to impressive reductions per unit of final product. The total energy requirement for the UK’s finished steel was cut from about 90 GJ/t in 1920 to below 50 GJ/t by 1950, during the decades relying on BF, OHF, and traditional casting. By 1970, the best integrated mills still using OHF needed 30–45 GJ/t of hot metal, but by the late 1970s (with higher shares of BOF and CC) nationwide means in both the United Kingdom and the United States were less than 25 GJ/t. The combined effects of advances in integrated (BF–BOF–CC) steelmaking and of higher reliance on EAF reduced the typical energy cost to less than 20 GJ/t by the early 1990s, with more than two-fifths of the savings due to pig iron smelting, a few percent claimed by BOFs, and the remainder achieved in rolling and shaping (De Beer, Worrell, & Blok, 1998; Leckie et al., 1982).

  In the United States, the final energy use per tonne of crude metal shipped by the steel industry declined from about 68 GJ/t in 1950 to just over 60 GJ/t in 1970 and to 45 GJ/t in 1980; then, with the shift toward mini-mills and EAFs, it fell by nearly three-quarters in three decades: by the year 2000, the US nationwide rate was 17.4 GJ/t (USEPA, 2012). A detailed study of the sector’s energy intensity (including all cokemaking, agglomeration, ironmaking, steelmaking, casting, hot and cold rolling, and galvanizing and coating) put the nationwide mean at 14.9 GJ/t in 2006 (Hasanbeigi et al., 2011); and by 2010 the rate was just 11.8 GJ/t, the industry having reduced its average energy need by nearly 75% in three decades. In 2005, the American Iron and Steel Institute published a roadmap for the transformation of steelmaking processes: SOBOT (saving one barrel of oil per ton) should lower the overall energy cost from an equivalent of 2.07 barrels of oil per ton in 2003 to just 1.2 barrels a ton in 2025 (AISI, 2005). The comparison assumes a 49% EAF share in 2003 and a 55% EAF share in 2025, and the 2025 rate would be equivalent to about 9.7 GJ/t.
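  The barrel-to-gigajoule arithmetic behind these targets can be sketched as follows; the ~6.1 GJ per barrel of crude oil and the 0.907 t short ton are standard approximations assumed here, not AISI's own factors (which evidently differ somewhat, since 1.2 barrels/ton is stated above to equal about 9.7 GJ/t):

```python
# Rough conversion of the SOBOT targets from barrels of oil per short ton
# to GJ per tonne; the conversion factors are assumed approximations, not
# AISI's own figures.

GJ_PER_BARREL = 6.1            # approximate energy content of a barrel of crude
TONNES_PER_SHORT_TON = 0.907185

def barrels_per_ton_to_gj_per_t(barrels: float) -> float:
    return barrels * GJ_PER_BARREL / TONNES_PER_SHORT_TON

print(barrels_per_ton_to_gj_per_t(2.07))  # ~13.9 GJ/t (2003 baseline)
print(barrels_per_ton_to_gj_per_t(1.20))  # ~8.1 GJ/t (2025 target)
```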

  For China, the world’s largest steel producer, we have several recent studies. Guo and Xu (2010) put the national average of energy requirements for steelmaking at 22 GJ/t in the year 2000 and 20.7 GJ/t in 2005, with 2004 rates for coking at 4.1 GJ/t, ironmaking at 13.5 GJ/t, EAF steelmaking at 6.0 GJ/t, and rolling at 2.6 GJ/t. Chen, Yin, and Ma (2014) found that the average energy requirement of China’s key iron and steel enterprises (hence not a true national average) declined by nearly 20% between 2005 and 2012, when it reached 17.5 GJ/t, and that there were substantial differences between the average, the most efficient, and the least efficient enterprises: in 2012 the relevant rates were 11.6, 9.9, and 13.5 GJ/t for ironmaking, and 2, 0.7, and 5.3 GJ/t for steelmaking in EAFs.

  Analyses of energy use by Canada’s iron and steel industry show a less impressive decline, from a mean of 20.9 GJ/t of crude steel in 1990 to 17.23 GJ/t in 2012, a reduction of about 20% in 22 years (Nyboer & Bennett, 2014). Reductions in specific energy consumption in German steelmaking have been even smaller, amounting to just 6.3% between 1991 and 2007, with about 75% of those gains explained by a structural shift away from BF/BOF toward a higher share of EAF production (Arens, Worrell, & Schleich, 2012). Gains in BF efficiency have amounted to only 4%, with the heat rate declining from 12.5 to 12 GJ/t in 16 years. Average energy consumption of the German iron and steel industry in 2013 was 19.23 GJ/t measured in terms of finished steel products (a 21% reduction since 1990) and 17.42 GJ/t in terms of crude steel (Stahlinstitut VDEh, 2014). And JFE Steel, Japan’s second-largest steel producer, lowered its specific energy use from 28.3 GJ/t of steel in 1990 to 23.3 GJ/t in 2006, with the same rate still applying in 2011 (Ogura et al., 2014).

  There used to be substantial intranational (regional) differences between the energy requirements of steelmaking in large economies, but the diffusion of modern procedures has narrowed the gaps. At the same time, differences in nationwide averages of the energy cost of steelmaking will persist. Higher rates are caused by less exacting operation and maintenance procedures as well as by the low quality of inputs, such as India’s inferior coking coals (with, even after blending, 12–17% ash compared to 8–9% elsewhere) or iron ores requiring energy-intensive beneficiation. As a result, in comparison with practices prevailing among the world’s most efficient producers, India’s cokemaking consumes 30–35% more energy, and its iron ore extraction and preparation has a 7–10% higher energy intensity; Samajdar (2012) puts the aggregate average range at 27–35 GJ/t.

  China’s steelmaking used to be very inefficient: during the early 1990s the mean energy cost was 46–47 GJ/t of metal, and after rapid additions of new, modern capacities the rate fell to a still high 30 GJ/t by the year 2000 (Zhang & Wang, 2009). Continuing improvements and the unprecedented addition of large, modern, efficient plants during the past two decades have resulted in further reductions of energy intensity, but a detailed comparison of the energy costs of steel in the United States and China showed that by 2006 the nationwide mean for China’s crude steel production (23.11 GJ/t) was still 55% above the US average of 14.9 GJ/t (Hasanbeigi et al., 2014).

  But national means of energy costs reflect not only many specific technical accomplishments (or lack thereof) but also the shares of the major steelmaking routes: countries with higher shares of scrap recycling have significantly lower national means. When Hasanbeigi et al. (2014) performed another analysis that assumed a US share of EAF production as low as China’s (just 10.5% in 2006, a share obviously limited by steel scrap availability in a country whose metal stock began to grow rapidly only during the 1990s), the US mean rose to 22.96 GJ/t, virtually identical to the Chinese mean (hardly a surprising finding, given that most of China’s steelmaking capacity was, as just noted, installed after the mid-1990s).
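  The logic of this share adjustment is a weighted average of route-specific intensities; the sketch below illustrates it with hypothetical route-level numbers (the 26 and 10 GJ/t intensities are placeholders of my own, not values from Hasanbeigi et al.):

```python
# A national mean is a weighted average of the energy intensities of the
# BF-BOF and EAF routes; lowering the EAF share raises the mean. The route
# intensities below are hypothetical placeholders, not published values.

def national_mean(eaf_share: float, bf_bof_gj: float, eaf_gj: float) -> float:
    return eaf_share * eaf_gj + (1.0 - eaf_share) * bf_bof_gj

BF_BOF, EAF = 26.0, 10.0  # hypothetical route intensities, GJ/t

print(national_mean(0.57, BF_BOF, EAF))   # ~16.9 GJ/t with a high EAF share
print(national_mean(0.105, BF_BOF, EAF))  # ~24.3 GJ/t with China's 2006 share
```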

  Differences arising from the choice of analytical boundaries and conversion factors are well illustrated by an international comparison of steel’s energy cost published by Oda et al. (2012). In their macrostatistical approach, they excluded the energy cost of ore and coal extraction and of their transportation to steel mills, included the cost of cokemaking and ore agglomeration and all direct and indirect energy inputs into blast, oxygen, and electric furnaces, casting, and rolling, and converted all electricity at a rate of 1 MWh = 10.8 GJ. Their results are substantially higher than all the other cited estimates: their average for the BF–BOF route in the United States, 35.5 GJ/t, is nearly two and a half times the US mean calculated by Hasanbeigi et al. (2011). Other rates are 28.8 GJ/t for the EU, 25.7 GJ/t for Japan, 30.5 GJ/t for China, and 30 GJ/t for India (both rates about 15% lower than in the United States!), but 65 GJ/t for Russia and a worldwide mean of 32.7 GJ/t, all for the year 2005.
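  The choice of that conversion factor is consequential: 1 MWh of electricity is 3.6 GJ, so valuing it at 10.8 GJ of primary energy implies a power-plant efficiency of one-third; a quick check:

```python
# Oda et al.'s electricity conversion: 1 MWh = 3.6 GJ of electricity,
# counted as 10.8 GJ of primary energy, implies a generation efficiency
# of 3.6/10.8, i.e. one-third.

ELECTRICITY_GJ_PER_MWH = 3.6
ODA_PRIMARY_GJ_PER_MWH = 10.8

print(f"{ELECTRICITY_GJ_PER_MWH / ODA_PRIMARY_GJ_PER_MWH:.1%}")  # 33.3%

# For an EAF drawing 2.5 GJ of electricity per tonne, this accounting
# yields 2.5 / 3.6 * 10.8 = 7.5 GJ/t of primary energy.
print(2.5 / ELECTRICITY_GJ_PER_MWH * ODA_PRIMARY_GJ_PER_MWH)  # 7.5
```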

  Finally, a few key comparisons of the industry’s energy requirements. My approximate calculation is that in 2013 the worldwide production of iron and steel claimed at least 35 EJ of fuels and electricity, or less than 7% of the world’s total primary energy supply; for comparison, Laplace Conseil (2013) put the share at about 5% for 2012, compared to 23% for all other industries, 27% for transportation, and 36% for residential use and services. In either case, that makes iron and steel the world’s largest energy-consuming industrial sector, further underscoring the need for continuing efficiency gains. In terms of specific fuels, the sector claims 11% of all coal output but only about 2% of all natural gas and 1% of electricity (its use of liquid hydrocarbons is negligible).
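  The share follows directly from the two quantities; a back-of-the-envelope check (the ~540 EJ world primary supply for 2013 is my assumed round figure, consistent with the shares cited):

```python
# Back-of-the-envelope check of the sector's share of world primary energy.
# The ~540 EJ world supply for 2013 is an assumed round figure.

STEEL_ENERGY_EJ = 35.0
WORLD_PRIMARY_EJ = 540.0

print(f"{STEEL_ENERGY_EJ / WORLD_PRIMARY_EJ:.1%}")  # ~6.5%, i.e., <7%
```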

  At the same time, it is necessary to appreciate the magnitude of the past improvements. If the sector’s energy intensity had remained at its 1900 level, then today’s ferrous metallurgy would be claiming no less than 25% of all the world’s primary commercial energy. And if the industry’s performance had remained arrested at the 1960s level (when it needed 2.5 times as much energy as it does now), then the making of iron and steel would require at least 16% of the world’s primary energy supply.
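  Both counterfactuals amount to scaling today’s share by the ratio of past to present intensity; under the same assumptions as in the previous sketch (the ~4x factor for 1900 is implied by the 25% figure rather than stated in the text):

```python
# Counterfactual shares: scale the current share of world primary energy
# by the ratio of past to present energy intensity.

current_share = 35.0 / 540.0         # ~6.5%, as in the previous sketch
print(f"{current_share * 2.5:.0%}")  # 1960s intensity: ~16% of world supply
print(f"{current_share * 4.0:.0%}")  # 1900 intensity (~4x today): ~26%
```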

  National shares depart significantly from the global mean, reflecting both the magnitude of annual output and the importance of other energy-consuming sectors. In 1990, Japan’s iron and steel industry consumed 13.6% of the nation’s primary energy; the share was down to 10.7% in the year 2000 and to a marginally better 10.3% in 2010, indicating the still relatively high importance of ferrous metallurgy in the country’s economy (JISF, 2015). Energy consumption of the US iron and steel industry peaked in 1974 at about 3.8 EJ, or roughly 5% of the country’s total primary energy use. The post-1980 decline of pig iron smelting, the country’s high rates of energy use in households, transportation, and services, and improvements in industrial energy intensity combined to lower ferrous metallurgy’s overall energy claim to only 1.3% of all primary energy by 2013.

  Similarly, in Canada the share of the iron and steel industry in national primary energy use declined from 2.5% in 1990 to 1.6% in 2010 (Nyboer & Bennett, 2014). In contrast, China’s primary energy demand is still dominated by industrial enterprises whose output has made the country the world’s largest exporter of manufactured goods and has provided inputs for a domestic economy that, until 2013, grew at double-digit rates. Because of the unprecedented post-1995 expansion of China’s steelmaking, its energy claim translates into an unusually high share of overall energy use: it rose from just over 10% in 1990 to nearly 13% by the year 2000 (Zhang & Wang, 2009); Guo and Xu (2010) put it at 15.2% for the year 2005; and in 2013 it was, according to my calculations, nearly 16%, much higher than in any other economy.

  Given the substantial gains achieved during the past two generations (recall how closely some of the best practices have now approached the theoretical minima), future opportunities for energy savings in the iron and steel industry are relatively modest, but important in the aggregate. Details of these opportunities are reviewed and assessed by, among many others, AISI (2005), Brunke and Blesl (2014), Ogura et al. (2014), USEPA (2007, 2012), and Worrell et al. (2010). Their deployment is still rewarding even in Japan, the country with the highest overall steelmaking efficiency (Tezuka, 2014). Besides such commonly used energy-saving measures as dry coke quenching and the recovery of heat in sintering or in BF top-pressure gas turbines, Japanese steelmakers have also introduced a new scrap-melting shaft furnace (20 m tall, 3.4 m in diameter, with an annual capacity of 0.5 Mt) and a new sintering process in which coke breeze is partially replaced by natural gas (Ogura et al., 2014).

  Air and Water Pollution and Solid Wastes

  Ferrous metallurgy offers one of the best examples of how a traditional iconic polluter, particularly as far as atmospheric emissions are concerned, can clean up its act, and do so to such an extent that it ceases to rank among today’s most egregious offenders. But the environmental impacts of iron- and steelmaking go far beyond the release of airborne pollutants, and I will also review the most worrisome consequences in terms of waste disposal, demand for water, and water pollution. And while iron and steel mills are relatively compact industrial enterprises that do not claim unusually large areas of flat land (many of them, particularly in Japan, are located on reclaimed land), the extraction of iron ores has major local and regional land use impacts in areas with large-scale extraction, above all in Western Australia and in Pará and Minas Gerais in Brazil.

  All early cokemaking, iron smelting, and steelmaking operations could be easily detected from afar due to their often voluminous releases of the air pollutants emblematic of the industrial era: particulate matter (both relatively coarse particles with diameters of up to 10 μm and fine particles with diameters of less than 2.5 μm that can penetrate deep into the lungs), sulfur dioxide (SO2), nitrogen oxides (NOx, including NO and NO2), carbon monoxide (CO) from incomplete combustion, and volatile organic compounds. Where these uncontrolled emissions were confined by valley locations with reduced natural ventilation, the result was chronically excessive local and regional air pollution: Pittsburgh and its surrounding areas were perhaps the best American illustration of this phenomenon.

  Recent Chinese rates and totals illustrate both the significant contribution of the sector to national pollution flows and the opportunities for effective controls. Guo and Xu (2010) estimated that the sector accounted for about 15% of total atmospheric emissions, 14% of all wastewater and waste gas, and 6% of solid waste, and they put the nationwide emission averages in the year 2000 (all per tonne of steel) at 5.56 kg of SO2, 5.1 kg of dust, 1.7 kg of smoke, and 1 kg of chemical oxygen demand (COD). But just 5 years later, the spread of air and water pollution controls and higher conversion efficiencies had reduced the emissions of SO2 by 44%, those of smoke and COD by 58%, and those of dust by 70%.

  Particulates are released at many stages, during ore sintering and in all phases of integrated steelmaking as well as from EAFs and DRI processes, but efficient controls (filters, scrubbers, baghouses, electrostatic precipitators, cyclones) can reduce these releases to small fractions of the uncontrolled rates (USEPA, 2008). Sintering of ores emits up to about 5 kg of particulates per tonne of finished sinter, but after appropriate abatement the maximum EU values in sinter strand waste gas are only about 750 g of dust per tonne of sinter, and the minima are only around 100 g/t; there are also small quantities of heavy metals, with maxima of less than 1 g/t of sinter and minima of less than 1 mg/t (Remus et al., 2013). In the United States, modern agglomeration processes (sintering and pelletizing) emit just 125–250 g of particulates per tonne of enriched ore (USEPA, 2008). Similarly, air pollution controls in modern coking batteries limit dust releases to less than 300 g/t of coke and SOx emissions (after desulfurization) to less than 900 g/t, and even to less than 100 g/t.

  Smelting in BFs releases up to 18 kg of top gas dust per tonne of pig iron, but the gas is recovered and treated. Steelmaking in BOFs and EAFs can generate 15–20 kg of dust per tonne of liquid steel, but modern controls keep the actual emissions from BOFs to less than 150 g/t, or even to less than 15 g/t, and from EAFs to less than 300 g/t (Remus et al., 2013). Long-term Swedish data show average specific dust emissions from the country’s steel plants falling from nearly 3 kg/t of crude steel in 1975 to 1 kg/t by 1985 and to only about 200 g/t by 2005 (Jernkontoret, 2014).

  But there is another class of air pollutants that is worrisome not because of its overall emitted mass but because of its toxicity. Hazardous air pollutants originate in coke ovens, BFs, and EAFs. Hot coke oven gas is cooled to separate a liquid condensate (processed into commercial by-products including tar, ammonia, naphthalene, and light oil) from the gas (containing nearly 30% H2 and 13% CH4), which is used or sold as fuel. Coking is a source of particulates, volatile organic compounds, and polynuclear aromatic hydrocarbons: uncontrolled emissions per tonne of coke are up to 7 kg of particulate matter, up to 6 kg of sulfur oxides, around 1 kg of nitrogen oxides, and 3 kg of volatile organics. Ammonia is the largest toxic pollutant emitted from cokemaking, and relatively large volumes of hydrochloric acid (HCl) originate in the pickling of steel, when the acid is used to remove oxides and scale from the surface of the finished metal. Manganese, essential in ferrous metallurgy due to its ability to fix sulfur, deoxidize, and help in alloying, has the highest toxicity among the released metallic particulates, with chromium, nickel, and zinc being much less worrisome.

  But, again, modern controls can make a substantial difference: USEPA’s evaluations show that the sector’s toxicity score (normalized by annual production of iron and steel) declined by almost half between 1996 and 2005 and that the mass of all toxic chemicals released was reduced by 66% (USEPA, 2008). These improvements have continued since that time. Water used in coke production and for cooling furnaces is largely recycled, and the wastewater volumes that have to be treated are relatively small, typically just 0.1–0.5 m³/t of coke and 0.3–6 m³/t of BOF steel. Wastewater from BOF gas treatment is processed by electrical flocculation, while mill scale, oil, and grease have to be removed from the wastewater from continuous casting. EAFs produce only small amounts of dusts and sludges, usually less than 13 kg/t of steel (WSA, 2014a). Dust and sludge removed from escaping gases have a high iron content and can be reused by the plant, while zinc oxides captured during EAF operation can be resold.

  But the mass of solid waste generated by iron smelting in BFs is an order of magnitude larger, typically about 275 kg/t of steel (with extremes of 250–345 kg/t), and steelmaking in BOFs adds another 125 kg/t (85–165 kg/t). The BF/BOF route thus leaves behind about 400 kg of slag per tonne of metal, and global steelmaking now generates about 450 Mt of slag a year, and yet this large mass poses hardly any disposal problems. The concentrated and predictably constant production of the material, and its physical and chemical qualities that make it suitable for industrial and agricultural uses, mean that slag is not just another bothersome waste stream but a commercially useful by-product.
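  The global total is consistent with the per-tonne rates; a rough sketch (the ~1.1 Gt of annual BF-BOF steel output is my assumed approximation, not a figure from the text):

```python
# Rough consistency check: ~400 kg of slag per tonne of BF/BOF steel times
# an assumed ~1100 Mt of annual BF-BOF output gives the global slag total.

SLAG_KG_PER_T = 275 + 125   # BF slag + BOF slag, kg per tonne of steel
BF_BOF_OUTPUT_MT = 1100     # assumed annual BF-BOF steel output, Mt

print(SLAG_KG_PER_T / 1000 * BF_BOF_OUTPUT_MT)  # ~440 Mt/year, near the ~450 Mt cited
```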

  The material is marketed in several forms that find specific uses (NSA, 2015; WSA, 2014b). Granulated slag, produced by rapid water cooling, is a sand-like material whose principal use is incorporation into standard (Portland) cement. Air-cooled slag is a hard, dense, chunky material that is crushed and screened to produce desirable sizes used as aggregate in precast and ready-mixed concrete and in asphalt mixtures, or as railroad ballast and permeable fill for road bases, septic fields, and pipe beds. Pelletized (expanded) slag resembles volcanic rock, and its lightness and (when ground) excellent cementitious properties make it a perfect aggregate for making cement or for addition to masonry. Expanded slag is now widely used in the construction industry, and Lei (2011) reported that in 2010 China’s cement industry used all available metallurgical slag (about 223 Mt in that year). Brazilian figures for 2011 show 60% of slag used in cement production, 16% put into road bases, and 13% used for land leveling (CNI, 2012).

 
