
The Most Powerful Idea in the World


by William Rosen


  Darby realized something else about his new method. If it worked for the relatively rare copper and zinc used to make brass, it might also work for far more abundant, and therefore cheaper, iron. The onetime malter tried to persuade his partners of the merits of his argument, but failed; this was unfortunate for Bristol, but very fortunate indeed for Coalbrookdale, where Darby moved in 1709, leasing the “old furnace.” There, his competitive advantage, in the form of the patent on sand casting for iron, permitted him to succeed beyond expectation, and beyond even the capacity of Coalbrookdale’s forests to supply one of the key inputs of ironmaking: within a year, the oak and hazel forests around the Severn were clear-cut down to the stumps. Coalbrookdale needed a new fuel.

  Abraham Darby wasn’t the first to recognize the potential of a charcoal shortage to disrupt iron production. In March 1589, Queen Elizabeth granted one of those pre–Statute on Monopolies patents to Thomas Proctor and William Peterson, giving them license “to make iron, steel, or lead by using of earth-coal, sea-coal, turf, and peat in the proportion of three parts thereof to one of wood-coal.” In 1612, another patent, this one running thirty-one years, was given to an inventor named Simon Sturtevant for the use of “sea-coale or pit-coale” in metalworking; the following year, Sturtevant’s exclusive was voided and an ironmaster named John Rovenson was granted the “sole priviledge to make iron … with sea-cole, pit-cole, earth-cole, &c.”

  Darby wasn’t even the first in his own family to recognize the need for a new fuel. In 1619, his great-uncle (or, possibly, great-great-uncle; genealogies for the period are vague), Edward Sutton, Baron Dudley, paid a license fee to Rovenson for the use of his patent and set to work turning coal plus iron into gold. In 1622, Baron Dudley patented something—the grant, which was explicitly exempted when Edward Coke’s original Statute on Monopolies took force a year later, recognized that Dudley had discovered “the mystery, art, way, and means, of melting of iron ore, and of making the same into cast works or bars, with sea coals or pit coals in furnaces, with bellows”—but the actual process remained, well, mysterious. More than forty years later, in 1665, Baron Dudley’s illegitimate son, the unfortunately named Dud Dudley, described, in his self-aggrandizing memoir, Dud Dudley’s Metallum martis, their success in using pitcoal to make iron. He did not, however, describe how they did it, and the patents of the period are even vaguer than the genealogies. What can be said for certain is that both Dudleys recognized that iron production was limited by the fact that it burned wood far faster than wood could be grown.

  In the event, the younger Dudley continued to cast iron in quantities that, by 1630, averaged seven tons a week, but politics started to occupy more of his attention. He served as a royalist officer during the Civil War, thereby backing the losing side; in 1651, while a fugitive under sentence of death for treason, and using the name Dr. Hunt, Dudley spent £700 to build a bloomery. His partners, Sir George Horsey, David Ramsey, and Roger Foulke, however, successfully sued him, using his royalist record against him, and took both the bloomery and what remained of “Dr. Hunt’s” money.

  Nonetheless, Dudley continued to try to produce high-quality iron with less (or no) charcoal, both alone and with partners. Sometime in the 1670s, he joined forces with a newly made baronet named Clement Clerke, and in 1693 the “Company for Making Iron with Pitcoal” was chartered, using a “work for remelting and casting old Iron with sea cole [sic].” The goal, however, remained elusive. Achieving it demanded the sort of ingenuity and “useful and reliable knowledge” acquired as an apprentice and an artisan. In Darby’s case, it was a specific and unusual bit of knowledge, dating back to his days roasting malt.

  As it turned out, the Shropshire countryside that had been providing the furnace at Coalbrookdale with wood was also rich in pitcoal. No one, however, had used it to smelt iron because, Dudley and Clerke’s attempts notwithstanding, the impurities it introduced into the molten iron, mostly sulfur, made for a very brittle, inferior product. For similar reasons, coal is an equally poor fuel choice for roasting barley malt: while Londoners would—complainingly—breathe sulfurous air from coal-fueled fireplaces, they weren’t about to drink beer that tasted like rotten eggs. The answer, as Abraham Darby had every reason to know, was coke.

  Coke is what you get when soft, bituminous coal is baked in a very hot oven to draw off most of the contaminants, primarily sulfur. What is left behind is not as pure as charcoal, but it is far cleaner than pitcoal; it was therefore not perfect for smelting iron, but it was a lot cheaper than the rapidly vanishing store of wood. Luckily for Darby, both the ore and the coke available in Shropshire were unusually low in sulfur, which minimized the usual contamination that would otherwise have made the resulting iron too brittle.

  Using coke offered advantages other than low cost. The problem with using charcoal as a smelting fuel, even when it was abundant, was that a blast furnace needs to keep iron ore and fuel in contact while burning in order to incorporate carbon into the iron’s molecular lattice. Charcoal, however, crushed relatively easily, which meant that it couldn’t be piled very high in a furnace before it turned to powder under its own weight. This, in turn, put serious limits on the size of any charcoal-fueled blast furnace.

  Those limits vanished with Darby’s decision in 1710 to use coke, the cakes of which were already compressed by the baking process, in the old furnace at Coalbrookdale. And indeed, the first line in any biography of Abraham Darby will mention the revolutionary development that coke represents in the history of industrialization. But another element of Darby’s life helps even more to illuminate the peculiarly English character of the technological explosion that is the subject of this book.

  The element remains, unsurprisingly, iron.

  IN THE DAYS BEFORE modern quality control, the process of casting iron was highly problematic, since iron ore was as variable as fresh fruit. Its quality depended largely on the other elements bound to it, particularly the quantity of silicon and quality of carbon. Lacking the means to analyze those other elements chemically, iron makers instead categorized by color. Gray iron contains carbon in the form of graphite (the stuff in pencils) and is pretty good as a casting material; the carbon in white iron is combined with other elements (such as sulfur, which makes iron pyrite, or marcasite) that make it far more brittle. The classification of iron, in short, was almost completely empirical. Two men did the decisive work in creating a scale so accurate that it established pretty much the same ten grades used today. One was Abraham Darby; the other was a Frenchman: René Antoine de Réaumur.

  Réaumur was a gentleman scientist very much in the mold of Robert Boyle. Like Boyle, he was born into the aristocracy, was a member of the “established”—i.e. Catholic—church, was educated at the finest schools his nation could offer, including the University of Paris, and, again like Boyle at the Royal Society, was one of the first members of the French Académie. His name survives most prominently in the thermometric system he devised in 1730, one that divided the range between freezing and boiling into eighty degrees; the Réaumur scale stayed popular throughout Europe until the nineteenth century and the incorporation of the Celsius scale into the metric system. His greatest contribution to metallurgical history was his 1722 insight that the structure of iron was a function of the properties of the other elements with which it was combined, particularly sulfur—an insight he, like Darby, used to classify the various forms of iron.
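  The arithmetic behind the scale is simple: both systems put freezing water at zero, so the two differ only by the ratio 100/80. A minimal sketch of the conversion, purely for illustration (the function names are ours, not anything from the period):

```python
# Conversion between the Réaumur and Celsius scales. Both place freezing
# water at 0 degrees; Réaumur puts boiling at 80 where Celsius puts it
# at 100, so the scales differ only by a constant factor of 100/80.

def celsius_to_reaumur(deg_c: float) -> float:
    """Convert degrees Celsius to degrees Réaumur."""
    return deg_c * 0.8

def reaumur_to_celsius(deg_re: float) -> float:
    """Convert degrees Réaumur to degrees Celsius."""
    return deg_re * 1.25

for deg_c in (0.0, 37.0, 100.0):
    print(f"{deg_c:6.1f} °C = {celsius_to_reaumur(deg_c):5.1f} °Ré")
```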

  Unlike Darby, however, he was a scientist before he was an inventor, and long before he was an entrepreneur or even on speaking terms with one. It is instructive that when the government of France, under Louis XV’s minister Cardinal de Fleury, made a huge investment in the development of “useful knowledge” (they used the phrase), Réaumur was awarded a generous pension for his discoveries in the grading of iron—and he turned it down because he didn’t need it.

  Scarcely any two parallel lives do more to demonstrate the differences between eighteenth-century France and Britain: the former a national culture with a powerful affection for pure over applied knowledge, the latter the first nation on earth to give inventors the legally sanctioned right to exploit their ideas. It isn’t, of course, that Britain didn’t have its own Réaumurs—the Royal Society was full of skilled scientists uninterested in any involvement in commerce—but rather that it also had thousands of men like Darby: an inventor and engineer who cared little about scientific glory but a whole lot about pots and pans.

  IF THE CAST IRON used for pots and pans was the most mundane version of the element, the most sublime was steel. As with all iron alloys, carbon is steel’s critical component. In its simplest terms, wrought iron has essentially no minimum amount of carbon, just as there is no maximum carbon content for cast iron. As a result, the recipe for either has a substantial fudge factor. Not so with steel. Achieving steel’s unique combination of strengths demands a very narrow range of carbon: between 0.25 percent and a bit less than 2 percent. For centuries, this has meant figuring out how to initiate the process whereby carbon insinuates itself into iron’s crystalline structure, and how to stop it once it achieves the proper percentage. The techniques used have ranged from the monsoon-driven wind furnaces of south Asia to the quenching and requenching of white-hot iron in water, all of which made steelmaking a boutique business for centuries: good for swords and other edged objects, but not easy to scale up for the production of either a few large pieces or many smaller ones. By the eighteenth century, the most popular method for steelmaking was the cementation process, which stacked a number of bars of wrought iron in a box, bound them together, surrounded them with charcoal, and heated the iron at 1,000° for days, a process that forced some of the carbon into a solid solution with the iron. The resulting high-carbon “blister” steel was expensive, frequently excellent, but, since the amount of carbon was wildly variable, inconsistent.
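  Treated as a sorting rule, the carbon thresholds above fit in a few lines of code. A minimal sketch, assuming the round boundary values given in the text (real alloy boundaries vary by era and ore, and the function name is ours):

```python
# Rough classification of iron alloys by carbon content (percent by
# weight), using the thresholds from the text: steel occupies the narrow
# band between roughly 0.25 percent and a bit under 2 percent carbon.

def classify_iron(carbon_pct: float) -> str:
    if carbon_pct < 0.25:
        return "wrought iron"  # essentially carbon-free; soft, malleable
    elif carbon_pct < 2.0:
        return "steel"         # the narrow band prized for its strength
    else:
        return "cast iron"     # carbon-rich; hard but comparatively brittle

for pct in (0.05, 0.8, 3.5):
    print(f"{pct:4.2f}% carbon -> {classify_iron(pct)}")
```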

  Inconsistently good steel was still better than no steel at all. A swordsman would be more formidable with a weapon made of the “jewel steel” that the Japanese call tamahagane than with a more ordinary alloy, but either one is quite capable of dealing mayhem. Consistency gets more important as precision becomes more valuable, which means that if you had to imagine where consistency in steel manufacturing—uniform strength in tension, for example—mattered most, you could do a lot worse than thinking small. Smaller, even, than kitchenware. Something about the size of, say, a watch spring.

  BENJAMIN HUNTSMAN WAS, LIKE Abraham Darby, born into a Quaker farming family that was successful enough to afford an apprenticeship for him with a clockmaker in the Lincolnshire town of Epworth. There, in a process that should by now seem familiar, he spent seven years learning his trade, and by the time he was twenty-one he was able to open his own shop, with his own apprentice, in the Yorkshire town of Doncaster. Two years later, he was not only making and selling his own clocks, but was given the far-from-honorary duty of caring for the town clock.

  In legend, at least, Huntsman entered the history of iron manufacturing sometime in the 1730s out of dissatisfaction with the quality of the steel used to make his clock springs. Since the mainspring of an eighteenth-century clock provided all of its power as it unwound, and any spring provided less drive force as it relaxed, the rate at which it yielded its energy had to be the same for every clock; an even smaller balance spring was needed to get to an “accuracy” of plus or minus ten minutes a day. Given the number of pieces of steel needed to compensate for a machine whose driving force changed with each second, consistency was more highly prized in clockmaking than in any other enterprise.

  After nearly ten years of secret experiments, Huntsman finally had his solution: smelting blister steel in the clay crucibles used by local glass makers until it liquefied, which eliminated almost all variability from the end product. So successful was the technique, soon enough known as cast, or crucible, steelmaking, that by 1751 Huntsman had moved to Sheffield, twenty miles north of Doncaster, and hung out a shingle at his own forge. The forge was an advertisement for Huntsman’s ingenuity: His crucible used tall chimneys to increase the air draft, and “hard” coke to maintain the cementation temperature. It was also a reminder of the importance of good luck: His furnaces could be made with locally mined sandstone that was strongly resistant to heat, and his crucibles with exceptionally good local clay.

  Up until then, Huntsman’s life was very much like that of a thousand other innovators of eighteenth-century Britain. He departed from the norm, however, in declining to patent his process, trusting the same secrecy he had used in his experiments to be a better protection than legal sanction—supposedly to such a degree that he ran his works only at night. It didn’t, however, work. A competitor named Samuel Walker was the first to spy out Huntsman’s secret, though not the last; Huntsman also attracted industrial spies like the Swede Ludwig Robsahm and the Frenchman Gabriel Jars, who produced reports on the Huntsman process for their own use in 1761 and 1765 respectively. Crucible steel went on to be the world’s most widely used steelmaking technique until the middle of the nineteenth century.

  NEITHER HUNTSMAN’S MAINSPRINGS NOR Darby’s iron pots were enough to build Rocket, and certainly neither represented enough demand to ignite a revolution. The real demand for iron throughout the mid-1700s was in the form of arms and armor, especially once the worldwide conflict known variously as the Seven Years’ War or the French and Indian War commenced in 1754. And in Britain, arms and armor meant the navy.

  Between the Battle of La Hogue in 1692 and Trafalgar in 1805, the Royal Navy grew from about 250 to 950 ships while nearly doubling the number of guns carried by each ship of the line. Every one of those ships required a huge weight of iron, for everything from the cannon themselves to the hoops around hundreds of barrels, and most of that iron was purchased, despite the advances of Darby and others, from the Bergslagen district of Sweden; in 1750, when Britain consumed 50,000 tons, only 18,000 tons were produced at home. The reasons were complex, primarily Scandinavia’s still abundant forests in close proximity to rich veins of iron ore, but they resulted in a net outflow of £1.5 million annually to Sweden alone. Improving the availability and quality of British iron therefore had both financial and national security implications, which was why, in the 1770s, one of the service’s senior purchasing agents was charged with finding a better source for wrought iron. His name was Henry Cort.

  FIFTY YEARS AFTER CORT’S death, The Times could still laud him as “the father of the iron trade.” His origins are a bit murky; he was likely from Lancaster, the son of a brickmaker, and by the 1760s he was working as a clerk to a navy agent: someone charged with collecting the pensions owed to the survivors of naval officers, along with prize money and the like. In due course, the Royal Navy promoted him to a position as one of its purchasing agents, where Cort was charged with investigating other options for securing a reliable source of wrought iron.

  The choice was simple: either import it from Sweden or make it locally, without charcoal. Darby’s coke-fired furnace solved half the problem, but replacing charcoal in the conversion of pig iron into wrought iron—the so-called fining process—demanded a new technique. An alternate finery that dated from the 1730s came to be known as the stamp-and-pot system, in which pig iron was cooled, removed from the furnace and “stamped” or broken into small pieces, and then placed in a clay pot that was heated in a coal-fired furnace until the pot broke, after which the iron was reheated and hammered into a relatively pure form of wrought iron.

  This limited the quantity of iron to the size of the clay pot, which was unacceptable for naval purposes. Cort’s insight was to expand the furnace itself, so that another hearth door opened onto what he called a puddling furnace. In puddling, molten pig iron was stirred by an iron “rabbling” bar, with the fuel kept separate from the iron in order to remove carbon and other impurities; as the carbon left the pig iron, the melting temperature increased, forcing still more carbon out of the mix. It was a brutal job, requiring men with the strength of Olympic weightlifters, the endurance of Tour de France cyclists, and a willingness to spend ten hours a day in the world’s hottest (and filthiest) sauna, stirring a pool filled with a sludge of molten iron using a thirty-pound “spoon.” Temperatures in the coolest parts of the ironworks were typically over 130°; iron was transported by the hundredweight in unsteady wheelbarrows, and the slightest bit of overbalancing meant broken bones. Ingots weighing more than fifty pounds each had to be placed in furnaces at the end of puddlers’ shovels. Huge furnace doors and grates were regularly opened and closed by chains with a frightening tendency to wrap themselves around arms and legs. In the words of historian David Landes, “The puddlers were the aristocracy of the proletariat, proud, clannish, set apart by sweat and blood…. Numerous efforts were made to mechanize the puddling furnace, all of them in vain. Machines could be made to stir the bath, but only the human eye and touch could separate out the solidifying decarburized metal.” What the puddlers pulled, taffylike, from the furnace was a “puddle ball” or “loop” of pure iron ready for the next step.

  The next step in replacing imported metal with local product was forging wrought iron into useful shapes: converting ingots into bars and sheets. Until the late eighteenth century, the conversion was largely a matter of hammering iron that had been softened by heat into bars. A slitting mill run by waterpower then cut the bars into rods, which could then, for example, be either drawn into wire or—more frequently—cut into nails. At Fontley, near Fareham, seventy miles from London, Cort invented grooved rollers, through which the softened iron ingots could be converted into bars and sheets, in the manner of a traditional pasta machine. In 1783, he received a patent for “a peculiar method of preparing, welding, and working various sorts of iron.”

 
