The Most Powerful Idea in the World
Thanks to the administrative genius of Harding and his successors, the order eventually comprised more than eight hundred monasteries, all over Europe, that contained the era’s most productive farms, factories—and ironmongeries. Iron was an even larger contributor to the Cistercians’ reputation than their expertise in agriculture or machinery, and was a direct consequence of Harding’s decision that because some forms of labor were barred to his monastic brothers, others, particularly metalworking, needed to be actively encouraged. The Cistercian monastery in Velehrad (today a part of the Czech Republic) may have been using waterwheels for ironmaking as early as 1269. By 1330, the Cistercians operated at least a dozen smelters and forges in the Burgundy region, of which the largest (and best preserved today) is the one at Fontenay Abbey: more than 150 feet long by nearly thirty feet wide, still bearing the archaeological detritus of substantial iron production.
Which brings us back to Rievaulx. In 1997, a team of archaeologists and geophysicists from the University of Bradford, led by Rob Vernon and Gerry McDonnell, came to north Yorkshire in order to investigate twelfth-century ironmaking techniques. This turns out to be a lot more than traditional pick-and-shovel archaeology; since the earth itself has a fair amount of residual iron (and therefore electrical conductivity), calculating the amount and quality of iron produced at any ruin requires extremely sophisticated high-tech instruments, with intimidating names like magnetometers and fluxgate gradiometers, to separate useful information from the random magnetism found at a suspected ironworking site. What Vernon and McDonnell found caused quite a stir in the world of technological history: the furnaces in use during the thirteenth century at one of Rievaulx Abbey’s iron smelters were producing iron at a level of technical sophistication equivalent to that of eighteenth-century Britain. Evidence from the residual magnetism in the slag piles and pits in the nearby village of Laskill revealed that the smelter in use was not only a relatively sophisticated furnace but was, by the standards of the day, huge: built of stone, at least fifteen feet in diameter, and able to produce consistently high-quality iron in enormous quantities. In the line that figured in almost every news account of the expedition, Henry VIII’s decision to close the monasteries in 1536 (a consequence of his divorce from Catherine of Aragon and his break with Rome) “delayed the Industrial Revolution by two centuries.”
Even if the two-century delay was a journalistic exaggeration—the Cistercians in France, after all, were never suppressed, and the order was famously adept at diffusing techniques throughout all its European abbeys—it deserves attention as a serious thesis about the birth of the world’s first sustained era of technological innovation. The value of that thesis, of course, depends on the indispensability of iron to the Industrial Revolution, which at first glance seems self-evident.
First glances, however, are a poor substitute for considered thought. Though the discovery at Laskill is a powerful reminder of the sophistication of medieval technology, the Cistercians’ proven ability to produce substantial quantities of high-quality iron not only fails to prove that they were about to ignite an Industrial Revolution when they were suppressed in the early sixteenth century, it actually demonstrates the opposite—and for two reasons. First, the iron of Laskill and Fontenay was evidence not of industrialization, but of industriousness. The Cistercians owed their factories’ efficiency to their disciplined and cheap workforce rather than any technological innovation; there’s nothing like a monastic brotherhood that labors twelve hours a day for bread and water to keep costs down. The sixteenth-century monks were still using thirteenth-century technology, and they neither embraced, nor contributed to, the Scientific Revolution of Galileo and Descartes.
The second reason is even more telling: For centuries, the Cistercian monasteries (and other ironmakers; the Cistercians were leaders of medieval iron manufacturing, but they scarcely monopolized it) had been able to supply all the high-quality iron that anyone could use, but all that iron still failed to ignite a technological revolution. Until something happened to increase demand for iron, smelters and forges, like the waterpower that drove them, sounded a lot like one hand clapping. It would sound like nothing else for—what else?—two hundred years.
THE SEVERN RIVER, THE longest in Britain, runs for more than two hundred miles from its source in the Cambrian Mountains of Wales to its mouth at the Bristol Channel. The town of Ironbridge in Shropshire is about midway between mouth and source, just south of its confluence with the Tern River. Today the place is home not only to its eponymous bridge—the world’s first to be made of iron—but to the Ironbridge Institute, one of the United Kingdom’s premier institutions for the study of what is known nationally as “heritage management.” The Ironbridge Gorge, where the bridge is located, is one of UNESCO’s World Heritage Sites, along with the Great Wall of China, Versailles, and the Grand Canyon.* The reason is found in the nearby town of Coalbrookdale.
Men were smelting iron in Coalbrookdale by the middle of the sixteenth century, and probably long before. The oldest surviving furnace at the site is treated as a pretty valuable piece of world heritage itself. Housed inside a modern glass pyramid at the Museum of Iron, the “old” furnace, as it is known, is a rough rectangular structure, maybe twenty feet on a side, that looks for all the world like a hypertrophied wood-burning pizza oven. It is built of red bricks still covered with soot that no amount of restoration can remove. When it was excavated, in 1954, the pile of slag hiding it weighed more than fourteen thousand tons, tangible testimony to the century of smelting performed in its hearth beginning in 1709, when it changed the nature of ironmaking forever.
Ironmaking involves a lot more than just digging up a quantity of iron ore and baking it until it’s hot enough to melt—though, to be sure, that’s a big part of it. Finding the ore is no great challenge; more than 5 percent of the earth’s crust is iron, and the planet’s core is mostly a nickel-iron alloy, but the metal rarely appears in an obligingly pure form. Most of the ores that can be found in nature are oxides: iron plus oxygen, in sixteen different varieties, most commonly hematite and magnetite, with bonus elements like sulfur and phosphorus in varying amounts. To make a material useful for weapons, structures, and so on, the oxygen and other impurities must be separated from the iron by smelting, in which the iron ore is heated by a fuel that creates a reducing atmosphere—one that strips the oxygen from the ore. The usual fuel is one that contains carbon, because when two carbon atoms are heated in the bottom of the furnace in the presence of oxygen—O2—they become two molecules of carbon monoxide. The CO in turn reacts with iron oxide as it rises, liberating the oxygen as carbon dioxide—CO2—and metallic iron.
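Written out in the same shorthand the passage uses for O2 and CO2, that first step in the fuel bed is:

2C + O2 → 2CO

The second step, the reduction itself, is the reaction below.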
Fe2O3 + 3CO → 2Fe + 3CO2
There are a lot of other chemical reactions involved, but that’s the big one, since the first step in turning iron ore into a bar of iron is getting the oxygen out; the second one is putting carbon in. And that is a bit more complicated, because the molecular structure of iron—the crystalline shapes into which it forms—changes with heat. At room temperature, and up to about 900°C, iron organizes itself into cubes, with an iron atom at each corner and another in the center of the cube. When it gets hotter than 900°C, the structure changes into a cube with the same eight iron atoms at the corners and another in the center of each face of the cube; at about 1300°C, it changes back to a body-centered crystal. If the transformation takes place in the presence of carbon, carbon atoms take their place in the crystal lattices, increasing the metal’s hardness and durability by several orders of magnitude and reducing the malleability of the metal likewise. The percentage of carbon that bonds to iron atoms is the key: If more than 4.5 percent of the final mixture is carbon, the final product is hard, but brittle: good, while molten, for casting, but hard to shape, and far stronger in compression than when twisted or bent. With the carbon percentage less than 0.5 percent, the iron is eminently workable, and becomes the stuff that we call wrought iron. And when the percentage hits the sweet spot of between about 0.5 percent and 1.85 percent, you get steel.
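For readers who want those cutoffs laid out in one place, here is a minimal sketch in Python that encodes only the rough percentages quoted in this passage; a real iron-carbon phase diagram is considerably more complicated, and the function itself is just an illustration built from the passage’s numbers.

def classify_iron(carbon_percent):
    """Rough classification by carbon content, using only the cutoffs quoted in the passage."""
    if carbon_percent > 4.5:
        return "cast iron: hard but brittle, good for casting while molten"
    if carbon_percent < 0.5:
        return "wrought iron: eminently workable"
    if 0.5 <= carbon_percent <= 1.85:
        return "steel: the 'sweet spot' of hardness and workability"
    return "high-carbon iron between the quoted ranges (not discussed in the passage)"

for pct in (0.1, 1.0, 3.0, 5.0):
    print(f"{pct}% carbon -> {classify_iron(pct)}")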
This is slightly more complicated than making soup. The different alloys of carbon and iron, each with different properties, form at different times depending upon the phase transitions between face-centered and body-centered crystalline structures. The timing of those transitions, in turn, varies with temperature, pressure, the presence of other elements, and half a dozen other variables, none of them obvious. Of course, humans were making iron for thousands of years before anyone had anything useful to say about atoms, much less molecular bonds. They were making bronze, from copper and tin, even earlier. During the cultural stage that archaeologists call “the” Iron Age—the definite article is deceptive; Iron Age civilizations appeared in West Africa and Anatolia sometime around 1200 BCE, five hundred years later in northern Europe*—early civilizations weaned themselves from the equally sturdy bronze (probably because of a shortage of easily mined tin) by using trial and error to combine the ore with heat and another substance, such as limestone (in the jargon of the trade, a flux), which melted out impurities such as silicon and sulfur. The earliest iron furnaces were shafts generally about six to eight feet high and about a foot in diameter, in which the burning fuel could get the temperature up to about 1200°C, which was enough for wrought, but not cast, iron.
By the sixteenth century, ironmaking began to progress beyond folk wisdom and trial and error. The first manuals of metallurgy started to appear in the mid-1500s, most especially De re metallica by the German Georg Bauer, writing under the name Agricola, who described the use of the first European blast furnaces, known in German as Stückofen, which had hearths roughly five feet long and three feet high, with a foot-deep crucible in the center:
A certain quantity of iron ore is given to the master [who] throws charcoal into the crucible, and sprinkles over it an iron shovelful of crushed iron ore mixed with unslaked lime. Then he repeatedly throws on charcoal and sprinkles it with ore, and continues until he has slowly built up a heap; it melts when the charcoal has been kindled and the fire violently stimulated by the blast of the bellows….
Agricola’s work was so advanced that it remained at the cutting edge of mining and smelting for a century and a half. The furnaces he described replaced the earlier forges, known as bloomeries, which produced a spongelike combination of iron and slag—a “bloom”—from which the slag could be hammered out, leaving a fairly low-carbon iron that could be shaped and worked by smiths, hence wrought iron.
Though relatively malleable, early wrought iron wasn’t terribly durable; okay for making a door, but not nearly strong enough for a cannon. The Stückofen, however, like its narrower successor, the blast furnace, was built to introduce the iron ore and flux at the top of the shaft and to force air in at the bottom. The result, once gravity dropped the fuel through the superheated air, which was “blasted” into the chamber and rose via convection, was a furnace that could actually get hot enough to transform the iron. At about 1500°C, the metal undergoes the transition from face-centered to body-centered crystal and back again, absorbing more carbon, making it very hard indeed. This kind of iron—pig iron, supposedly named because the relatively narrow channels emerging from the much wider smelter resembled piglets suckling—is so brittle that it is only useful after being poured into forms usually made of loam, or clay.
Those forms could be in the shape of the final iron object, and quite a few useful items could be made from the cast iron so produced. They could also, and even more usefully, be converted into wrought iron by blowing air over heated charcoal and pig iron, which, counterintuitively, simultaneously consumed the carbon in both fuel and iron, “decarbonizing” it to the <1 percent level that permitted shaping as wrought iron (this is known as the “indirect method” for producing wrought iron). The Cistercians had been doing so from about 1300, but they were, in global terms, latecomers; Chinese iron foundries had been using these techniques two thousand years earlier.
Controlling the process that melted, and therefore hardened, iron was an art form, like cooking on a woodstove without a thermostat. It’s worth remembering that while recognizably human cultures had been using fire for everything from illumination to space heating to cooking for hundreds of thousands of years, only potters and metalworkers needed to regulate its heat with much precision, and they developed a large empirical body of knowledge about fire millennia before anyone could figure out why a fire burns red at one temperature and white at another. The clues for extracting iron from highly variable ores were partly texture—a taffylike bloom, at the right temperature, might be precisely what the ironmonger wanted—partly color: When the gases in a furnace turned violet, what would be left behind was a pretty pure bit of iron.*
Purity was important: Ironmakers sought just the right mix of iron and carbon, and knew that any contamination by other elements would spoil the iron. Though they were ignorant of the chemical reactions involved, they soon learned that mineral fuels such as pitcoal, or its predecessor, peat, worked poorly, because they introduced impurities, and so, for thousands of years, the fuel of choice was charcoal. The blast furnace at Rievaulx Abbey used charcoal. So did the one at Fontenay Abbey. And, for at least a century, so did the “old” furnace at Coalbrookdale. Unfortunately, that old furnace, like all its contemporaries, needed a lot of charcoal: The production of 10,000 tons of iron demanded nearly 100,000 acres of forest, which meant that a single seventeenth-century blast furnace could denude more than four thousand acres each year.
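The acreage arithmetic is easy to check; here is a back-of-the-envelope sketch in Python, with the annual output of a single furnace (roughly four hundred tons) assumed, since the passage only implies it.

# Back-of-the-envelope check of the charcoal figures quoted above.
acres_per_ton = 100_000 / 10_000   # "nearly 100,000 acres" of forest per 10,000 tons of iron
assumed_tons_per_year = 400        # assumed output of one seventeenth-century furnace (not stated outright)

print(f"{acres_per_ton:.0f} acres of woodland per ton of iron")
print(f"about {acres_per_ton * assumed_tons_per_year:,.0f} acres consumed per furnace per year")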
Until 1709, and the arrival of Abraham Darby.
ABRAHAM DARBY WAS BORN in a limestone mining region of the West Midlands, in a village with the memorable name of Wren’s Nest. He was descended from barons and earls, though the descent was considerable by the time Abraham was born in 1678. His father, a locksmith and sometime farmer, was at least prosperous enough to stake his son to an apprenticeship working in Birmingham for a “malter”—a roaster and miller of malt for use in beer and whisky. Abraham’s master, Jonathan Freeth, like the Darby family, was a member of the Religious Society of Friends. By the time he was an adult, Darby had been educated in a trade and accepted into a religious community, and it is by no means clear which proved the more important in his life—indeed, in the story of industrialization.
Darby’s connection with the Society of Friends—the Quakers—proved its worth fairly early. A latecomer to the confessional mosaic of seventeenth-century England, which included (in addition to the established Anglican church) Mennonites, Anabaptists, Presbyterians, Baptists, Puritans, (don’t laugh) Muggletonians and Grindletonians, and thousands of very nervous Catholics, the Society of Friends was less than thirty years old when Darby was born and was illegal until passage of the Toleration Act of 1689, one of the many consequences of the arrival of William and Mary (and John Locke) the year before. Darby’s Quaker affiliation was to have a number of consequences—the Society’s well-known pacifism barred him, for example, from the armaments industry—but the most important was that, like persecuted minorities throughout history, the Quakers took care of their own.
So when Darby moved to Bristol in 1699, after completing his seven years of training with Freeth, he was embraced by the city’s small but prosperous Quaker community, which had been established in Bristol since the early 1650s, less than a decade after the movement broke away from the Puritan establishment. The industrious Darby spent three years roasting and milling barley before he decided that brass, not beer, offered the swiftest path to riches, and in 1702, the ambitious twenty-five-year-old joined a number of other Quakers as one of the principals of the Bristol Brass Works Company.
For centuries, brass, the golden alloy of copper and zinc, had been popular all over Britain, first as a purely decorative metal used in tombstones, and then, once the deluge of silver from Spain’s New World colonies inundated Europe, as the metal of choice for household utensils and vessels. However, the manufacture of those brass cups and spoons was a near monopoly of the Netherlands, where they had somehow figured out an affordable way of casting them.
The traditional method for casting brass used the same kind of forms used in the manufacture of pig iron: either loam or clay. This was fine for the fairly rough needs of iron tools, but not for kitchenware: fine casting in loam was time-consuming, painstaking, and highly skilled, which made it too costly for the mass market and explains why the technique had originally been developed for more precious metals, such as bronze. Selling kitchenware to working-class English families was only practicable if the costs could be reduced—and the Dutch had figured out how. If the Bristol Brass Works was to compete with imports, it needed to do the same, and Darby traveled across the channel in 1704 to discover how.
The Dutch secret turned out to be casting in sand rather than loam or clay, and upon his return, Darby sought to perfect what he had learned in Holland, experimenting rigorously with any number of different sands and eventually settling, with the help of another ironworker and Quaker named John Thomas, on a material and process that he patented in 1708. It is by no means insignificant that the wording of the patent explicitly noted that the novelty of Darby’s invention was not that it made more, or better, castings, but that it made them at a lower cost: “a new way of casting iron bellied pots and other iron bellied ware in sand only, without loam or clay, by which such iron pots and other ware may be cast fine and with more ease and expedition and may be afforded cheaper than they can by the way commonly used” (emphasis added).