But Carrier’s vision of domestic cool would be postponed by the outbreak of World War II. It wasn’t until the late 1940s, after almost fifty years of experimentation, that air-conditioning finally made its way to the home front, with the first in-window portable units appearing on the market. Within half a decade, Americans were installing more than a million units a year. When we think about twentieth-century miniaturization, our minds naturally gravitate to the transistor or the microchip, but the shrinking footprint of air-conditioning deserves its place in the annals of innovation as well: a machine that had once been larger than a flatbed truck reduced in size so that it could fit in a window.
That shrinking would quickly set off an extraordinary chain of events, in many ways rivaling the impact of the automobile on settlement patterns in the United States. Places that had been intolerably hot and humid—including some of the cities where Frederic Tudor had sweated out the summer as a young man—were suddenly tolerable to a much larger slice of the general public. By 1964, the historic flow of people from South to North that had characterized the post–Civil War era had been reversed. The Sun Belt expanded with new migrants from colder states, who could put up with the tropical humidity or blazing desert climates thanks to domestic air-conditioning. Tucson rocketed from 45,000 people to 210,000 in just ten years; Houston expanded from 600,000 to 940,000 in the same decade. In the 1920s, when Willis Carrier was first demonstrating air-conditioning to Adolph Zukor at the Rivoli Theatre, Florida’s population stood at less than one million. Half a century later, the state was well on the way to becoming one of the four most populous in the country, with ten million people escaping the humid summer months in air-conditioned homes. Carrier’s invention circulated more than just molecules of oxygen and water. It ended up circulating people as well.
Rivoli Theatre, 1920s
Broad changes in demography invariably have political effects. The migration to the Sun Belt changed the political map of America. Once a Democratic stronghold, the South was transformed by a massive influx of retirees who were more conservative in their political outlook. As the historian Nelson W. Polsby demonstrates in his book How Congress Evolves, Northern Republicans moving south in the post-AC era did as much to undo the “Dixiecrat” base as the rebellion against the civil rights movement. In Congress, this had the paradoxical effect of unleashing a wave of liberal reforms, as the congressional Democrats were no longer divided between conservative Southerners and progressives in the North. But air-conditioning arguably had the most significant impact on presidential politics. Swelling populations in Florida, Texas, and Southern California shifted the electoral college toward the Sun Belt, with warm-climate states gaining twenty-nine electoral college votes between 1940 and 1980, while the colder states of the Northeast and Rust Belt lost thirty-one. In the first half of the twentieth century, only two presidents or vice presidents hailed from Sun Belt states. Starting in 1952, however, every single winning presidential ticket contained a Sun Belt candidate, until Barack Obama and Joe Biden broke the streak in 2008.
The “Igloo of Tomorrow.” Dr. Willis H. Carrier holds a thermometer inside an igloo display that demonstrates air-conditioning at the 1939 New York World’s Fair. The temperature-controlled igloo remained at a steady 68 degrees Fahrenheit inside.
This is long-zoom history: almost a century after Willis Carrier began thinking about keeping the ink from smearing in Brooklyn, our ability to manipulate tiny molecules of air and moisture helped transform the geography of American politics. But the rise of the Sun Belt in the United States was just a dress rehearsal for what is now happening on a planetary scale. All around the world, the fastest-growing megacities are predominantly in tropical climates: Chennai, Bangkok, Manila, Jakarta, Karachi, Lagos, Dubai, Rio de Janeiro. Demographers predict that these hot cities will have more than a billion new residents by 2025.
It goes without saying that many of these new migrants don’t have air-conditioning in their homes, at least not yet, and it is an open question whether these cities are sustainable in the long run, particularly those based in desert climates. But the ability to control temperature and humidity in office buildings, stores, and wealthier homes allowed these urban centers to attract an economic base that has catapulted them to megacity status. It’s no accident that the world’s largest cities—London, Paris, New York, Tokyo—were almost exclusively in temperate climates until the second half of the twentieth century. What we are seeing now is arguably the largest mass migration in human history, and the first to be triggered by a home appliance.
—
THE DREAMERS AND INVENTORS who ushered in the cold revolution didn’t have eureka moments, and their brilliant ideas rarely transformed the world immediately. Mostly they had hunches, but they were tenacious enough to keep those hunches alive for years, even decades, until the pieces came together. Some of those innovations can seem trivial to us today. All that collective ingenuity, focused over decades and decades—all to make the world safe for the TV dinner? But the frozen world that Tudor and Birdseye helped conjure into being would do more than just populate the world with fish sticks. It would also populate the world with people, thanks to the flash freezing and cryopreservation of human semen, eggs, and embryos. Millions of human beings around the world owe their existence to the technologies of artificial cold. Today, new techniques in oocyte cryopreservation are allowing women to store healthy eggs in their younger years, extending their fertility well into their forties and fifties in many cases. So much of the new freedom in the way we have children now—from lesbian couples or single mothers using sperm banks to conceive, to women giving themselves two decades in the workforce before thinking about kids—would have been impossible without the invention of flash freezing.
When we think about breakthrough ideas, we tend to be constrained by the scale of the original invention. We figure out a way to make artificial cold, and we assume that will just mean that our rooms will be cooler, we’ll sleep better on hot nights, or there will be a reliable supply of ice cubes for our sodas. That much is easy to understand. But if you tell the story of cold only in that way, you miss the epic scope of it. Just two centuries after Frederic Tudor started thinking about shipping ice to Savannah, our mastery of cold is helping to reorganize settlement patterns all over the planet and bring millions of new babies into the world. Ice seems at first glance like a trivial advance: a luxury item, not a necessity. Yet over the past two centuries its impact has been staggering, when you look at it from the long-zoom perspective: from the transformed landscape of the Great Plains; to the new lives and lifestyles brought into being via frozen embryos; all the way to vast cities blooming in the desert.
3. Sound
Roughly one million years ago, the seas retreated from the basin that surrounds modern-day Paris, leaving a ring of limestone deposits that had once been active coral reefs. Over time, the River Cure in Burgundy slowly carved its way through some of those limestone blocks, creating a network of caves and tunnels that would eventually be festooned with stalactites and stalagmites formed by rainwater and carbon dioxide. Archeological findings suggest that Neanderthals and early modern humans used the caves for shelter and ceremony for tens of thousands of years. In the early 1990s, an immense collection of ancient paintings was discovered on the walls of the cave complex in Arcy-sur-Cure: over a hundred images of bison, woolly mammoths, birds, fish—even, most hauntingly, the imprint of a child’s hand. Radiometric dating determined that the images were thirty thousand years old. Only the paintings at Chauvet, in southern France, are believed to be older.
For understandable reasons, cave paintings are conventionally cited as evidence of the primordial drive to represent the world in images. Eons before the invention of cinema, our ancestors would gather together in the firelit caverns and stare at flickering images on the wall. But in recent years, a new theory has emerged about the primitive use of the Burgundy caves, one focused not on the images of these underground passages, but rather on the sounds.
A few years after the paintings in Arcy-sur-Cure were discovered, a music ethnographer from the University of Paris named Iegor Reznikoff began studying the caves the way a bat would: by listening to the echoes and reverberations created in different parts of the cave complex. It had long been apparent that the Neanderthal images were clustered in specific parts of the cave, with some of the most ornate and dense images appearing more than a kilometer deep. Reznikoff determined that the paintings were consistently placed at the most acoustically interesting parts of the cave, the places where the reverberation was the most profound. If you make a loud sound standing beneath the images of Paleolithic animals at the far end of the Arcy-sur-Cure caves, you hear seven distinct echoes of your voice. The reverberation takes almost five seconds to die down after your vocal cords stop vibrating. Acoustically, the effect is not unlike the famous “wall of sound” technique that Phil Spector used on the 1960s records he produced for artists such as the Ronettes and Ike and Tina Turner. In Spector’s system, recorded sound was routed through a basement room filled with speakers and microphones that created a massive artificial echo. In Arcy-sur-Cure, the effect comes courtesy of the natural environment of the cave itself.
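The acoustic effect Reznikoff measured is easy to approximate in software. Here is a minimal sketch in Python with NumPy; the delay, decay, and sample-rate values are invented for illustration, tuned so that seven discrete echoes die away over roughly five seconds, as in the cave.

```python
import numpy as np

def add_echoes(dry, sr, n_echoes=7, delay_s=0.7, decay=0.5):
    """Mix in n_echoes delayed, progressively quieter copies of a signal."""
    tail = int(np.ceil(n_echoes * delay_s * sr))
    out = np.zeros(len(dry) + tail)
    out[:len(dry)] += dry
    for k in range(1, n_echoes + 1):
        start = int(k * delay_s * sr)                 # each echo arrives later...
        out[start:start + len(dry)] += dry * decay**k # ...and quieter
    return out

sr = 44100                                 # assumed sample rate
n = int(0.05 * sr)
clap = np.random.randn(n) * np.hanning(n)  # a short burst, like a clap
wet = add_echoes(clap, sr)                 # the burst plus its echo tail
print(f"last echo at {7 * 0.7:.1f} s, at {0.5 ** 7:.1%} of full volume")
```

Spector achieved something similar electromechanically, feeding a signal through a physical echo chamber; the cave does it with bare rock.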
Reznikoff’s theory is that Neanderthal communities gathered beside the images they had painted, and they chanted or sang in some kind of shamanic ritual, using the reverberations of the cave to magically widen the sound of their voices. (Reznikoff also discovered small red dots painted at other sonically rich parts of the cave.) Our ancestors couldn’t record the sounds they experienced the way they recorded their visual experience of the world in paintings. But if Reznikoff is correct, those early humans were experimenting with a primitive form of sound engineering—amplifying and enhancing that most intoxicating of sounds: the human voice.
Discovery of La grotte d’Arcy-sur-Cure, in France, September 1991
The drive to enhance—and, ultimately, reproduce—the human voice would in time pave the way for a series of social and technological breakthroughs: in communications and computation, politics and the arts. We readily accept the idea that science and technology have enhanced our vision to a remarkable extent: from spectacles to the Keck telescopes. But our vocal cords, vibrating in speech and in song, have also been massively augmented by artificial means. Our voices grew louder; they began traveling across wires laid on the ocean floor; they slipped the surly bonds of Earth and began bouncing off satellites. The essential revolutions in vision largely unfolded between the Renaissance and the Enlightenment: spectacles, microscopes, telescopes; seeing clearly, seeing very far, and seeing very close. The technologies of the voice did not arrive in full force until the late nineteenth century. When they did, they changed just about everything. But they didn’t begin with amplification. The first great breakthrough in our obsession with the human voice arrived in the simple act of writing it down.
—
FOR THOUSANDS OF YEARS after those Neanderthal singers gathered in the reverberant sections of the Burgundy caves, the idea of recording sound was as fanciful as counting fairies. Yes, over that period we refined the art of designing acoustic spaces to amplify our voices and our instruments: medieval cathedral design, after all, was as much about sound engineering as it was about creating epic visual experiences. But no one even bothered to imagine capturing sound directly. Sound was ethereal, not tangible. The best you could do was imitate sound with your own voice and instruments.
The dream of recording the human voice entered the adjacent possible only after two key developments: one from physics, the other from anatomy. From about 1500 on, scientists began to work under the assumption that sound traveled through the air in invisible waves. (Shortly thereafter they discovered that these waves traveled four times faster through water, a curious fact that wouldn’t turn out to be useful for another four centuries.) By the time of the Enlightenment, detailed books of anatomy had mapped the basic structure of the human ear, documenting the way sound waves were funneled through the auditory canal, triggering vibrations in the eardrum. In the 1850s, a Parisian printer named Édouard-Léon Scott de Martinville happened to stumble across one of these anatomy books, triggering a hobbyist’s interest in the biology and physics of sound.
Human ear
Scott had also been a student of shorthand writing; he’d published a book on the history of stenography a few years before he began thinking about sound. At the time, stenography was the most advanced form of voice-recording technology in existence; no system could capture the spoken word with as much accuracy and speed as a trained stenographer. But as he looked at these detailed illustrations of the inner ear, a new thought began to take shape in Scott’s mind: perhaps the process of transcribing the human voice could be automated. Instead of a human writing down words, a machine could write sound waves.
In March 1857, two decades before Thomas Edison would invent the phonograph, the French patent office awarded Scott a patent for a machine that recorded sound. Scott’s contraption funneled sound waves through a hornlike apparatus that ended with a membrane of parchment. Sound waves would trigger vibrations in the parchment, which would then be transmitted to a stylus made of pig’s bristle. The stylus would etch out the waves on a page darkened by the carbon of lampblack. He called his invention the “phonautograph”: the self-writing of sound.
In the annals of invention, there may be no more curious mix of farsightedness and myopia than the story of the phonautograph. On the one hand, Scott had managed to make a critical conceptual leap—that sound waves could be pulled out of the air and etched onto a recording medium—more than a decade before other inventors and scientists got around to it. (When you’re two decades ahead of Edison, you can be pretty sure you’re doing well for yourself.) But Scott’s invention was hamstrung by one crucial—even comical—limitation. He had invented the first sound recording device in history. But he forgot to include playback.
Édouard-Léon Scott de Martinville, French writer and inventor of the phonautograph
Actually, “forgot” is too strong a word. It seems obvious to us now that a device for recording sound should also include a feature where you can actually hear the recording. Inventing the phonautograph without including playback seems a bit like inventing the automobile but forgetting to include the bit where the wheels rotate. But that is because we are judging Scott’s work from the other side of the divide. The idea that machines could convey sound waves that had originated elsewhere was not at all an intuitive one; it wasn’t until Alexander Graham Bell began reproducing sound waves at the end of a telephone that playback became an obvious leap. In a sense, Scott had to see past two significant blind spots: the idea that sound could be recorded, and the idea that those recordings could be converted back into sound waves. Scott managed to grasp the first, but he couldn’t make it all the way to the second. It wasn’t so much that he forgot or failed to make playback work; it was that the idea never even occurred to him.
Phonautograph, circa 1857
If playback was never part of Scott’s plan, it is fair to ask exactly why he bothered to build the phonautograph in the first place. What good is a record player that doesn’t play records? Here we confront the double-edged sword of relying on governing metaphors, of borrowing ideas from other fields and applying them in a new context. Scott got to the idea of recording audio through the metaphor of stenography: write waves instead of words. That structuring metaphor enabled him to make the first leap, years ahead of his peers, but it also may have prevented him from making the second. Once words have been converted into the code of shorthand, the information captured there is decoded by a reader who understands the code. Scott thought the same would happen with his phonautograph. The machine would etch waveforms into the lampblack, each twitch of the stylus corresponding to some phoneme uttered by a human voice. And humans would learn to “read” those squiggles the way they had learned to read the squiggles of shorthand. In a sense, Scott wasn’t trying to invent an audio-recording device at all. He was trying to invent the ultimate transcription service—only you had to learn a whole new language in order to read the transcript.
It wasn’t that crazy an idea, looking back on it. Humans had proven to be unusually good at learning to recognize visual patterns; we internalize our alphabets so well we don’t even have to think about reading once we’ve learned how to do it. Why would sound waves, once you could get them on the page, be any different?
Sadly, the neural toolkit of human beings doesn’t seem to include the capacity for reading sound waves by sight. A hundred and fifty years have passed since Scott’s invention, and we have mastered the art and science of sound to a degree that would have astonished Scott. But not a single person among us has learned to visually parse the spoken words embedded in printed sound waves. It was a brilliant gamble, but ultimately a losing one. If we were going to decode recorded audio, we needed to convert it back to sound so we could do our decoding via the eardrum, not the retina.
We may not be waveform readers, but we’re not exactly slackers, either: in the century and a half that followed Scott’s invention, we did manage to build a machine that could “read” the visual image of a waveform and convert it back into sound: the computer. Just a few years ago, a team of sound historians (David Giovannoni, Patrick Feaster, Meagan Hennessey, and Richard Martin) discovered a trove of Scott’s phonautographs in the Academy of Sciences in Paris, including one from April 1860 that had been marvelously preserved. Giovannoni and his colleagues scanned the faint, erratic lines that had first been scratched into the lampblack when Lincoln was still alive. They converted that image into a digital waveform, then played it back through computer speakers.
At first, they thought they were hearing a woman’s voice, singing the French folk song “Au clair de la lune,” but later they realized they had been playing back the audio at double its recorded speed. When they dropped it down to the right tempo, a man’s voice appeared out of the crackle and hiss: Édouard-Léon Scott de Martinville warbling from the grave.
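In outline, what the team did can be sketched in a few lines of code: treat the scan as a grid of pixels, track the stylus line column by column to recover an amplitude for each instant, and write the result out as audio at the appropriate rate. The Python sketch below is a toy reconstruction of that idea under invented assumptions (a synthetic sine-wave “scan,” made-up image dimensions and sample rates, hypothetical file names), not the historians’ actual software.

```python
import numpy as np
from scipy.io import wavfile

def trace_to_audio(img):
    """Recover a 1-D signal from a scanned waveform trace.

    img: 2-D grayscale array, dark trace on a light page
    (0.0 = black ink, 1.0 = white paper). Each pixel column yields
    one sample: the darkness-weighted mean row position of the trace,
    centered and scaled to [-1, 1].
    """
    ink = 1.0 - img                          # weight by darkness
    rows = np.arange(img.shape[0])[:, None]
    pos = (ink * rows).sum(axis=0) / (ink.sum(axis=0) + 1e-9)
    sig = pos - pos.mean()
    return sig / (np.abs(sig).max() + 1e-9)

# A synthetic stand-in for a scanned phonautogram: one second of a
# 440 Hz tone, "traced" as black pixels across a 200 x 16000 image.
w = 16000
cols = np.arange(w)
row = (100 + 60 * np.sin(2 * np.pi * 440 * cols / w)).astype(int)
img = np.ones((200, w))
img[row, cols] = 0.0

samples = (trace_to_audio(img) * 32767).astype(np.int16)
wavfile.write("traced_tone.wav", 16000, samples)  # the speed as traced
wavfile.write("half_speed.wav", 8000, samples)    # half the rate: one
                                                  # octave lower in pitch
```

Halving the playback rate in the last line mirrors the team’s fix: dropping the tempo by half lowers every pitch an octave, which is how the apparent woman’s voice resolved back into Scott’s.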