
The Great Animal Orchestra


by Bernie Krause


  One explanation for the curious musical anomaly of the differing concert As (the reference pitch to which ensembles tune) is the variable hardness of the European woods used to make frames and soundboards for the plucked or bowed stringed instruments, including harpsichords, of the time. A harder, denser wood would have allowed a frame to withstand greater string tension, and thus the instrument could hold a higher tuning—and a “brighter” sound.

  Given the growing amount of time I was spending in the wild, Helmholtz’s writings gave me much to consider. Instruments were man-made to complement one another, and from my work with animal sound, I began wondering why particular species would, in the same way, settle on a particular range—higher or lower than another. Do animals, as part of the complex chorus in a given habitat, use one or more certain pitches as some sort of crude reference? How and why did their respective vocal ranges develop? What roles did physiology and environment play?

  • • •

  Thanks in part to Helmholtz’s historical review of sound as well as his own contributions to the science of acoustics, we know that sound is transmitted as waves of pressure coursing through air, solids, or liquids, and we know that the attributes of many sounds include frequency (sometimes referred to as pitch, though that is a more relative term), timbre, amplitude, and envelope. But even though I had played and composed music for two-thirds of my life, it wasn’t until I began working with synthesizers that I started to understand its components and how they all came together. To generate sounds that would fit in a musical composition, I needed to know precisely how all four sound characteristics interacted with one another. If sound—which by itself is very abstract—were to mean anything, then it was control over those four parameters and the placement of the results within a recognizable milieu that gave it form.

  Humans with perfect hearing can hear frequencies from 20 wave cycles per second, or 20 Hz, at the low end to 20,000 Hz at the high end. The lowest note on a typical piano is 27.5 Hz, and the highest is about 4,186 Hz. Nonhuman animals have evolved different ranges of hearing, the widest of which can be found in whales. We think whales generate and hear vocalizations from below 10 Hz (the blue whale) to a reported 200 kHz (the blind Ganges dolphin)—more than three octaves beyond the highest pitch we can hear. Other animals typically fall somewhere in between—a large percentage within the range of human hearing.
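
  To make these spans concrete: an octave is a doubling of frequency, so the number of octaves between two frequencies is the base-2 logarithm of their ratio. A minimal Python sketch, using only the figures quoted above (the helper name is illustrative):

```python
import math

def octaves_between(f_low_hz: float, f_high_hz: float) -> float:
    """Number of octaves (frequency doublings) between two frequencies."""
    return math.log2(f_high_hz / f_low_hz)

print(octaves_between(20, 20_000))       # human hearing span: ~10 octaves
print(octaves_between(27.5, 4_186))      # piano keyboard: ~7.25 octaves
print(octaves_between(20_000, 200_000))  # Ganges dolphin's reported ceiling
                                         # above our own: ~3.3 octaves
```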

  Pitch is closely related to frequency, but the two are not the same thing. Pitch is mostly used in the comparative framework of sounds or tones that make up a musical scale. So while frequency is a physical property of sound—a measurement of the number of cycles per second of a sound wave—pitch refers to what we hear. The chromatic scale, for example, is made up of twelve equally spaced pitches. As we go up the scale, we hear each note as going up in pitch by an equal amount—a semitone, or half step. However, the change in frequency from note to note is not equal: each semitone multiplies the frequency by the same ratio (the twelfth root of two, about 1.0595) rather than adding the same number of hertz, so each successive semitone requires a greater jump in frequency than the last. For example, going from a C pitch to a C-sharp on a piano (261.626 Hz to 277.183 Hz—a difference of about 15.56 Hz) requires less of a jump in frequency than going from that same C-sharp to a D (277.183 Hz to 293.665 Hz—a difference of about 16.48 Hz). We nonetheless hear the same half-step interval between notes because the auditory cortex perceives pitch logarithmically; the spread in actual units of frequency increases as the scale gets higher, while the perceived steps stay equal.
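
  A short sketch, assuming the standard A4 = 440 Hz tuning reference, reproduces the piano figures above from the equal-tempered semitone ratio:

```python
A4_HZ = 440.0  # standard concert A reference

def note_freq(semitones_from_a4: int) -> float:
    """Equal-tempered frequency: each semitone multiplies by 2**(1/12)."""
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

c4, c_sharp4, d4 = note_freq(-9), note_freq(-8), note_freq(-7)
print(c4, c_sharp4, d4)   # ~261.626, ~277.183, ~293.665 Hz
print(c_sharp4 - c4)      # ~15.56 Hz for the first half step
print(d4 - c_sharp4)      # ~16.48 Hz for the next: same interval, bigger jump
```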

  Timbre is the emblematic tone, or voice, generated by each type of instrument or biological sound source. Not only do musical instruments have singular voice characteristics but so does every living organism and most man-made machines. The difference between the sound of a violin and that of a trumpet is as distinctive as that between a cicada and an American robin, or a cat and a dog—or between a Rolls-Royce and a Formula 1 automobile.

  When Paul Beaver and I first began to reproduce sounds on an analog synthesizer, we needed to understand how each instrumental voice was produced. At first we had no idea how complicated this would be. Part of our problem lay in trying to define the sound, or timbre, of each instrument. In the nonelectronic, purely physical world, instruments are made of metal or wood, or a combination of both. Some involve strings and/or skins, and many are played by blowing, striking, plucking, or creating friction. These different instruments have different shapes, and each manages to resonate, or “sound,” in a different way.

  Most instruments produce tones that are quite complex, each generating a series of overtones that contribute to our perception of its timbre; present in every note played, these overtones define the instrument’s unique sound. A clarinet, for instance, produces a series in which some of the harmonics—the overtones whose frequencies are whole-number multiples of the note being played—drop out; in its low register, the even-numbered harmonics are largely missing. A violin, on the other hand, produces an entirely different series of overtones. As the rosined bow is drawn downward across a string from the frog to the tip—a down-bow, in musical terms—exciting it into motion, the string produces a set of overtones in which every harmonic is heard, each successive one quieter than the last, producing the violin’s particular tonal color. Because of a combination of its unique physical structure and the techniques required to make it sound, each sound-producing entity—whether animal or constructed of nonliving materials—yields a distinctive resonance.
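
  The two overtone recipes can be imitated with additive synthesis: summing sine waves at whole-number multiples of the fundamental and choosing which harmonics sound. The sketch below is a crude idealization, not a measured spectrum; the 1/k amplitude falloff and the nine-harmonic cutoff are assumptions for illustration only.

```python
import math

SAMPLE_RATE = 44_100  # samples per second

def additive_tone(f0_hz, harmonic_amps, seconds=1.0):
    """Sum sine harmonics of f0_hz; harmonic_amps[k-1] scales the kth harmonic."""
    n = int(SAMPLE_RATE * seconds)
    return [
        sum(amp * math.sin(2 * math.pi * f0_hz * k * t / SAMPLE_RATE)
            for k, amp in enumerate(harmonic_amps, start=1))
        for t in range(n)
    ]

f0 = 261.626  # middle C

# Clarinet-like recipe: the even-numbered harmonics drop out.
clarinet = additive_tone(f0, [1 / k if k % 2 else 0.0 for k in range(1, 10)])

# Violin-like recipe: every harmonic present, each quieter than the last.
violin = additive_tone(f0, [1 / k for k in range(1, 10)])
```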

  Loudness, or amplitude, is measured in decibels. One decibel, or dB, is roughly the smallest change in loudness that humans can detect. If you can hear the sound of a mosquito flying by you at ten feet, then you’ve got a set of ears that can pick up the quietest sound humans can hear—around 5 dBA. (The A appended to the symbol dB signifies that the measurement is weighted to the way a “normal” human ear responds to acoustic signals across the frequency range.) Hearing damage will occur for many of us at around 115 dBA—the loudness of a pneumatic drill; sustained exposure at that level causes the hair cells in the cochlea to fail, which can result in deafness. A few of us, however, experience pain and damage at much lower levels. For me, sounds much louder than 90 dBA begin to cause discomfort, if not actual pain. I just happen to be extremely sensitive to sounds, particularly loud ones.

  Some animals, such as toothed whales, can generate sound levels that, if produced in the air, would be equivalent to a large-bore firearm being discharged a few inches from your ear. But, pound for pound, one of the loudest organisms in the animal kingdom is, oddly enough, the inch-and-a-half-long snapping shrimp. Many snorkelers and scuba divers are familiar with this marine sound, since the crustacean appears along most ocean shorelines and reefs, and in estuaries. It has a staticlike sound that permeates the entire underwater region, and it generates a signal with its large claw that can meet or exceed 200 dB underwater—a sound-pressure level equivalent to around 165 dB in air. With every 6 dB increase representing a doubling of sound pressure, we can compare the shrimp output to a symphony orchestra, which may generate loud moments peaking around 110 dBA. Indeed, the lowly, unsophisticated shrimp will not be outdone even by the Grateful Dead, a rock group whose concerts have been measured at levels exceeding 130 dB. Get this, Deadheads: the shrimp is louder by some 35 dB, or close to six successive doublings of sound pressure—all that without a huge stack of stage speakers!
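
  The decibel comparisons above are straightforward to verify: differences in sound-pressure level convert to pressure ratios by 20·log10, which is why a doubling of pressure adds about 6 dB (intensity, a power quantity, uses 10·log10 instead). A quick check of the figures quoted in the text:

```python
import math

def db_from_pressure_ratio(ratio: float) -> float:
    """Sound-pressure-level difference in dB for a given pressure ratio."""
    return 20 * math.log10(ratio)

def pressure_ratio_from_db(db: float) -> float:
    return 10 ** (db / 20)

print(db_from_pressure_ratio(2))          # ~6.02 dB: doubling pressure adds ~6 dB
print(pressure_ratio_from_db(165 - 110))  # shrimp (in-air equivalent) vs. orchestra peak: ~560x
print(pressure_ratio_from_db(165 - 130))  # shrimp vs. Grateful Dead: ~56x, near six doublings
```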

  The loudest human sound I’ve ever measured came from a woman’s scream: 117 dBA at ten feet—a bit louder than an average, painfully loud rock concert. But with the exception of a volcanic eruption on the order of Krakatoa, or a very loud crack of thunder, very few natural sounds generated in air can cause hearing damage.

  The fourth major sound property, acoustic envelope, determines the shape and texture of a sound through time, from the moment it is first heard to the time it fades out. No matter where you live or what you hear—no matter if it is an entire wild habitat, such as a rain forest, or a single bird; whether it is a note played on a piano or guitar, or a chord played by an entire orchestra—every sound or series of sounds has a beginning and end point, and between those two points can get softer or louder. The entire sounding period, including the whole transformation of the character of a sound, is the acoustic envelope.

  Impact sounds such as gunshots or rim shots on a snare drum have a very fast rise time—they go from nothing to really loud in microseconds—and they also have a very fast decay, depending on whether the environment in which they are generated is reverberant or not. The onset of other sounds, such as a crescendo played on a violin or articulated by cicadas in a tropical rain forest, is marked by a slow rise from their softest to their loudest point. These types of sounds may sustain for a period before becoming quieter over time until they are no longer heard. The envelope can, simultaneously, define the shape of an instrumental sound’s tonal color, taking the very smooth and delicate tone of a steel-string guitar, for instance, to a raunchy fuzz tone, or enabling a trumpet to produce a blurt, growl, or muted sound over an articulated phrase.
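
  Synthesizer players formalize these contours as an ADSR envelope (attack, decay, sustain, release). A minimal piecewise-linear sketch; the parameter values below are illustrative guesses, not measurements:

```python
def adsr(attack_s, decay_s, sustain_level, sustain_s, release_s, rate=44_100):
    """Amplitude envelope, 0..1, returned as a list of per-sample gains."""
    def ramp(start, end, seconds):
        n = max(1, int(seconds * rate))
        return [start + (end - start) * i / n for i in range(n)]

    return (ramp(0.0, 1.0, attack_s)                   # rise to peak
            + ramp(1.0, sustain_level, decay_s)        # settle to held level
            + [sustain_level] * int(sustain_s * rate)  # steady sounding period
            + ramp(sustain_level, 0.0, release_s))     # fade to silence

# Percussive, rim-shot-like shape: near-instant rise, no sustain, quick fade.
rim_shot = adsr(attack_s=0.001, decay_s=0.05, sustain_level=0.0,
                sustain_s=0.0, release_s=0.03)

# Violin-crescendo-like shape: slow swell, long hold, gradual fade.
swell = adsr(attack_s=1.5, decay_s=0.2, sustain_level=0.8,
             sustain_s=2.0, release_s=1.5)
```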

  While the elements of sound are part of every acoustic signal—whether generated by animals, humans, musical instruments, or machines—they make up only one part of what collective sounds amount to in a given location. The word soundscape first appeared in our language toward the end of the last century; it refers to all of the sound that reaches our ears in a given moment. The term is credited to R. Murray Schafer, who embraced and studied the sounds of various habitats. Schafer was searching for ways to frame the experience of sound in new, nonvisual contexts. At the same time, his goal was to encourage us to pay more attention to the sonic fabric of our environments, wherever we happen to live.

  Schafer and his colleagues at Simon Fraser University in Vancouver showed that each soundscape uniquely represents a place and time through its special blend of voices, whether urban, rural, or natural. The geological and architectural features of Vancouver’s Stanley Park, where Schafer and his friends worked and recorded during the 1970s, generated a dawn soundscape on Sundays with a particular blend of park animal life and minimal traffic—one that sounded very different from that of the same location during weekday morning or afternoon rush hours. The combination of seasonal birds, amphibians, and insects, together with road and air traffic—all enhanced by the passive acoustic features of the landscape and the vegetation—generated an acoustic signature distinct to that location whenever times and conditions were comparable.

  Natural soundscapes, in particular, are the voices of whole ecological systems. Every living organism—from the tiniest to the largest—and every site on earth has its own acoustic signature. Soundscapes acquire their individuality through a combination of factors: In hilly habitats, sound tends to be more contained. But when the area is flat, open, and dry, sound disperses more quickly and seems to get lost. The acoustics of a single location can vary greatly over the course of each season, too, depending on the density and type of dominant vegetation (e.g., the needle-like leaves of conifers versus the broad leaves of deciduous plants) and the area’s basic geological features (i.e., rocky, hilly, mountainous, or flat). Sound reflects off saturated or uniquely shaped leaves (some plants present dish-shaped leaves as acoustic lures for pollinating bats); the bark of forest vegetation; and ground that is wet with rain or early-morning dew, causing reverberation throughout a habitat. When dry, a forest will usually be hushed and quiet, with sounds tending not to travel as far or to last as long.

  In the mid-1990s, just before a horrifying outbreak of social and political upheaval in Zimbabwe, I recorded a spectacular old-growth morning soundscape there. It was a site that had remained intact for a very long period of time—according to our guide, Derek Solomon, a knowledgeable landscape ecologist and naturalist, large areas of the environment and its voice probably hadn’t changed much over tens of thousands of years. The experience gave me a rough idea of what this type of dry, mostly deciduous forest must have been like when our early human ancestors first appeared in Africa millions of years ago.

  The dawn chorus was infused with a tightly orchestrated ensemble of barred owlets, sounding much like lazy California gulls; a Scops owl, with its low, slow series of short burbling sounds; Natal francolins, their quickly repeated kiss-squeaks augmenting the sense of rhythm; freckled nightjars, with quick up-and-down medium-frequency whistles—three to five repetitions in succession; ground hornbills singing high-pitched repeated chirp sequences; bearded robins, their melodious three-note phrase followed by a high chirp; rattling cisticolas, repeating long, high-mid-range melodious sequences; a chinspot batis, with its slow, staccato quasi-half-step descending note songs; and thirty or so other bird species, along with baboons and dozens of species of insects. The acoustic moment was so rich with counterpoint and fugal elements that it immediately brought to mind some of the same intricate compositional techniques used by Johann Sebastian Bach (as in his Prelude and Fugue in A Minor).

  But this wasn’t an ordinary dawn chorus. I had noticed at our campsite in nearby Gonarezhou that, unlike in tropical rain forests, I wasn’t sweating much even though it was quite warm. Everything sounded incredibly “dry,” the low humidity making it seem as if I were in a sound-proofed recording studio with no reverberation—every sound being quickly absorbed. The birds and insects were vocalizing in a habitat that had seen no rainfall for several weeks, and there were no reflective surfaces in this environment and thus no echoes. But the baboons, always onstage, were not to be denied their moment: they had found a nearby kopje—a type of granite outcropping that can rise three hundred feet above the forest or plains floor seemingly out of nowhere—and used the partially concave surface of the structure to send their sharp vocal retorts reverberating throughout the forest, the acoustic decay lasting for six or seven seconds before fading into silence. It was a unique gathering of sounds specific to this singular place, and their voices created an eerie imbalance within the soundscape—the dry, nonreverberant sound of the many birds and insects set off against long-echoed voices of the few baboons.

  The acoustic features of a landscape play an important role in the way vocal organisms eventually populate a habitat. Some insects, birds, and mammals like to vocalize when the habitat dries out—at midmorning, when the forest has given up its surface moisture and the soundscape becomes redefined by the acoustic properties of dryness. Others take advantage of moments when the water of a pond or lake is still and sounds tend to travel farther, their voices repeatedly echoing everywhere in a magical, dreamlike effect. This is especially true in the early mornings and late nights of spring and summer: when the weather changes and the wind comes up, the reverberation tends to disappear, and a hushed, extended breathlike atmosphere cloaks the landscape.

  As a seasoned listener, I especially love the sounds produced by creatures that have evolved to vocalize at night, when dew settles on the ground or on the leaves and branches of trees. The nighttime imparts the sense of a resplendent echoey theater—a beneficial effect for nocturnal terrestrial creatures whose voices need to carry over great distances. Coyotes and wolves likely choose nighttime to vocalize because their sound signatures resonate and travel so well.

  The pleasure of hearing our voices echo is one reason why many of us sing in the shower—reverberation is a sound quality that seems to be particularly alluring to many creatures. The elk that live in the American West rut in the fall and often use the more pronounced echo of the forest environment during that season to project their modulated bellows, extending the illusion of their territory and securing their harems. These calls can be heard everywhere elk live throughout the western United States—especially in the Tetons and Yellowstone National Park. Hyenas, baboons, many species of frogs, and a nightjar called the pauraque also often vocalize when climate and landscape conditions conspire to produce reverberation.

  In marine habitats, water temperature, salinity, currents, and the diverse bottom contours of an environment affect the transmission of sound in both subtle and profound ways. The enclosed bottom contour of sections of Glacier Bay in Southeast Alaska causes sound to take on a reverberant and amplified quality. As a result, signals produced by both vessels and animals can appear much larger than life. Meanwhile, inland lakes and some other marine habitats, such as coral reefs, produce little reverberation. I once heard the biologist Roger Payne lecture about the songs of humpback whales that he and his then wife, Katy, discovered in the 1960s. During his talk he speculated that the vocal syntax learned by male humpback vocalists for a given season featured themes and structures commonly found in the most intricate forms of human music.

  A while back, when I was cataloging my analog tape recordings and transferring them into digital formats, I had a dream about sound—actually, a nightmare. In the dream, I went out to my lab at dawn and found that all of my ambient recordings had been transferred to thousands of little CDs, each with only one short clip of a single animal voice removed from the context of the soundscape. The CDs were scattered ankle-deep all over the floor, and I couldn’t figure out how in the world I’d put the parts back together again. It still frightens me to think about it.

  A few years later, I happened across Finding Beauty in a Broken World, Terry Tempest Williams’s book about mosaics—how they are whole structures of magnificence built from disparate broken pieces. The same can be said of graphics, words, and film sound design, but not of natural soundscapes. What reaches out to us from the wild is a deeply profound connection—a constantly evolving, multidimensional weave of sonic fabric. Natural soundscapes are never expressed the same way from one day to the next. Even with the best technologies, we can only partially capture these sonorous moments, mainly because the voices that make up these choruses are always adjusting slightly to achieve the most successful transmission and reception—a kind of perpetual self-editing mechanism. For that reason, it is extremely difficult to re-create those choral expressions from their separate abstract parts unless we grasp the underlying infrastructure that characterizes how each component voice fits within an ever-changing bioacoustic composition.

 
