by Alex Ross
The principal irony of the history of recording is that Edison did not make the phonograph with music in mind. Rather, he conceived of his cylinder as a business gadget, one that would supersede the costly, imperfect practice of stenography and have the added virtue of preserving in perpetuity the voices of the deceased. In an 1878 essay titled “The Phonograph and Its Future,” Edison or his ghostwriter proclaimed that his invention would “annihilate time and space, and bottle up for posterity the mere utterance of man.” “Annihilation” is an interestingly ambiguous figure of speech. Recording opened lines of communication between far-flung worlds, but it also placed older art and folk traditions in danger of extinction. With American popular culture as its house god, it brought about a global homogenization of taste, the effects of which are still spreading.
Although Edison mentioned the idea of recording music in his 1878 article, he had no inkling of a music industry. He pictured the phonograph as a tool for teaching singing and as a natural extension of domestic music-making: “A friend may in a morning-call sing us a song which shall delight an evening company.” By the 1890s, however, alert entrepreneurs had installed phonographs in penny arcades, allowing customers to listen to assorted songs over ear tubes. In 1888, Emile Berliner introduced the flat disc, a less cumbersome storage device, and envisioned with it the entire modern music business—mass distribution, recording stars, royalties, and the rest. In 1902, the first great star was born: the tenor Enrico Caruso, whose voice remains one of the most transfixing phenomena in the history of the medium. The ping in Caruso’s tone, that golden bark, made the man himself seem viscerally present, proving Edison’s theory of the annihilation of space and time. Not so lucky was Johannes Brahms, who, in 1889, attempted to record his First Hungarian Dance. The master seems to be sending us a garbled message from a spacecraft disintegrating near Pluto.
Whenever a new gadget comes along, salespeople inevitably point out that an older gadget has been rendered obsolete. The automobile pushed aside the railroad; the computer replaced the typewriter. Sousa feared that the phonograph would supplant live music. His fears were excessive but not irrational. The Victor Talking Machine Company, which the engineer Eldridge Johnson founded in 1901, marketed its machines not just as vessels for music but as instruments in themselves. In a way, Victor was taking direct aim at the piano, which, around the turn of the century, dominated domestic musical life, from the salon to the tavern. The top-selling Victrola of 1906, a massive object standing four feet tall and weighing 137 pounds, was encased in “piano-finished” mahogany, if anyone was missing the point. Ads showed families clustered about their phonographs, no piano in sight. Edison, whose cylinders soon began to lag behind flat discs in popularity, was so determined to demonstrate the verisimilitude of his machines that he held a nationwide series of Tone Tests, during which halls were plunged into darkness and audiences were supposedly unable to tell the difference between Anna Case singing live and one of her records.
Each subsequent leap in audio technology—microphones, magnetic tape, long-playing records, stereo sound, transistors, digital sound, the compact disc, and the MP3—has elicited the same kind of over-the-top reaction. The latest device inspires heady confusion between reality and reproduction, while yesterday’s wonder machine is exposed as inadequate, even primitive. When, in 1931, the composer and critic Deems Taylor heard a pioneering example of stereophonic recording, he commented, “The difference between what we usually hear and what I heard was, roughly, the difference between looking at a photograph of somebody and looking at the person himself.” Twenty years later, Howard Taubman wrote of a long-playing record on the Mercury label: “The orchestra’s tone is so lifelike that one feels one is listening to the living presence.” (Mercury promptly adopted “Living Presence” as its slogan.) A high-fidelity ad of the 1950s offered users “the finest seat in the house”—an experience not simply equal to the concert hall but superior to it, cleansed of the inconvenience of “audience distraction.” A television commercial of the seventies, starring Ella Fitzgerald, famously asked, “Is it live or is it Memorex?” Compact discs promised “perfect sound forever.”
Just as inevitably, audiophile happy-talk leads to a backlash among listeners who doubt the rhetoric of fidelity and perfection. Dissenters complain that the latest device is actually inferior to the old—artificial, inauthentic, soulless. Greg Milner has documented this never-ending back-and-forth in his book Perfecting Sound Forever, a smartly skeptical account of the ideology of audio progress. Some enthusiasts of the Edison cylinder felt that no other machine gave such a faithful sensation of the warmth of the human voice. When electrical recording came in, a few stalwarts detected nothing but fakery in the use of microphones to amplify soft sounds and invent a sonic perspective that does not exist for human ears. “I wonder if pure tone will disappear from the earth sometimes,” a British critic wrote in 1928.
Magnetic tape led to the most crucial shift in the relationship between recordings and musical reality. German engineers perfected the magnetic tape recorder, or Magnetophon, during the Second World War. Late one night, an audio expert turned serviceman named Jack Mullin was monitoring German radio when he noticed that an overnight orchestral broadcast was astonishingly clear: it sounded “live,” yet not even at Hitler’s whim could the orchestra have been playing in the middle of the night. After the war was over, Mullin tracked down a Magnetophon and brought it to America. He demonstrated it to Bing Crosby, who used it to tape his broadcasts in advance. Crosby was a pioneer of perhaps the most famous of all technological effects, the croon. Magnetic tape meant that Bing could practically whisper into the microphone and still be heard across America; a marked drop-off in surface noise meant that vocal murmurs could register as readily as Louis Armstrong’s pealing trumpet.
The magnetic process also allowed performers to invent their own reality in the studio. Errors could be corrected by splicing together bits of different takes. In the sixties, the Beatles and the Beach Boys, following in the wake of electronic compositions by Cage and Stockhausen, constructed intricate studio soundscapes that could never be replicated onstage; even Glenn Gould might have had trouble executing the mechanically accelerated keyboard solo in “In My Life.” The great rock debate about authenticity began. Were the Beatles pushing the art forward by reinventing it in the studio? Or were they losing touch with the rugged intelligence of folk, blues, and rock traditions? Bob Dylan stood at a craggy opposite extreme, turning out records in a few days’ time and avoiding any vocal overdubs until the seventies. The Dylan scholar Clinton Heylin points out that while the Beatles spent 129 days crafting Sgt. Pepper, Dylan needed only 90 days to make his first fifteen records. Yet frills-free, “lo-fi” recording has no special claim on musical truth; indeed, it easily becomes another effect, the effect of no effect. Today’s neoclassical rock bands pay good money to sound old.
The advent of digital recording was, for many skeptics, the ultimate outrage. The old machines vibrated in sympathy with their subjects: the hills and valleys on a cylinder or a flat disc followed the contours of the music. Digital technology literally chopped the incoming vibrations into bits—strings of 0’s and 1’s that were encoded onto a compact disc and then reconstituted on a CD player. Traditionalists felt that the end product was a kind of android music. Neil Young, the raw-voiced Canadian singer-songwriter, was especially withering: “Listening to a CD is like looking at the world through a screen window.” Step by step, recordings have become an ever more fictional world, even as they become ever more “real.” The final frontier—for the moment—has been reached with Auto-Tune, Pro Tools, and other forms of digital software, which can readjust out-of-tune playing and generate entire orchestras from nowhere. At the touch of a key, a tone-deaf starlet becomes dulcet and a college rock band turns Wagnerian.
Yet some audio equivalent of the law of conservation of energy means that these incessant crises have a way of balancing themselves out. Fakers, hucksters, and mediocrities prosper in every age; artists of genius manage to survive, or, at least, to fail memorably. Technology has certainly advanced the careers of nonentities, but it has also lent a hand to those who lacked a foothold in the system. Nowhere is this more evident than in the story of African-American music. Almost from the start, recording permitted black musicians on the margins of the culture—notably, the blues singers of the Mississippi Delta—to speak out with nothing more than a voice and a guitar. Many of these artists were robbed blind by corporate manipulators, but their music got through. Recordings gave Armstrong, Ellington, Chuck Berry, and James Brown the chance to occupy a global platform that Sousa’s idyllic old America, racist to the core, would have denied them. The fact that their records played a crucial role in the advancement of African-American civil rights puts in proper perspective the debate about whether or not technology has been “good” for music.
Hip-hop, the dominant turn-of-the-century pop form, gives the most electrifying demonstration of technology’s empowering effect. As Jeff Chang recounts, in his book Can’t Stop Won’t Stop: A History of the Hip-Hop Generation, the genre rose up from desperately impoverished high-rise ghettos, where families couldn’t afford to buy instruments for their kids and even the most rudimentary music-making seemed out of reach. But music was made all the same: the phonograph itself became an instrument. In the South Bronx in the 1970s, DJs like Kool Herc, Afrika Bambaataa, and Grandmaster Flash used turntables to create a hurtling collage of effects—loops, breaks, beats, scratches. Later, studio-bound DJs and producers used digital sampling to assemble some of the most densely packed sonic assemblages in musical history: Eric B. and Rakim’s Paid in Full, Public Enemy’s Fear of a Black Planet, Dr. Dre’s The Chronic.
Sooner or later, every critique of recording gets around to quoting Walter Benjamin’s essay “The Work of Art in the Age of Its Technological Reproducibility,” written in the late 1930s. The most often cited passage is Benjamin’s discussion of the loss of an “aura” surrounding works of art—the “here and now” of the sacred artistic object, its connection to a well-defined community. This formulation seems to recall the familiar lament, going back to Sousa, that recordings have leeched the life out of music. But when Benjamin spoke of the withering of aura and the rise of reproducible art, lamentation was not his aim. While he stopped short of populism, he voiced a nagging mistrust of the elitist spiel—the automatic privileging of high-art devotion over mass-market consumption. The cult of art for art’s sake, Benjamin noted, was deteriorating into fascist kitsch. The films of Charlie Chaplin, by contrast, mixed comic pratfalls with subversive political messages. In other words, mechanical reproduction is not an inherently cheapening process; an outsider artist may use it to bypass cultural gatekeepers and advance radical ideas. That the thugs of commerce seldom fail to win out in the end does not lessen the glory of the moment.
Although classical performers and listeners like to picture themselves in a high tower, remote from the electronic melee, they, too, are in thrall to the machines. Some of the most overheated propaganda on behalf of new technologies has come from the classical side, where the illusion of perfect reproduction is particularly alluring. Classical recordings are supposed to deny the fact that they are recordings. That process involves, paradoxically, considerable artifice. Overdubbing, patching, knob-twiddling, and, in recent years, pitch correction have all come into play. The phenomenon of the dummy star, who has a hard time duplicating in the concert hall what he or she purports to do on record, is not unheard of.
Perhaps there is something unnatural in the very act of making a studio recording, no matter how intelligent the presentation. At the height of the hi-fi era, leading classical producers and executives—Walter Legge, at EMI; Goddard Lieberson, at Columbia Records; and John Culshaw, at Decca, to name three of the best—spent many millions of dollars engaging top-of-the-line orchestras, soloists, and conductors in an effort to create definitive recordings of the peaks of the repertory. They met their goal: any short list of gramophone classics would include Maria Callas’s Tosca, Wilhelm Furtwängler’s Tristan und Isolde, Georg Solti’s Ring, and Glenn Gould’s Goldberg Variations, all recorded or set in motion in the fifties. Yet the excellence of these discs posed a problem for the working musicians who had to play in their wake. Concert presenters began to complain that record collectors had formed a separate audience, one that seldom ventured into the concert hall. Recordings threatened to become a phantasmagoria, a virtual reality encroaching on concert life. (Gould claimed that the Decca Ring achieved “a more effective unity between intensity of action and displacement of sound than could be afforded by the best of all seasons at Bayreuth.”) When people did venture out, they brought with them the habits of home listening. The solitary ritual of absorbing symphonies in one’s living room almost certainly contributed to the growing quietude of the classical public; that applause-free spell after the first movement of the Eroica matches the whispery groove on the long-playing record.
Like Heisenberg’s mythical observer, the phonograph was never a mere recorder of events: it changed not only how people listened but also how they sang and played. Mark Katz, in his book Capturing Sound, calls these changes “phonograph effects.” (The phrase comes from the digital studio, where it is used to describe the crackling, scratching noises that are sometimes added to pop-music tracks to lend them an appealingly antique air.) Katz devotes one chapter of his book to a shift in violin technique that occurred in the early twentieth century. It involved vibrato—the trembling action of the hand on the fingerboard, whereby the player is able to give notes a warbling sweetness. Early recordings and written evidence suggest that in prior eras vibrato was used more sparingly than it is today. By the twenties and thirties, many leading violinists had adopted continuous vibrato, which became the approved style in conservatories. Katz proposes that technology prompted the change. When a wobble was added to violin tone, the phonograph was able to pick it up more easily; it’s a “wider” sound in acoustical terms, a blob of several superimposed frequencies. Also, the fuzzy focus of vibrato enabled players to cover up inaccuracies of intonation, and, from the start, the phonograph made players more self-conscious about intonation than they were before. What worked in the studio then spread to the concert stage.
Robert Philip, a British scholar who specializes in performance practice, tackles the same problem in his book Performing Music in the Age of Recording. He proposes that when musicians listened to records of their own playing they passed through a kind of mirror stage; for the first time, they were forced to confront their “true” selves. “Musicians who first heard their own recordings in the early years of the twentieth century were often taken aback by what they heard, suddenly being made aware of inaccuracies and mannerisms they had not suspected,” Philip writes. When they went back onstage, he says, they tried to embody the superior self that they glimpsed in the phonographic mirror, and never again played in quite the same way.
Philip gives a riveting description of what classical performances sounded like at the turn of the last century. “Freedom from disaster was the standard for a good concert,” he writes. Rehearsals were brief, mishaps routine. Precision was not a universal value. Pianists rolled chords instead of playing them at one stroke. String players slid expressively from one note to the next—portamento, the style was called—in imitation of the slide of the voice. In a 1912 recording, the great Belgian violinist Eugène Ysaÿe “sways either side of the beat, while the piano maintains an even rhythm.” Orchestras flirted with chaos in an effort to generate maximum passion—witness Edward Elgar’s recordings of his music. And the instruments themselves sounded different, depending on the nationality of the player. French bassoons had a pungent tone, quite unlike the rounded timbre of German bassoons. French flutists, by contrast, used more vibrato than their German and English counterparts, supplying a warmer, mellower quality. American orchestral culture, which brought together immigrant musicians from all European countries, began to erode the differences, and recordings helped to cement the new standard practice. Whatever style sounded cleanest on the medium—in these cases, German bassoons and French flutes—became the golden mean. Young virtuosos today may have recognizable idiosyncrasies, but their playing seldom indicates that they have come from any particular place or that they have emerged from any particular tradition.
Opera is prey to the same standardizing trend. The conductor and scholar Will Crutchfield cites a startling example of a “phonograph effect” in an essay on changing perceptions of operatic style. He once sat down to compare all extant recordings of “Una furtiva lagrima,” the plaintive tenor aria from Donizetti’s bel-canto comedy L’elisir d’amore. Crutchfield wanted to know what singers of various eras have done with the cadenza—the passage at the end of the aria where the orchestra halts and the tenor engages in a few graceful acrobatics. Early recordings show singers trying out a range of possibilities, some contemplative, some florid, none the same. Then came Caruso. He first recorded “Una furtiva lagrima” in 1902, and returned to it three more times in the course of his epochal studio career. After that, tenors began imitating the stylish little display that Caruso devised: a quick up-and-down run followed by two slow, tender phrases. Out of more than two hundred singers who have recorded the aria since Caruso’s death, how many try something different? Crutchfield counts four. Many operagoers would identify Caruso’s cadenza as the “traditional” one, but Crutchfield calls it the “death-of-tradition” cadenza, the one that stifled a long-flourishing vocal practice.