How Music Works

by David Byrne


  The other issue often mentioned was that club music was “manufactured,” made by machines, robotic—the implication being that the heart had been taken out of it. It was also claimed that this music wasn’t original; it was made by cobbling together bits of other people’s recordings. Like mixtapes. I’d argue that other than race and sex, this latter aspect was the most threatening. To rock purists, this new music messed with the idea of authorship. If music was now accepted as a kind of property, then this hodgepodge version that disregarded ownership and seemed to belong to and originate with so many people (and machines) called into question a whole social and economic framework. With digital technology around the bend, the situation would only get worse—or better, depending on your point of view.

  CHAPTER FOUR

  Technology Shapes Music

  Part Two: Digital

  I heard computer scientist Jaron Lanier speak at a symposium recently. After playing some pieces on a sheng, a Chinese mouth organ, he said that it had a surprising and prodigious heritage. He claimed that this instrument was maybe the first in which the notes to be played were chosen by a mechanism, a mechanism that was a precursor to binary theory and thus to all computers.

  That ancient bit of gear found its way to Rome via the Silk Road, and the Empire had a giant version built—as Empires are wont to do. This larger instrument required an assistant to pump the air—it was too big to play by mouth anymore—and, more significantly, a series of levers that selected the notes. This system was the inspiration for what we now know as the keyboard—the series of levers that are used to play the notes of organs (which are also large wind instruments) and pianos. It was also inspirational to the Frenchman Joseph Marie Jacquard, who in 1801 made a weaving loom whose complicated patterns were guided by punch cards. One could control the design of a fabric by stringing the cards together.

  Decades later, Jacquard’s loom was inspirational to Charles Babbage, who owned one of Jacquard’s self-portraits, an image of the inventor woven in silk using those same punch cards. Babbage designed his Analytical Engine—a computational machine that, were it ever built, would also have been controlled by punch cards. In Babbage’s version the cards no longer controlled threads, but made the leap to binary abstraction—making pure calculations. Babbage’s young friend Ada Byron (daughter of the poet) was fascinated by the device, and many years later became celebrated as the first computer programmer. So, according to Lanier, our present computer-saturated world owes something of its lineage to a musical instrument. And computer technology, not too long after it came into being, affected music as well.

  The technology that allowed sound information (and, soon thereafter, all other information) to be digitized was largely developed by the phone company. Bell Labs, the research division of the Bell Telephone Company, had a mandate to find more efficient and reliable ways of transmitting conversations. Prior to the sixties, all phone lines were analog and the number of conversations that could be handled at one time was limited. The only way to squeeze more calls through the lines was to roll the high and low frequencies off the sound of the voice, and then turn the resulting lo-fi sound into waves that could run in parallel without interfering with one another—much like what happens with terrestrial radio transmissions.
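
  That band-limiting step is simple enough to sketch in code. Here is a minimal illustration in Python (the 300 to 3,400 Hz “voice band” is the classic telephone figure; the particular filter, sample rate, and test signal are my own choices, purely for demonstration):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def telephone_band(voice, sample_rate):
        """Roll the highs and lows off a voice, telephone-style (~300-3400 Hz)."""
        sos = butter(4, [300, 3400], btype="bandpass", fs=sample_rate, output="sos")
        return sosfilt(sos, voice)

    # A test signal: a 220 Hz "voice" tone plus an 8 kHz component
    # that a phone line would simply discard.
    sr = 44100
    t = np.arange(sr) / sr
    voice = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 8000 * t)
    narrow = telephone_band(voice, sr)   # lo-fi, but still recognizably voice-like

  Each narrowed call could then be shifted into its own frequency slot so that many of them could share a single wire.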

  Bell Labs was huge, and gave birth to a slew of new inventions. The transistor and semiconductors that form silicon-based integrated circuits (making today’s tiny devices possible), the laser, microwave technology, and solar panels—the list goes on and on. When you’re a monopoly you can afford to spend on R&D, and they had the luxury and foresight to take the long view. Scientists and engineers could work on a project that might not show results for ten years.

  In 1962, Bell Labs figured out how to digitize sound—to, in effect, sample a sound wave and slice it into tiny bits that could be broken down into ones and zeros. When they could do this in a way that was not prohibitively expensive and that still left the human voice recognizable, they immediately applied that technology to making their long-distance lines more efficient. More calls could now be made simultaneously, as the voice was now just a stream of ones and zeros that they could squeeze (via encoding and transposing), along with other calls, into their telephone cables. This was especially relevant considering the limitations imposed by long-distance underwater phone cables; you couldn’t just go out and lay more lines down if suddenly it seemed more people wanted to talk to France. Viewed from Bell’s perspective, a voice is, in the abstract sense, a kind of information. Therefore, much of their research regarding what made a transmission understandable, or how you could squeeze more transmissions in, involved applying the science of information in combination with insights gleaned from the science of psychoacoustics—the study of how the brain perceives sound in all its aspects. So, understanding how we perceive sound became integrated with the quest for how to most efficiently transmit information of all kinds. It was even relevant to the meta question “what is information anyway?”
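
  The basic move (measure the wave thousands of times a second, store each measurement as a binary number) can be sketched in a few lines of Python. The 8,000-samples-per-second rate and 8-bit depth below are illustrative choices, not Bell’s actual parameters:

    import numpy as np

    def digitize(rate, bits, duration, wave):
        """Sample a continuous wave and quantize each sample into ones and zeros."""
        t = np.arange(0, duration, 1.0 / rate)        # the sampling instants
        samples = np.clip(wave(t), -1.0, 1.0)         # measure the wave
        levels = 2 ** bits
        codes = np.round((samples + 1) / 2 * (levels - 1)).astype(int)
        return [format(int(c), f"0{bits}b") for c in codes]

    # A 440 Hz tone becomes a stream of 8-bit binary codes.
    stream = digitize(8000, 8, 0.01, lambda t: np.sin(2 * np.pi * 440 * t))
    print(stream[:4])   # the first few slices of the wave, as ones and zeros

  Run in reverse, turning the binary codes back into voltages, the same slices become sound again.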

  Psychoacoustics has applications to the sound of ambulances (why can we never tell where they’re coming from?), the speaking voice, and, of course, music. The psycho prefix is there because what and how we hear is not simply mechanical; it’s mental (meaning the brain “hears” as much as the ear does, not mental as in insane, or insanely great).

  Of course, much of what we hear is partly defined and limited by the mechanics of our ears. We know that we can’t hear all the high-pitched sounds that bats emit or the full range of sounds that a dog can hear. There are low-pitched sounds that whales produce that we can’t really hear either, though they are strong enough to do us physical harm if we are too close to the source.

  But there are things we “hear” that have nothing to do with the physics of the eardrum and the auditory canal. We can, for example, isolate the voice of someone talking to us in a noisy environment. If you were to listen to a recording of a noisy restaurant it would sound like acoustic chaos, but sometimes we manage to make order out of it, and carry on a rudimentary conversation. Repetitious sounds, the sound of waves or constant traffic, become somewhat inaudible to us after a while. We have the ability to selectively hear just the stuff we’re interested in, and make the rest recede into some distant acoustic background. We also have the ability to perceive patterns in sounds. This too has nothing to do with our ears. We can remember pitches, and some people with perfect pitch can accurately determine notes heard out of a musical context. We can tell if the sound of squealing subway brakes and the highest note on a clarinet are the same. We can remember sequences of sounds—a bird’s song or a door creak followed by a slam—and the exact timbre of sounds—we sometimes recognize a friend’s voice by hearing a single word.

  How does this work? Can we simulate that mental process with a mathematical formula or a computer program? As you can imagine, such questions—how little information do we need to recognize someone’s voice, for example—were of prime importance to a phone company. If they could understand what exactly makes speech understandable and intelligible, and isolate just that aspect—refine it, control it—then they might increase the efficiency of their phone system by eliminating all the superfluous parts of the transmissions. The goal was to eventually communicate more and more using less, or the same amount, of the mechanical and physical electrical stuff. This possible increase in information flow would make them a lot more money. Psychoacoustics would eventually lead to an increased understanding of information transmission. This arcane science was suddenly hugely useful.

  An unforeseen consequence of this phone-related research was the emergence of digital-based audio technology that was eventually used in, among other places, recording studios. In the seventies a new piece of equipment the size of a briefcase appeared in recording studios. It was called a Harmonizer, and it could change the pitch of a sound without changing its speed or tempo, as would happen if you changed the pitch by speeding up a tape. It achieved this by slicing up the sound waves into digital slivers, mathematically transposing what were now merely numbers, and then reconstructing those as sounds at a higher or lower pitch. The early versions of this machine sounded pretty glitchy, but the effect was cool, even when it didn’t work.
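
  A toy version of that slice, transpose, and reconstruct trick, in Python (real Harmonizers used far more careful splicing; the grain size and simple interpolation here are my own simplifications):

    import numpy as np

    def pitch_shift(signal, semitones, grain=1024):
        """Re-read each slice of the wave faster or slower, keeping its length."""
        ratio = 2 ** (semitones / 12.0)               # +12 semitones = double speed
        out = np.zeros_like(signal)
        positions = np.arange(grain)
        for start in range(0, len(signal) - grain, grain):
            chunk = signal[start:start + grain]
            src = (positions * ratio) % (grain - 1)   # read sped up, wrap to fill
            out[start:start + grain] = np.interp(src, positions, chunk)
        return out

  The clicks at each slice boundary are, more or less, the glitchiness those early machines were known for.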

  Around that same time there appeared other devices called digital delays, which were in effect primitive samplers. The digital samples they created in order to mimic acoustic echoes were usually much less than a second long and they would be used to produce very short delay effects.
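
  In code, a digital delay is nothing more than a short buffer of stored samples played back late. A minimal sketch, with the delay time and feedback amount as arbitrary illustration values:

    import numpy as np

    def digital_delay(signal, sample_rate, delay_seconds=0.05, feedback=0.4):
        """Echo: mix each sample with a stored copy from delay_seconds ago."""
        buf = np.zeros(int(sample_rate * delay_seconds))   # the held sound
        out = np.empty_like(signal)
        for i, x in enumerate(signal):
            j = i % len(buf)                               # circular buffer position
            out[i] = x + buf[j]                            # play back the old sample
            buf[j] = x + feedback * buf[j]                 # recirculate the echo
        return out

  That stored buffer is, in effect, the “sample”; make it longer and you are holding a reusable piece of sound rather than a quick echo.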

  More devices followed: machines that could grab and hold longer sound samples with greater resolution, and some that could manipulate those “sounds” (they were really just numbers) more freely. All sorts of weirdness resulted. Bell Labs was involved in manufacturing a sound processor called a vocoder that could isolate certain aspects of talking (or singing) like speech formants (the shape of the sounds that we use to form words). This device could remove these aspects of our talking or singing from the pitch—like isolating just the percussive parts—the t’s and b’s and the sibilants of s’s and f’s. This machine could transmit these formant aspects of a voice separate from the rest of a vocalization, and the resulting gibberish, when transmitted, was more or less unintelligible. But the components of intelligible speech were still there. The elements of the sound of speech or singing had been deconstructed, and could make sense again when put back together. Wonderful, but what do you do with this? One use for this technology was a sort of cryptology for the voice: the garbled nonsense could be “decoded” at the other end if you knew what had been taken out and where. These machines were also adopted for music production. Below, the German band Kraftwerk’s vocoder, made especially for them.
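
  The classic “channel vocoder” design can be sketched as a bank of filters and envelope followers. A rough Python illustration (the band count, frequency range, and filter choices are invented for the example; units like Kraftwerk’s were analog hardware):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def vocode(voice, carrier, sr, bands=8, lo=100.0, hi=4000.0):
        """Impose the voice's band-by-band loudness envelopes onto a carrier."""
        n = min(len(voice), len(carrier))
        edges = np.geomspace(lo, hi, bands + 1)            # log-spaced band edges
        env_sos = butter(2, 50, btype="low", fs=sr, output="sos")  # envelope smoother
        out = np.zeros(n)
        for f_lo, f_hi in zip(edges[:-1], edges[1:]):
            sos = butter(2, [f_lo, f_hi], btype="bandpass", fs=sr, output="sos")
            v_band = sosfilt(sos, voice[:n])               # this slice of the voice
            c_band = sosfilt(sos, carrier[:n])             # same slice of the carrier
            envelope = sosfilt(env_sos, np.abs(v_band))    # how loud the voice is here
            out += c_band * envelope                       # carrier shaped like speech
        return out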

  A vocoder was typically used to apply those isolated and separated speech formants to the sound of a pitched instrument. The instrument then appeared to be talking or singing. Often the resulting “voice” was somewhat robotic sounding, an aspect that likely appealed to Kraftwerk. I once used a vocoder like this that I borrowed from Bernie Krause, a musician and early synthesizer pioneer, who I met when Brian Eno and I did the Bush of Ghosts record. The vocoder was beautifully made, but rather complicated, and very expensive.

  An early Harmonizer (that digital pitch shifter) cost thousands of dollars. A good digital reverberation unit set a studio back maybe ten thousand dollars, and a full-fledged digital sampling device, like the Fairlight or a Synclavier that emerged soon after, cost much, much more. But soon the price of memory and processing dropped, and the technology became more affordable. Inexpensive Akai samplers became the backbone of hip-hop and DJ mixes, replacing the earlier use of vinyl, and sampled or digitally derived drum sounds took the place of live drummers in many recordings. We were off to the races, for better or worse.

  With the digitization of sound, digital recording and consumer products like the CD became possible, and entire record albums were soon sliced into these tiny slivers of ones and zeros. Not long after that, the capacity and speed of home computers became sufficient to allow individuals to record, archive, and process music. All of this follows from Bell Labs’ desire to improve the efficiency of their phone lines.

  Bell Labs eventually became Lucent. I visited their labs in the mid-nineties and they showed me a processor that could squeeze what sounded to the ear like CD-quality music into a minuscule bandwidth. I believe encoding music as MP3s had already been invented in Germany by that time, so this extremely efficient compressing/encoding trick was not a complete surprise. And it was certainly no surprise that squeezing more sound information into smaller spaces continued to be a priority for a subsidiary of a phone company. But like many people, I worried that the quality of music might somehow get sacrificed in this “rezzing down” process.

  Early 1970s transistor vocoder, custom built and used by the pop duo Kraftwerk.

  I was right. Those early, low-bandwidth digital files sounded slightly off, as if something ineffable was missing. It was hard to put your finger on why they sounded wrong, but they did. All the frequencies seemed to be there, but something seemed to have been sucked out in the process. Zombie music. MP3s have improved quite a bit since then, and now I listen to most of the music I own in that format. I believe what Lucent was working on ended up being used for satellite-radio transmission: getting “CD-quality” sound into smaller bandwidth transmissions, so that a satellite could send out lots of channels of sound that seem to be of high quality. Similar processing would be applied to photographs and video signals, which allows us to stream movies without them looking completely grainy or pixelated.
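
  The principle behind that squeezing can be caricatured in a few lines: transform the sound into frequency components, discard the ones too quiet to matter, keep the rest. A deliberately crude sketch in Python (real MP3 encoders use elaborate psychoacoustic masking models, not this blunt cutoff):

    import numpy as np

    def rez_down(signal, keep_fraction=0.1):
        """Toy lossy compression: keep only the strongest frequency components."""
        spectrum = np.fft.rfft(signal)
        cutoff = np.quantile(np.abs(spectrum), 1 - keep_fraction)
        spectrum[np.abs(spectrum) < cutoff] = 0    # discard the quiet components
        return np.fft.irfft(spectrum, n=len(signal))

  Keep too little and you get exactly that zombie quality: all the frequencies seemingly there, something ineffable missing.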

  In 1988, I got an advance peek at this technology as it was applied to visual information when the designer Tibor Kalman and I visited a printing studio on Long Island. The studio had a machine that could digitize and then subtly manipulate images (we wanted to “improve” the image that was to be used on a Talking Heads record cover). Like the early computers and recording-studio gear, this machine was incredibly expensive and rare. We had to go to it (it couldn’t be brought to the design studio), and we had to book time in advance. A Scitex machine, I think it was called. Impressed as we were, its cost and rarity meant we didn’t think much about incorporating its talents into future projects.

  After a while, as with sampling, the price of scanning images dropped, and manipulating images using Photoshop became common. There are some film holdouts, and I have no doubt that, as with MP3s, something has been lost with digital images, but, well, for most of us, the trade-off seems acceptable—and inevitable. Needless to say, as images become digitized, they enter the river of networked data. Images for us are increasingly sequences of ones and zeros—information, like everything else. The digitization of every form of media enabled the Web to be what it is, much more than a way of transmitting text-based documents. This slicing of content allowed a wide variety of media to flow into that river, and in a way we owe all the pictures, sounds, songs, games, and movies that are part of our Internet experience to the phone company, information science, and psychoacoustics.

  CDS

  CDs, which made their debut in 1982, were jointly developed by Sony in Japan and Philips in Holland. Previously, digitized movies had been stored on LaserDiscs, which were the size of LPs, and the prospect of encoding an entire record album’s worth of sound therefore seemed within reach. If the discs could be made smaller, it could be lucrative. Philips had the laser aspect in development and Sony had the manufacturing prowess, so they agreed to work on this new format together. The arrangement was unusual; usually one company developed a format on their own and then tried to exert control over it so they could start charging others for using it. As a result, a lot of proprietary nonsense that could have burdened the acceptance and dissemination of CDs was avoided.

  It was rumored that the length of the CD was determined by the duration of Beethoven’s Ninth Symphony, because that was Norio Ohga’s favorite piece of music, and he was the president of Sony at that time. Philips had designed a CD with an 11.5 cm diameter, but Ohga insisted that a disc must be able to hold the entire Beethoven recording. The longest recording of the symphony in Polygram’s archive was 74 minutes, so the CD size was increased to 12 cm diameter to accommodate the extra data.
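
  The arithmetic behind that decision is easy to check, assuming the standard CD format of 16-bit stereo samples taken 44,100 times a second:

    seconds = 74 * 60                        # 74 minutes of Beethoven
    bytes_needed = seconds * 44100 * 2 * 2   # samples/sec x 2 channels x 2 bytes
    print(bytes_needed / 1e6)                # roughly 783 MB of audio data

  Hence the slightly larger disc.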

  Unlike LPs, whose grooves and bouncing needles limited how loud and how deep a record could go, CD technology placed practically no limit on volume or on low frequencies. The music was no longer mirrored by physical grooves, but was now encoded in a series of digital ones and zeros. Though these discs spun around like LPs, they were technically nothing like the old records. Their extended audio range resulted from the fact that since there was no physical analogue of the sound, the coded messages “told” the CD player what frequencies to play. The ones and zeros could tell the stereo system to play anything audible to the human ear, at whatever frequency or volume was desired. This sonic range was really only limited by the sampling and playback mechanisms, which could even capture sounds outside the range of human hearing. The expanded and unlimited sound range was now, or would soon be, available to everyone.

  Inevitably, this sonic freedom got abused quite a bit. Some records (the writer Greg Milner mentions most Oasis albums and Californication by the Red Hot Chili Peppers) were made so artificially loud that though the music seemed amazing on first listen (it was louder, and more consistently louder, than anything else), it rapidly wore on the ears. Milner claims that this “volume war” was spurred by radio DJs and technicians who wanted their stations to seem louder than the stations near them on the radio dial.1 To achieve this, inventor Mike Dorrough developed a device in the sixties called a “discriminate audio processor,” which caught on widely years later when every station was trying to be louder than every other. Milner speculates that musicians and record producers responded to this competition by figuring out how to make their records sound louder, and stay louder, for the whole length of the record.2 Pretty soon there was ear fatigue all around. The listener never got a break; there was no dynamic range anymore. Milner suggests that even rabid music fans can’t listen to these records over and over, or very much at all. The actual enjoyment of them is short lived, and he proposes that this might have had something to do with pushing consumers away from purchasing recorded music. The technology that was supposed to make music more popular than ever instead made everyone run away from it.
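
  The processing behind this loudness race is unglamorous. A crude sketch of the squash-then-boost idea, with the threshold and ratio invented for illustration (real mastering limiters are far more refined):

    import numpy as np

    def loudness_war(signal, threshold=0.3, ratio=8.0):
        """Crude compressor: flatten the peaks, then turn everything back up."""
        mag = np.abs(signal)
        squashed = np.where(mag > threshold,
                            threshold + (mag - threshold) / ratio,  # tame the loud parts
                            mag)
        compressed = np.sign(signal) * squashed
        return compressed / np.max(np.abs(compressed))   # make-up gain: louder overall

  The make-up gain at the end is the whole game: everything comes out loud, all the time.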

  CRAPPY SOUND FOREVER

  Early CDs, like the MP3s that followed, didn’t sound all that great. Dr. John Diamond treated psychotic patients with music, but by 1989 he sensed that it had all gone wrong. He claims that the natural healing and therapeutic properties of music were lost in the rush to digitize.3 He believes that certain pieces of music can help soothe and heal, if they are the entirely analog versions, while the digital versions actually have the reverse effect. When his test subjects are played digital recordings, they get agitated and twitchy.

 
