Movies and Meaning- Pearson New International Edition

by Stephen Prince


  At the beginning of Double Indemnity (1944), with a bullet wound slowly leaking blood from his shoulder, a cynical insurance agent confesses his crime: he killed a man for money and a woman and, as fate would have it, didn’t get either.

  Voice-over narration can be used for ironic or playful effects. In one of the most famous films noir, Sunset Boulevard (1950), the narrator turns out to be a dead man.

  The film opens with shots of a man’s body floating in a swimming pool. The police arrive and remove the body as the narrator, a screenwriter named Joe Gillis, tells how the murder occurred. It is not until the end of the movie that viewers realize the dead man is Joe Gillis. He talks wistfully about how it feels when the police fish him out of the pool and lay him out “like a harpooned baby whale.”

DOUBLE INDEMNITY (PARAMOUNT, 1944)

Hard-boiled, tough-guy dialogue, spoken as voice-over narration, coupled with dark, low-key lighting, establishes the hard-edged, cynical atmosphere of classic film noir. Fred MacMurray, as the doomed Walter Neff, provides the gripping narration about a murder scheme gone awry. Frame enlargement.

Principles of Sound Design

Of course, in the case of Sunset Boulevard, the narration is unreliable and misleading. Dead men don’t talk. Director Billy Wilder plays against an established convention of voice-over narration: that the character doing the narrating must survive the events of the story. In this case he doesn’t, which enabled Wilder to pull off one of his darkest jokes. In a similar manner, the narrator of American History X (1998) is murdered but continues his narration, commenting on what his death has taught him. American Beauty (1999) uses the same device.

  While voice-over narration is closely identified with U.S. films noir, it also has been used in documentary filmmaking, especially that subcategory of documentaries known as the newsreel. Newsreels routinely accompanied feature films, cartoons, and serials in the nation’s movie theaters in earlier decades, and they typically employed the so-called “voice of God” narrator. Such a narrator was male and spoke with a deep, booming, authoritative tone.

In Citizen Kane (1941), director Orson Welles satirized the “voice of God” newsreel narrator. The film tells the life story of Charles Foster Kane, a rich newspaperman who rose from humble beginnings. The film opens with Kane’s death; a newsreel follows, viewed by newspaper reporters seeking background for their stories about Kane’s death. The newsreel features a “voice of God” narrator as Welles expertly mimics the conventions of this kind of documentary.

Beyond the fake newsreel, however, Citizen Kane offers a host of other voice-over narrators. Citizen Kane is a classic and superlative example of voice-over narration used for complex effect and as an essential ingredient of film structure. The plot of the film is constructed as a series of flashbacks, each one narrated by a different character, which makes the emerging portrait of Charles Foster Kane into a kaleidoscope. Characters recollecting Kane include the millionaire banker, Walter P. Thatcher, who was given custody of Kane as a little boy; Susan Alexander, Kane’s second wife; Jed Leland, the drama critic who worked briefly on Kane’s newspapers; Mr. Bernstein, Kane’s chief editor and close friend; and Raymond, Kane’s personal valet.

SUNSET BLVD. (PARAMOUNT, 1950); DAUGHTERS OF THE DUST (AMERICAN PLAYHOUSE, 1991)

Voice-over narration can be quite playful. Joe Gillis (William Holden) narrates Sunset Blvd., despite the fact that he’s a murder victim. As the film begins, the police find Joe floating face down in a swimming pool, and he proceeds to tell us how he ended up there. Billy Wilder’s film plays with the movie convention that narrators will survive the stories they tell. An unborn child narrates Julie Dash’s Daughters of the Dust and makes fleeting appearances in the film as a kind of phantom. She tells the film’s story about the Gullah people of the Sea Islands and their migration to the U.S. mainland, events that are not yet a part of her own life-to-come. Frame enlargements.

CITIZEN KANE (RKO, 1941)

Jed Leland (Joseph Cotten), one of the principal narrators in Citizen Kane, explains why Kane’s first marriage failed. As he begins his speech, the image dissolves to the past to show the first Mrs. Kane at breakfast. The narrative voices are not easily reconciled; Leland describes events he couldn’t possibly have witnessed. Frame enlargement.

  Each of these characters narrates a section of the film, recalling events in ways that clash with the memories of the other narrators. For example, Jed Leland recalls the Charles Foster Kane who betrayed his ideals and principles, whereas Mr. Bernstein emphasizes those principles, remembering how Kane used his newspaper to fight crime and expose official graft and corruption.

The voice-over narration frames the various flashbacks and colors them with a variety of psychological perspectives. Citizen Kane, in part, is a mystery film. The mystery is Kane’s personality, which ultimately remains unknowable. It is difficult to reconcile the various Kanes disclosed in the narrators’ memories because each is so different from the others. In this way, the respective voice-over narrations deepen the emotional and psychological mystery of the film, the nature of Kane’s personality. Few films in cinema history have used voice-over narration so skillfully and with such profound structural and emotional effects.

ADR AND DIALOGUE MIXING Most of the dialogue heard in the average feature originates from the production track (the soundtrack recorded at the point of filming), but 30 percent or more of a film’s dialogue is the result of ADR (automated dialogue replacement). Following shooting, actors recreate portions of a scene’s dialogue in a sound studio, and this postproduction sound is mixed in with dialogue from the production track. The mixer must smooth out the audible differences of tone and timbre and make sure that no audio cuts are apparent to the listener. Digital software facilitates the ADR process, eliminating the need for an actor to speak in perfect synch with the picture; the software can match the ADR speech with the lip movements on screen.



  PRETTY WOMAN (TOUCHSTONE, 1990)

  The blocking of a scene can create opportunities to add new dialogue using ADR. These two frames from a single shot show how changing character positions facilitated the addition in postproduction of the salesman’s line of dialogue about helping her use the credit card. The salesman is visible at the rear (a) between Julia Roberts and Richard Gere, but when Roberts walks out of the store, she blocks the salesman from the camera’s view (b), at which point the new line of dialogue was inserted. Frame enlargement.

ADR is typically used when portions of the production track are unusable or unsatisfactory, and some films, such as Sergio Leone’s Once upon a Time in America (1984), have extraordinarily high amounts of ADR. All the dialogue in that picture was done as ADR; none originated from the production track.

Camera placement can create opportunities for using ADR. One of the highlights of Pretty Woman (1990) occurs when Julia Roberts goes on a Beverly Hills shopping spree. The ensuing montage is scored to the titular Roy Orbison song, and a dialogue exchange between Richard Gere and the shop clerk (Larry Miller) kicks off the montage. Gere tells the clerk that she has the credit card, and the clerk incautiously replies that he’ll help her use it. The clerk’s dialogue was dropped in as late-in-the-game ADR, an opportunity facilitated by the blocking of the scene, as the accompanying frame enlargements demonstrate.


  Sound Effects

Sound effects are the physical (i.e., nonspeech) sounds heard as part of the action and the physical environments seen on screen. They include ambient sound, which is the naturally occurring, generally low-level sound produced by an environment (wind in the trees, traffic in the city). They also include the sounds produced by specific actions in a scene, such as the rumble of the spaceship Nostromo in Alien (1979) as it passes nearby, or the crash of broken glass as Mookie throws a trashcan through the window of Sal’s Pizzeria in Do the Right Thing (1989). Digital methods of sound recording and mixing enable sound engineers to achieve an impressive aural separation of individual sound elements. This gives the effects in contemporary film a richer texture than in decades past and enables selective emphasis of individual effects without a corresponding loss of the overall sonic context.

Virtually all the sound effects that one hears in contemporary film are the result of postproduction manipulation. Sound effects recorded as part of the production track may be electronically cleaned and optimized, but most are recorded separately and in places other than the filming environment. Many effects are created using Foley technique, the live performance and recording of sound effects in synchronization with the picture. As the film is projected in a sound recording studio, a Foley artist watches the action and performs the necessary effects. A Foley artist might walk across a bare floor using hard shoes in synchronization with a character on-screen to produce the needed effects of footsteps. The Foley artist may open or close a door or drop a tray of glasses on the floor to create these effects as needed in a given scene.

  Foley techniques require considerable physical dexterity, often verging on the acrobatic, from the artists creating these live effects. Foley is often needed because many of today’s films involve the use of radio microphones that are attached to individual actors in a scene. Unlike mikes on a boom overhead, radio mikes fail to pick up natural sounds in the environment, and these often have to be dubbed using Foley techniques.

Because of the nonspecific nature of sound—taken out of context, many sounds are difficult to identify—Foley often uses objects that are not part of the scene. To create sound effects in Star Wars Episode II: Attack of the Clones (2002) for the skin surfaces of alien creatures, when other aliens or objects touch them, the Foley artists used pineapples, coconuts, and cantaloupes. The rough texture of their surfaces proved ideally suited to evoking the imaginary sound of alien skin.

Whether or not Foley is employed to create a given effect, digital tools enable sound engineers to electronically enhance effects and introduce changes in the sound-wave characteristics of a given source. The effects track of a film is the highly processed outcome of these electronic methods of sound manipulation. Leading the industry’s transition to digital audio in 1984, Lucasfilm had a proprietary digital sound workstation (ASP, Audio Signal Processor) that stored and mixed sound in digital format. For Indiana Jones and the Temple of Doom (1984), when Jones is surrounded by a bevy of arrows flying toward him, ASP electronically extended the arrows’ whizzing sounds and added Doppler effects (Doppler is a means of spatializing sound by altering its pitch).
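The pitch alteration behind a Doppler effect follows from simple arithmetic: sound from an approaching source is compressed to a higher frequency, and sound from a receding source is stretched to a lower one. The sketch below illustrates only that relationship; the function name and figures are hypothetical, not a reconstruction of the ASP's actual processing.

```python
def doppler_shift(freq_hz, source_speed_ms, sound_speed_ms=343.0):
    """Frequency heard from a moving sound source.

    Positive source_speed_ms means the source approaches the
    listener (pitch rises); negative means it recedes (pitch falls).
    343 m/s is the approximate speed of sound in air.
    """
    return freq_hz * sound_speed_ms / (sound_speed_ms - source_speed_ms)

# An arrow "whizzing" at 440 Hz, flying toward the listener at 34.3 m/s,
# is heard noticeably higher; flying away, noticeably lower.
approaching = doppler_shift(440.0, 34.3)   # roughly 489 Hz
receding = doppler_shift(440.0, -34.3)     # roughly 400 Hz
```

Sweeping the speed value over the course of an effect, so the pitch slides from high to low as the source passes, is what produces the familiar fly-by sensation.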

The simple, raw recording of a given effect usually lacks emotional impact, so audio engineers typically manipulate the effect by layering in other components to make it suitably expressive. In Apocalypse Now (1979), during the scene where panicky Americans machine-gun a group of Vietnamese in their boat, sound designer Walter Murch wanted to affect the viewer’s psychological and emotional response to the machine-gun sound.

  He wanted the viewer to feel that the sound was realistic even though it was not a live recording of a single source but a synthetic blend of multiple, separate recordings.

Murch backed the microphone away from the gun to get a clean recording and then, later, added supplementary elements such as the clank of discharging metallic cartridges and the hiss of hot metal. By layering these additional features over the softer sound of the gun firing, Murch artificially created a convincing realism in ways that were compatible with his recording technology. Doing this involved “disassembling” the sound rather than capturing it live and direct on tape.
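At its core, this kind of layering is gain-weighted, sample-by-sample summation of aligned recordings. A toy sketch of the arithmetic, with hypothetical names and values rather than Murch's actual tools:

```python
def layer(tracks, gains):
    """Mix several equal-length sample lists into one track.

    Each track is scaled by its gain and the results are summed
    sample by sample -- the arithmetic behind layering a softer
    gun recording with louder cartridge clanks and metal hiss.
    """
    length = len(tracks[0])
    return [sum(g * t[i] for g, t in zip(gains, tracks))
            for i in range(length)]

# Illustrative four-sample snippets: a quiet gun pop plus two
# supplementary elements, each entering the mix at its own level.
gun = [0.2, 0.4, 0.2, 0.0]
clank = [0.0, 0.8, 0.6, 0.1]
hiss = [0.1, 0.1, 0.3, 0.3]
mixed = layer([gun, clank, hiss], gains=[1.0, 0.7, 0.5])
```

A real mix would also align the elements in time and keep the summed signal within the valid amplitude range, but the principle is the same: the final effect is a weighted sum of separate recordings.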

  In Terminator 2 (1991), for the gun battle in an underground parking garage, sound designer Gary Rydstrom recorded guns firing in this reverberant space. But to make the sound interesting, he also recorded the sound of two-by-fours slapping together in the garage and layered this echoing sound into the effect to “fatten” it up.

In Backdraft (1991), Rydstrom gave blazing fires an audio presence and personality by layering in animal growls and monkey screams. Given the film’s context—about deadly urban fires—he knew the audience would not hear these sounds as animal noises but as attributes of the fire. For the backdrafts, produced when a huge fire sucks in oxygen before exploding, he used coyote howls, which gave the backdrafts a subliminal personality and intelligence. Expressive sound effects are complex, artificial creations that transcend their live sound components.

APOCALYPSE NOW (UNITED ARTISTS, 1979)

The sound of the machine gun in Apocalypse Now was actually a blend of multiple, separate elements expertly layered together to produce the psychological impression of a single, live source. Frame enlargement.

BACKDRAFT (UNIVERSAL, 1991)

Taken out of context, the meaning of an isolated sound can be very fluid and difficult to identify. This enables sound designers to attach sounds to unrelated images to great effect. The fires in Backdraft were mixed with animal sounds, although viewers did not identify these sounds as such. This audio design suggested that fire was a kind of living organism, with intelligence and personality. Frame enlargement.

  Music

Music has always accompanied the presentation of films for audiences. During the silent period, film music was often drawn from public-domain classical selections or from the popular tunes of the era. Numerous catalogues offered filmmakers or musical directors a guide for selecting appropriate music depending on the tempo of a scene and its general emotional content. In addition, some original symphonic scores were composed for silent films.


The original score composed especially for motion pictures became standard practice in the sound period. While many different musical styles can be employed in film scoring—jazz (Mo’ Better Blues, 1990), rock (Bill and Ted’s Excellent Adventure, 1989), ragtime (Ragtime, 1981), symphonic orchestral (Star Wars, 1977)—music is typically used to follow action on-screen and to illustrate a character’s emotions.

  CREATING MOVIE MUSIC The production of movie music involves five distinct steps: spotting, preparation of a cue sheet, composing, performance and recording, and mixing.

The first stage is spotting, during which the composer consults with the film’s director and producer and views the final cut in order to determine where and when music might be needed. Spotting determines the locations in the film that require musical cues, where and how the music will enter, and its general tempo and emotional color.

  Much of this is left up to the composer, although detailed discussions with a film’s director are not uncommon, especially when the director has strong preferences as to the style of scoring. Sometimes the director will impose a temp track —a temporary musical track derived from a score the director likes—onto the soundtrack of an edited scene, or even the entire film, and ask that the composer create something like the temp track. Not surprisingly, many composers find this stifling.

After the film has been spotted, the music editor prepares a cue sheet. The cue sheet contains a detailed description of each scene’s action requiring music, plus the exact timings of that action. This enables the composer to work knowing the exact timing in minutes, seconds, and frames of each action requiring music. As a result, musical cues can catch the action and enter and end at precisely determined points.

  Once the cue sheet has been prepared, the third step is actual composition of the score. This is done by the composer using a video copy of the film. The video contains a digital time code that displays minutes, seconds, and frames for all the action. Using the cue sheet and video, the composer creates the score, carefully fitting the timing of music and action.
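The minutes-seconds-frames figures the composer works from are straightforward arithmetic on the running frame count. A minimal sketch, assuming standard 24-fps film; the function name is illustrative only:

```python
def frames_to_timecode(total_frames, fps=24):
    """Convert a running frame count to an MM:SS:FF string.

    At 24 fps, one second of screen time is 24 frames and one
    minute is 1,440 frames; divmod peels off each unit in turn.
    """
    minutes, remainder = divmod(total_frames, fps * 60)
    seconds, frames = divmod(remainder, fps)
    return f"{minutes:02d}:{seconds:02d}:{frames:02d}"

# A cue that must catch an action 1,469 frames into the reel:
frames_to_timecode(1469)   # "01:01:05"
```

The same conversion run in reverse lets the sequencer jump the score to any cue-sheet entry on the video's digital time code.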

  Digital programs known as sequencers enable the composer to lock the score onto the video’s digital time code. Once this is done, any scene can be played back, and the computer can call up the score, enabling the composer to check timings.

Tempo adjustments—speeding up or slowing down the music—also can be made by computer to precisely match music with action. The sequencer also can generate a series of clicks that many composers use to establish a desired tempo for a given scene and that is then used as a guide for composition.
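The click series is simply a set of evenly spaced beats derived from the chosen tempo. A minimal sketch of that arithmetic (a hypothetical function, not any particular sequencer's output):

```python
def click_times(bpm, duration_s):
    """Times, in seconds, of metronome clicks spanning a cue.

    At a given beats-per-minute tempo, clicks fall every
    60 / bpm seconds from the start of the cue through its end.
    """
    interval = 60.0 / bpm
    n_clicks = int(duration_s / interval) + 1
    return [round(i * interval, 6) for i in range(n_clicks)]

# A 120-bpm click for a two-second cue: one click every half second.
click_times(120, 2.0)   # [0.0, 0.5, 1.0, 1.5, 2.0]
```

Locking such a series to the video's time code is what lets the composer hear, while writing, exactly where each beat lands against the action.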

  Digital technology also has altered the phase of composition in which the composer demonstrates the score for the director. Digital samplers enable composers to electronically simulate all needed instrumentation in their scores and play the results for the director, who can hear a close approximation of the film’s score-in-progress.

Before the age of samplers, composers demonstrated their scores on the piano, which required that directors be able to understand how the piano performance would translate into a full-bodied instrumentation. The disadvantage of digital sampling is that demonstrations now give directors more input into scoring—an area most are not qualified to handle—because, using a sampler’s keyboard, anyone can easily manipulate the musical characteristics of a composition. Some directors, to their composers’ dislike, find this an irresistible temptation.

 
