The purpose of this post is to explore some ideas that could potentially prove to be interesting and worthy research topics. There will be holes in it and probably some mistakes so please do raise any interesting or contradictory points.
If you haven't already watched Westworld, then you should. I won't give away any major spoilers, but you should watch it because... it's great. Apart from the adventure and drama, it is also creating an ethical discussion that we need to be having about robotics and AI. But I'm not here to talk about that; I'm here to talk about the soundtrack.

You could say there are two parts to the soundtrack: the original pieces composed by Ramin Djawadi, and the player piano's interpretations of popular songs. I'll be looking at the latter. Westworld uses a player piano (an autonomous, self-playing piano) in the saloon to play these popular songs, which become both diegetic and non-diegetic: sometimes we see the piano in the scene, other times we see short shots of it interlaced in the main narrative sequence.

It's a clever trick, bringing in the emotional affordance of these pop songs and using it to propel the narrative. It also creates a juxtaposition that aids the story. Here we are, hearing these jaunty, honky-tonk melodies in a Western. But we know that these melodies are out of place, out of time. They don't quite belong there. “The show has an anachronistic feel to it,” Ramin Djawadi explained to Vulture. “It’s a Western theme park, and yet it has robots in it, so why not have modern songs? And that’s a metaphor in itself, wrapped up in the overall theme of the show.”

It's a great soundtrack, and it reminded me of an idea put forward by Daniel Levitin on musical categorisation and invariant properties. It's an easy task for our brain to recognise different versions of the same song. We hear Soundgarden's "Black Hole Sun" and the Westworld piano version, and we know it's the same song. Levitin explains in his book how difficult this task is for a computer: you'd need a supercomputer and some complex programming. Now, the "Black Hole Sun" cover retains the melody and rhythm, but it is played on a piano.
Compare that to the "Paint It, Black" cover: while the melodic and rhythmic motifs are retained, we are treated to a vastly different arrangement and orchestration.
We can identify and categorise a piece as the same song even when the intervals, timbre, tempo, pitch and key are changed. Think about that for a moment. "Tune recognition involves a number of complex neural computations interacting with memory. It requires that our brains ignore certain features while we focus only on features that are invariant from one listening to the next—and in this way, extract invariant properties of a song." (Levitin, 2006)

Levitin talks in detail about the different memory models and the supporting evidence for each. To summarise quickly: the Record-Keeping model holds that memory is like a video camera, recording with high fidelity, whereas the Constructivist model holds that we ignore irrelevant details and record only the gist of what happens. There is a third, which now has the general consensus: the Multiple-Trace Memory model, a hybrid of the two. I refer to Levitin's chapter on the topic of categorisation. "Music works because we remember the tones we have just heard and are relating them to the ones that are just now being played. Those groups of tones—phrases—might come up later in the piece in a variation or transposition that tickles our memory system at the same time as it activates our emotional centers." (Levitin, 2006)

Considering this, it may actually be easier to develop the level of AI shown in Westworld than to build a computer that can do this as well as our brains.

So, regarding our emotional centres. When you hear Radiohead, the Rolling Stones or Amy Winehouse (to name a few) it may take a moment to recognise it, or maybe you just feel a vague sense of familiarity. At first you're not expecting to hear these pop songs in this scenario, and while you eventually become accustomed to it, there is still a 'tickling' sensation as you hear and recall what the original track is. Levitin states in his book that the brain builds a model of expectation and is then delighted when these models are violated.
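To make Levitin's point concrete, here is a deliberately crude sketch of one way a program might recognise "the same song" despite a change of key: reduce each melody to a 12-bin pitch-class histogram (discarding octave and note order), then compare the histograms under every possible transposition. All the names and note data below are invented for illustration; real cover-song detection uses far richer chroma features over time, but the principle of extracting an invariant and ignoring the rest is the same.

```python
def chroma_histogram(midi_notes):
    """12-bin pitch-class histogram: octave and note order are ignored."""
    counts = [0] * 12
    for note in midi_notes:
        counts[note % 12] += 1
    total = sum(counts)
    return [c / total for c in counts]

def transposition_invariant_distance(melody_a, melody_b):
    """Smallest histogram distance over all 12 key shifts.

    A melody and its transposition score 0.0: the key is treated as a
    variable property, while the pitch-class profile is the invariant.
    """
    ha, hb = chroma_histogram(melody_a), chroma_histogram(melody_b)
    best = float("inf")
    for shift in range(12):
        rotated = ha[-shift:] + ha[:-shift]  # rotate histogram by `shift` semitones
        best = min(best, sum(abs(a - b) for a, b in zip(rotated, hb)))
    return best

tune = [60, 62, 64, 65, 67, 67, 69, 65]        # a toy melody in C
transposed = [n + 3 for n in tune]             # the "same song", three semitones up
unrelated = [61, 61, 66, 70, 58, 63, 71, 68]   # a different set of notes

print(transposition_invariant_distance(tune, transposed))  # 0.0 — recognised as the same
print(transposition_invariant_distance(tune, unrelated))   # > 0.0
```

Even this toy version hints at why the full task is hard: it is already insensitive to key, but a real system would also have to ignore timbre, tempo and arrangement while keeping the melodic gist, which is exactly the trade-off the memory models above describe.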
"These violent delights have violent ends." Each time we hear a pop song in Westworld, it violates our memory model of that song, and our brain is excited by it. As previously mentioned, we bring in our own affordances, our own emotional connections to these tracks, but is the lyrical connection an important enhancement?

There's a scene involving a house of decadence, where a character is questioning who they are and what their purpose is. It's accompanied by NIN's "Something I Can Never Have". I knew this, and I brought in my affordance. It did enhance the scene for me, bringing in feelings and thoughts from another place and using them to place me in the scene, in the character's mind. But I'm wondering if the canonical versions of these tracks would work as well in comparison to the player piano versions, or would this bring a disrupted connection between sound and image, a weaker audio-visual contract (Chion, 1994)? Would there be a competing narrative between the lyrics of a track and the scene itself? Is the lyrical hook of a track enough to buy into the idea?

I think I'm crazy, maybe... I just want something I can never have... We died a hundred times... and I go back to black...

When I hear "Paint It, Black" I make an emotional connection to war (specifically Vietnam) and a sense of rebellion (not necessarily to the lyrics), which suited the character's needs at the time. Or is the affordance we bring to the track more important? The soundtrack is causing a bit of a stir, and fans are looking for hidden meanings within the songs. This demonstrates the area of interest here: the connections we make, have, and take with us wherever these culture-rich pop songs appear.

References:
http://www.vulture.com/2016/10/westworld-modern-songs.html
http://daniellevitin.com/publicpage/books/this-is-your-brain-on-music/
https://cup.columbia.edu/book/audio-vision/9780231078993
I am going to be writing about and exploring an example of how sound design influences us. Quite often we're not hearing the original sound recording; we're listening to sound that is specifically designed to make us feel the way the director wants us to feel. I am going to take a look at possibly the most iconic sound effect and why it worked so well, and I will give a short demonstration of how we encode and draw information and emotion from sound.

The Lightsabre. So well known I'm not even going to add a picture of it here. It's familiar and yet also refreshing and exciting. Most cultures are familiar with the sword as a weapon and its expected functions (swipe, stab, spinning it with a flick of the wrist to look cool), and a lot of this we experience through film and TV. I'm not sure many people are still crusading and having sword fights anymore. Most people can do an impression of a Lightsabre in great detail: the hum, swooshes and clashes it makes. It's a cool sound and is probably cemented in human culture now. If one day a real one is created, we will all be really disappointed if it doesn't sound exactly like the movies.

It's a great piece of sound design because, like the visual aspects, the audio is also familiar yet new and exciting. This is a key part of sound design: the audience has to believe that the sound and visuals are connected. Its sound is convincing because it has a basis in real-world sounds, with expected acoustic properties. When a Lightsabre is swung around, it reacts in accordance with our acoustical expectations. If I swung a hollow metal pipe, you'd know what kind of sound it would make, because of the model our brains have built for this type of material, size and movement. This extends to non-existent objects. When you see a Lightsabre, you can be convinced that this is the sound it's supposed to make.

Ben Burtt first recorded the raw sound of a stalled electrical motor (“Grrrrrr”).
He then added the sound of a television power supply for high-frequency sweetening (“Hummm”). However, to encode the recording with a spatial component, Burtt replayed these sounds at half-speed through an amplifier to a speaker and re-recorded them using a shotgun microphone, which he wielded like a sword at various angles in front of the speaker. Because this microphone had a 9-lobed pickup pattern—meaning it was highly directional in its focus—the re-recorded sound floated on and off axis, offering the shimmering oscillations and electronic “whooshes” of the Jedi light saber. (Whittington, 2007)

Drawing on the familiar sound of an electrical motor and a TV power supply, the physical acoustics of the Doppler effect (swinging the microphone around the sound source for the sensation of movement), and the swoops and clashes reminiscent of the sword fights we've seen in movies and played out as children, this all gives the audience enough real-world reference to accept the abstraction of a Lightsabre.

We judge the acoustical properties of a sound and match those properties against a memory bank of matching objects and/or events. We can call this causal listening: listening for the purpose of gaining information about the sound's source (Chion, 1990). Even if the sound does not match the sound source, we still connect the two objects through synchresis. Synchresis is defined by Chion (1990) as “the spontaneous and irresistible weld produced between a particular auditory phenomenon and visual phenomenon when they occur at the same time.” Any sound that plays at the same time as an event creates a sensation that the two are related, even if they are unrelated. If we saw the Lightsabre powered up and at the same moment heard the sound of an elephant's roar, our brains would weld these together, but we would notice the conflict. Another example is the whoops and beeps in comedy films as a character falls over.
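Burtt's half-speed trick has a simple physical basis: feeding the same samples out at half the clock rate doubles every wavelength, which drops the pitch by an octave (and doubles the duration). A minimal sketch of that idea, using a pure tone and a crude zero-crossing pitch estimator I've invented for illustration (the sample rate and frequencies are arbitrary):

```python
import math

SAMPLE_RATE = 8000  # Hz — an arbitrary, illustrative value

def sine(freq_hz, seconds, sr=SAMPLE_RATE):
    """Generate a pure tone as a list of samples."""
    return [math.sin(2 * math.pi * freq_hz * n / sr) for n in range(int(seconds * sr))]

def zero_crossing_freq(samples, sr=SAMPLE_RATE):
    """Rough pitch estimate: a full cycle contains two zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0 <= b) or (b < 0 <= a))
    return crossings * sr / (2 * len(samples))

tone = sine(440, 1.0)  # one second of A440

# "Half-speed" playback: the same samples fed out at half the clock rate.
# With a fixed clock, the equivalent is to repeat every sample once,
# stretching the waveform to twice its length.
half_speed = [s for s in tone for _ in (0, 1)]

print(round(zero_crossing_freq(tone)))        # ~440 Hz
print(round(zero_crossing_freq(half_speed)))  # ~220 Hz — one octave lower
```

This is why the motor growl and TV hum came back deeper and slower, ready for the microphone-swinging Doppler pass described above.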
We may relate a metallic crashing sound to a character falling over in a comedic scene, but our brain will not accept that the person, made of bone and flesh, caused that metallic crashing sound. We would recognise that the sound is that of a trash can being dropped. We can use this phenomenon to enhance our sound design, either by meeting expectations or by disregarding them completely.

Here we have a short clip, produced by the increasingly talented Rob, to which I've added some audio to demonstrate how we encode and draw information into and from sound. First, mute your device and watch the clip. What sound would you expect? What kind of feeling? Then have a listen: did it meet expectations or go against them?

We see a small desk lamp, but already through synchresis we have welded the Godzilla-like percussion to the movement of the lamp. We also have the tense strings and horns drawing on a common horror/thriller sensation. As the lamp looks up we hear its squeak. This was originally the sound of a squeaky door opening, but it has been manipulated to a higher pitch in order to match the expectancy of the audience (we expect small things = high pitch). A small desk lamp wouldn't create the same lower-pitched sound as a door. I mean, it probably wouldn't make much of a sound at all in reality, but hey, you're watching a movie to be entertained, so embrace the hyper-real sound.

Now, to break the horror/thriller set-up, we make the lamp cute and playful... how? We add the sound of a dog panting and a squeaky toy. Through our cultural affordance of these sounds, they bring up associations of cute, happy and playful, and we apply these to the lamp.

This is an example of how sound design can influence us in so many ways. Quite often we're not hearing the original sound recording; we're listening to a sound designed to make us feel the way the director wants us to.
References:
The Birth of the Lightsabre
Chion, M. (1990). Audio-Vision
Whittington, W. (2007). Sound Design and Science Fiction
Lee Clarke
Exploring ideas, looking for questions.