Musical Effects of Pitch Correction in the Speech Sounds of Two Video Game Characters
Elizabeth Medina-Gray
Ithaca College
The blurry boundary between speech and song has long been a recurring topic of interest in various fields of music study (List 1963; Patel 2008; Deutsch, Henthorn, and Lapidis 2011). In recent decades, pitch correction—using Auto-Tune, for example—has emerged as one means through which music creators can variously transform speech into song (Rings 2019; Flore 2021) and, through heavy use of this tool, signify android or cyborg personas (Stras 2016). What happens, however, when pitch correction, a tool designed for and familiar in music-production contexts, is instead applied in contexts that are framed as speech?
In this paper, I closely examine the pitch-corrected sounds that represent the ostensibly speaking voices of two prominent characters in well-received video games: GLaDOS, the initially benign but ultimately murderous AI character in Portal (Valve 2007), and Fi, the helpful spirit inside the player character’s sword in The Legend of Zelda: Skyward Sword (Nintendo 2011). For both characters, free non-metric rhythms and plain conversational language cast this audio overall as speech rather than song, and in both cases, correction to discrete pitches in standard equal-tempered tuning is one of multiple processing effects that help to audibly frame these characters as machine-like. At the same time, I argue that the predominant use of discrete pitches makes this audio both unique and curiously musical, and this quasi-musicality opens space for novel meanings with respect to these characters, their game worlds, and players’ experiences. My analysis of GLaDOS’s and Fi’s speech sounds focuses on pitch: I consider the sounds’ overall pitch content, the melodic framework of particular lines of dialogue, and comparisons between the pitch content of these speech sounds and each game’s other music and sound effects.
Player Progress and Musical Affect in Early Video Game Music
Alan Elkins
Cleveland Institute of Music, United States of America
Composers of video games from the 1980s and 1990s often used musical cues to communicate information to the player, using stock musical signifiers to differentiate cue types. Scholars have proposed classifications for cues based on the context in which they are used; William Gibbons, in his discussion of Japanese role-playing games, divides music into location-based cues and game-state cues, depending on whether the music is primarily associated with a particular place (e.g., a town or a dungeon) or with a particular type of gameplay sequence (e.g., a battle). Julianne Grasso provides two additional categories related to Gibbons’ second type: event-triggered musical cues, which are linked with in-game occurrences (such as non-interactive story sequences), and task-triggered musical cues, which are associated with something the player is required to do (such as music for puzzle-solving sections).
In this paper, I propose another cue type: progress-based cues, which primarily communicate a sense of beginning, middle, or end relative to the game’s overall structure. I argue that the stylistic markers used by game composers in the 1980s and 1990s often correlate with the temporal location of a cue—sometimes overlapping with location-based or game-state functions, but often independent of geography or gameplay context. Musical strategies for opening areas include the use of a heroic affect, combining features such as compound meter, ostinato on a single pitch, and ascending arpeggios in the melodic line. The music for mid-game areas often features ambiguous or static harmony and frequent repetition of small musical units. Late-game music may employ a number of features, including the Phrygian mode (or simply ↓2) and a paradigm I call the Mountain King schema, which is frequently associated with villains and dangerous situations in games, film, and television.
The notion of progress-based cues provides another dimension through which to view the relationship between musical style and gameplay function. Considering when a given track occurs, in addition to where, can offer new insights into the hermeneutic interpretation of video game music, further informing the analysis of individual cues or entire video game soundtracks.
The Mashups of Mouth Moods: Parody and Intertextuality in Neil Cicierega’s Third Album
Isaac William Smith
Indiana University Jacobs School of Music, United States of America
Mashups are seldom discussed analytically in the scholarly discourse of music theory, and when they are mentioned, it is often in terms of comedy or parody. Neil Cicierega’s Mouth albums – a quartet of albums drawing heavily on Smash Mouth’s “All Star” – transgress their surface-level parodic boundaries and contain unique intertextual relationships to pop music and internet culture that merit unpacking. In this presentation, I will give a brief background of Cicierega’s colorful relationship with the internet, discuss how his third album (Mouth Moods) sidesteps purely parodic interpretations, and provide an inroad to how Cicierega’s humor and recontextualization work together to achieve a unified artistic product.
This presentation aims to shed light on Cicierega’s multi-layered approach to sampling and referentiality, and to broaden the lens through which we examine mashups. First, I will demonstrate how Cicierega creates intertextual connections between his first two albums and Mouth Moods in mosaic mashups – tracks composed of many short samples. He recasts and reuses earlier material not only to deepen the referential connections to popular and internet culture, but also to include self-referentiality as an artistic device. This deviates from the intent of established mosaic mashup artists like DJ Earworm, as Cicierega uses this process to establish a sense of thematic unity within the context of the albums.
Next, I will examine a selection of A+B mashups – the most common kind, usually created by taking vocals from one song and the accompaniment from another – and analyze how diametrically opposed genres can create and resolve musical-lyrical dissonance. This examination will include discussion of how Cicierega’s approach to A+B mashups differs from mashup artists like Girl Talk, both in its comedic effect and its intertextuality. I will then argue that the synthesis of these elements in the album Mouth Moods represents an artistic product separate from the goals of most mashups, and suggest a closer examination of intertextual relationships in other instances of sample-based music.