Abstract
NoiseSpeech is a compositional device in which sound is digitally manipulated with the intention of evoking the sound qualities of unintelligible speech (Dean 2005). Speech is characterized by "rapidly changing broadband sounds" (Zatorre, Belin, and Penhune 2002), whereas music, particularly tonal music, changes more slowly and narrowly in frequency content. As Zatorre and colleagues argue, this distinction may be reflected in better temporal resolution in the left auditory cortex and better spectral resolution in the right, so that perception is adapted to both ranges and extremes of sonic stimuli. NoiseSpeech is constructed either by applying the formant structure (that is, spectral peak content) of speech to noise or other sounds, or by distorting speech sounds such that they no longer form identifiable phonemes or words. The resultant hybrid is an artistic device that, we argue, may owe its force to an encapsulation of the affective qualities of human speech, while intentionally stripping the sounds of any semantic content. In this article, we present an empirical investigation of listener perceptions of NoiseSpeech, demonstrating that non-specialist listeners hear such sounds as similar to each other and to unaltered speech.
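
The sketch below illustrates the first construction route mentioned in the abstract: estimating the short-time formant (spectral envelope) structure of a speech recording and imposing it on white noise. It is not the authors' implementation, which the article does not specify at this level; the choice of libraries (librosa, scipy, soundfile), the input file name, frame size, hop size, and LPC order are all illustrative assumptions.

```python
# Minimal sketch, assuming LPC-based cross-synthesis: shape white noise
# with the short-time spectral envelope (formants) of a speech recording.
# "speech.wav" is a hypothetical input file; parameters are illustrative.
import numpy as np
import scipy.signal
import librosa
import soundfile as sf

speech, sr = librosa.load("speech.wav", sr=None, mono=True)  # hypothetical path
noise = np.random.default_rng(0).standard_normal(len(speech))

frame_len, hop = 1024, 512
window = np.hanning(frame_len)
out = np.zeros(len(speech) + frame_len)

for start in range(0, len(speech) - frame_len, hop):
    frame = speech[start:start + frame_len] * window
    if np.max(np.abs(frame)) < 1e-6:
        continue  # skip near-silent frames so the LPC solve stays well-conditioned
    a = librosa.lpc(frame, order=16)                      # all-pole model of the formant envelope
    excitation = noise[start:start + frame_len] * window
    shaped = scipy.signal.lfilter([1.0], a, excitation)   # noise filtered through the speech envelope
    out[start:start + frame_len] += shaped                # overlap-add the shaped frames

out = out[:len(speech)]
out /= np.max(np.abs(out)) + 1e-12                        # normalize to avoid clipping
sf.write("noisespeech_sketch.wav", out, sr)
```

The result retains the time-varying formant contour of the speech while the noise excitation removes pitch and phonemic cues, which is one plausible way to approximate the effect the abstract describes.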
| Original language | English |
| --- | --- |
| Number of pages | 11 |
| Journal | Computer Music Journal |
| Publication status | Published - 2009 |
Open Access - Access Right Statement
©2009

Keywords
- auditory perception
- noise
- phonemics
- sound
- speech
- speech perception