Generative live music-making using autoregressive time series models: melodies and beats

Research output: Contribution to journal › Article › peer-review

Abstract

Autoregressive time series analysis (TSA) of music can model aspects of its acoustic features, its structural sequencing, and consequent listeners' perceptions. This article concerns the generation of keyboard music by repeatedly simulating from uni- and multivariate TSA models of live-performed event pitch, key velocity (which influences loudness), duration, and inter-onset interval (specifying rhythmic structure). The MAX coding platform receives performed, random, or preformed note sequences and transfers them via a computer socket to the statistical platform R, in which time series models of long segments of the data streams are obtained. Unlike many predecessors, the system exploits both univariate (e.g., pitch alone) and multivariate (pitch, velocity, note duration, and inter-onset intervals taken jointly) modelling. Simulations from the models are played by MAX on a MIDI instrument. Data retention (memory) allows delayed or immediate sounding of newly generated melodic material, amplifying surprise. The resultant “Beat and Note Generator” (BANG) can function in collaboration with a MIDI-instrument performer, who can also use the BANG interface, or autonomously. It can generate relatively large-scale structures (commonly chunks of 200 events) or shorter structures such as beats and glitches like those of electronic dance music.
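The article's models are fitted in R and driven from MAX; as a rough illustration of the univariate case only, the sketch below fits a first-order autoregressive model to a performed pitch sequence and simulates a continuation from it. This is a hypothetical minimal example in Python, not the authors' implementation: the function names, the AR(1) order, and the toy melody are all assumptions made for illustration.

```python
import random

def fit_ar1(series):
    """Least-squares fit of a first-order autoregressive model:
    x[t] - mu = phi * (x[t-1] - mu) + e[t], e[t] ~ N(0, sigma^2).
    (Illustrative; the article's system fits richer models in R.)"""
    mu = sum(series) / len(series)
    d = [x - mu for x in series]
    num = sum(a * b for a, b in zip(d[1:], d[:-1]))
    den = sum(a * a for a in d[:-1])
    phi = num / den
    resid = [d[t] - phi * d[t - 1] for t in range(1, len(series))]
    sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
    return mu, phi, sigma

def simulate_pitches(mu, phi, sigma, start, n, seed=0):
    """Simulate n new MIDI pitches from the fitted AR(1) model,
    rounding to the nearest semitone and clamping to 0-127."""
    rng = random.Random(seed)
    out, x = [], float(start)
    for _ in range(n):
        x = mu + phi * (x - mu) + rng.gauss(0.0, sigma)
        out.append(max(0, min(127, round(x))))
    return out

# Toy "performed" melody (MIDI note numbers) and a generated continuation.
performed = [60, 62, 64, 62, 60, 59, 60, 64, 67, 65, 64, 62, 60, 62, 64, 60]
mu, phi, sigma = fit_ar1(performed)
generated = simulate_pitches(mu, phi, sigma, performed[-1], 8)
```

In the system described above, the analogous simulation would also cover velocity, duration, and inter-onset interval, taken jointly in the multivariate case, with the generated events sent back to MAX for playback on a MIDI instrument.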
Original language: English
Number of pages: 18
Journal: Journal of Creative Music Systems
Volume: 1
Issue number: 2
Publication status: Published - 2017

Keywords

  • time series analysis
  • keyboard instrument music
  • improvisation (music)
  • algorithms
  • sound
