
Breakbeats and Backshifts: A Time Series Analysis of Musical Groove

Karter Mycroft Harmon, Indiana University - Indiana Business Research Center; Indiana University Bloomington, School of Public & Environmental Affairs (SPEA)



In nearly all human musical traditions, repetitive patterns of amplitude are employed to create specific rhythms. Interactions between multiple rhythmic signals (instruments) can give rise to a phenomenon known as the “groove” — a pronounced cycle of fluctuating loudness and/or timing, which conveys an enjoyable subjective quality to listeners. Understanding the forces which give rise to and govern the groove is a key focus of centuries-old music theory as well as modern digital signal processing.

Breakbeats are useful for analysis of musical grooves due to their fast, syncopated phrases (which typically repeat every one or two bars), the interplay between drums and bass, and the nuanced fluctuations in loudness which result in exciting, danceable rhythms. These aspects have contributed to the popularity of breakbeats as the rhythmic core of many diverse styles of contemporary music. However, the characteristic groove of early jazz and funk breakbeats can be difficult to emulate using modern digital sampling and composition technology.

Time series analysis provides a robust set of well-tested methods for modeling the behavior of signals across time, and these methods can be readily applied to signals representing the amplitude of recorded music. In this paper I demonstrate that time series methodology, applied to datasets of the amplitude of musical signals, can be used to quantitatively model and estimate the groove, where “groove” is defined as the variation in relative loudness from note to note in a piece of recorded music.
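To illustrate the kind of amplitude dataset this definition implies, the sketch below collapses a mono recording into one RMS loudness value per sixteenth note. The tempo, the note division, and the NumPy windowing approach are assumptions made for the example, not the paper's exact procedure:

```python
import numpy as np

def note_amplitude_series(audio, sr, bpm, division=4):
    """Collapse a mono signal into one RMS loudness value per note slot
    (default: sixteenth notes, i.e. 4 slots per beat).

    Illustrative only: `bpm` and `division` must be supplied by the
    analyst; real recordings would also need onset/tempo alignment.
    """
    samples_per_note = int(sr * 60.0 / bpm / division)
    n_notes = len(audio) // samples_per_note
    trimmed = audio[: n_notes * samples_per_note]
    windows = trimmed.reshape(n_notes, samples_per_note)
    return np.sqrt(np.mean(windows ** 2, axis=1))  # RMS per note slot

# Example: 4 seconds (two 4/4 bars at 120 BPM) of a synthetic signal
# whose loudness pulses twice per second, standing in for a drum part.
sr, bpm = 22050, 120
t = np.linspace(0, 4.0, sr * 4, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
series = note_amplitude_series(audio, sr, bpm)
print(len(series))  # 32 sixteenth-note slots in two bars
```

The resulting `series` is the discrete-time loudness sequence that the models below treat as the dependent variable.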

I first present a procedure for resampling recordings of musical instruments in a manner conducive to discrete-time signal processing for musical analysis. I then present several theoretical models of recorded music where the amplitude of an instrument at time t is a function of its past values, the exogenous input of other instruments, and stochastic error. Several different models are estimated and compared using criteria such as AIC, BIC, and explanatory-variable parsimony. Based on these comparisons, I suggest a seasonal autoregressive integrated moving-average model with explanatory variables (SARIMAX) as an efficient modeling strategy for this type of data. I also estimate a simple application of the model using amplitude datasets derived from recorded audio. I demonstrate how the model can be used to determine certain features of a recorded groove, such as the relative influence of specific divisions of musical time on the drummer’s overall dynamics.

Finally, I demonstrate that these data and modeling procedures can generate predicted values of musical amplitudes in a statistically robust way. I close with a discussion of the implications of my results in terms of statistics and music theory, the generalizability of the selected model, and potential applications in fields such as electronic music production and sound design. By quantitatively modeling and predicting the groove produced by human musicians, “humanization” of electronic instruments might be achieved in a more robust manner than the simple randomization procedures currently employed.


About this eJournal

Few fields have attracted as much attention in recent years as the study of music and the brain. Intelligence, broadly defined and conceived, as it converges with music is one of the areas of discussion for this eJournal. All interdisciplinary areas of enquiry in this field are invited, including: music and cognition, music and neuroscience, music education and the brain, music and evolutionary biology, music and communication, music perception and cognition, and the neurological processes engaged during musical activity, such as auditory memory and its encoding. The scope of the eJournal is broad, ranging from the most specific cognitive processes - such as how the mind monitors the beat - to broader questions of how familiar and unfamiliar music is distinguished. This eJournal provides a shared platform for research that draws on both cognitive neuroscience and the music social sciences.

Editors: Janet R. Barrett, Northwestern University, Philip Brunelle, VocalEssence, Victor Coelho, Boston University, Steven Cornelius, University of Massachusetts Boston, Richard Cornell, Boston University, and Betty Anne Younker, University of Western Ontario




Distributed by

Humanities Network (HUM), a division of Social Science Electronic Publishing (SSEP) and Social Science Research Network (SSRN)




Advisory Board

Music & the Mind eJournal

Janet R. Barrett, Professor of Music Education, University of Illinois at Urbana-Champaign

Philip Brunelle, Artistic Director and Founder, VocalEssence; Vice President, International Federation for Choral Music (IFCM)

Victor Coelho, Professor of Music, Musicology and Ethnomusicology, Boston University

Steven Cornelius, Lecturer, University of Massachusetts Boston; Visiting Scholar, Phillips Academy

Richard Cornell, Professor of Music, College of Fine Arts; Director ad interim, School of Music, Boston University

Betty Anne Younker, Professor of Music, Music Education; Dean, Don Wright Faculty of Music, University of Western Ontario