Title
Coupled Recurrent Models for Polyphonic Music Composition
Abstract
This work describes a novel recurrent model for music composition that accounts for the rich statistical structure of polyphonic music. There are many ways to factor the probability distribution over musical scores; we consider the merits of various approaches and propose a new factorization that decomposes a score into a collection of concurrent, coupled time series: 'parts.' The model we propose borrows ideas from both convolutional and recurrent neural models; we argue that these ideas are natural for capturing music's pitch invariances, temporal structure, and polyphony. We train generative models for homophonic and polyphonic composition on the KernScores dataset (Sapp, 2005), a collection of 2,300 musical scores comprising around 2.8 million notes, spanning the Renaissance to the early 20th century. While evaluation of generative models is known to be hard (Theis et al., 2016), we present careful quantitative results using a unit-adjusted cross entropy metric that is independent of how we factor the distribution over scores. We also present qualitative results using a blind discrimination test.
Year
2018
Venue
ISMIR
Field
Cross entropy, Musical, Computer science, Discrimination testing, Musical composition, Speech recognition, Probability distribution, Factorization, Polyphony, Generative grammar
DocType
Volume
abs/1811.08045
Citations
0
Journal
PageRank
0.34
References
9
Authors
4
Name | Order | Citations | PageRank
John Thickstun | 1 | 0 | 1.01
Zaid Harchaoui | 2 | 781 | 35.17
Dean P. Foster | 3 | 770 | 68.45
Sham Kakade | 4 | 4365 | 282.77