Title: This time with feeling: learning expressive musical performance
Abstract: Music generation has generally been focused on either creating scores or interpreting them. We discuss differences between these two problems and propose that, in fact, it may be valuable to work in the space of direct performance generation: jointly predicting the notes and also their expressive timing and dynamics. We consider the significance and qualities of the dataset needed for this. Having identified both a problem domain and characteristics of an appropriate dataset, we show an LSTM-based recurrent network model that subjectively performs quite well on this task. Critically, we provide generated examples. We also include feedback from professional composers and musicians about some of these examples.
Year: 2020
DOI: 10.1007/s00521-018-3758-9
Venue: Neural Computing and Applications
Keywords: Music generation, Deep learning, Recurrent neural networks, Artificial intelligence
DocType: Journal
Volume: 32
Issue: 4
ISSN: 1433-3058
Citations: 1
PageRank: 0.48
References: 0
Authors: 5
Authors (in order):
1. Sageev Oore
2. Ian Simon
3. Sander Dieleman
4. Douglas Eck
5. Karen Simonyan