Title
Music transcription modelling and composition using deep learning.
Abstract
We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition. We build and train LSTM networks using approximately 23,000 music transcriptions expressed with a high-level vocabulary (ABC notation), and use them to generate new transcriptions. Our practical aim is to create music transcription models useful in particular contexts of music composition. We present results from three perspectives: 1) at the population level, comparing descriptive statistics of the set of training transcriptions and generated transcriptions; 2) at the individual level, examining how a generated transcription reflects the conventions of a music practice in the training transcriptions (Celtic folk); 3) at the application level, using the system for idea generation in music composition. We make our datasets, software and sound examples open and available: https://github.com/IraKorshunova/folk-rnn.
Year: 2016
Venue: arXiv: Sound
Field: Transcription (linguistics), Population, Notation, Computer science, Musical composition, Speech recognition, Software, Artificial intelligence, Deep learning, Pop music automation, Vocabulary
DocType:
Volume: abs/1604.08723
Citations: 1
Journal:
PageRank: 0.35
References: 0
Authors: 4
Name | Order | Citations | PageRank
Bob L. Sturm | 1 | 241 | 29.88
João Felipe Santos | 2 | 70 | 8.21
Oded Ben-Tal | 3 | 1 | 0.69
Iryna Korshunova | 4 | 18 | 2.36