Abstract |
---|
We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition. We build and train LSTM networks using approximately 23,000 music transcriptions expressed with a high-level vocabulary (ABC notation), and use them to generate new transcriptions. Our practical aim is to create music transcription models useful in particular contexts of music composition. We present results from three perspectives: 1) at the population level, comparing descriptive statistics of the set of training transcriptions and generated transcriptions; 2) at the individual level, examining how a generated transcription reflects the conventions of a music practice in the training transcriptions (Celtic folk); 3) at the application level, using the system for idea generation in music composition. We make our datasets, software and sound examples open and available: https://github.com/IraKorshunova/folk-rnn. |
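The abstract describes training LSTM networks on transcriptions expressed in a high-level ABC vocabulary. Before any sequence model can be trained, a transcription must be split into discrete tokens. The sketch below shows one plausible way to tokenize an ABC fragment with a regular expression; the token pattern, function name, and example tune are illustrative assumptions, not the actual folk-rnn vocabulary or preprocessing pipeline.

```python
import re

# Hypothetical token pattern (an assumption, not the folk-rnn vocabulary):
# header fields like "M:6/8" or "K:Dmaj", barlines ("|", "|:", ":|"),
# and notes/rests with optional accidental, octave marks, and duration.
TOKEN_RE = re.compile(r"[MKL]:[^\s|]+|:\||\|[\]:|]?|[=^_]?[a-gA-Gz][,']*\d*/?\d*")

def tokenize_abc(transcription: str) -> list:
    """Split an ABC transcription into a flat list of symbolic tokens."""
    return TOKEN_RE.findall(transcription)

tune = "M:6/8 K:Dmaj |:A2d d2e f2d|"
tokens = tokenize_abc(tune)
# tokens: ['M:6/8', 'K:Dmaj', '|:', 'A2', 'd', 'd2', 'e', 'f2', 'd', '|']
```

A token list like this can then be mapped to integer indices to form the input and target sequences for a character- or token-level LSTM language model.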
Year | Venue | Field
---|---|---|
2016 | arXiv: Sound | Transcription (linguistics), Population, Notation, Computer science, Musical composition, Speech recognition, Software, Artificial intelligence, Deep learning, Pop music automation, Vocabulary

DocType | Volume | Citations
---|---|---|
Journal | abs/1604.08723 | 1

PageRank | References | Authors
---|---|---|
0.35 | 0 | 4

Name | Order | Citations | PageRank |
---|---|---|---|
Bob L. Sturm | 1 | 241 | 29.88 |
João Felipe Santos | 2 | 70 | 8.21 |
Oded Ben-Tal | 3 | 1 | 0.69 |
Iryna Korshunova | 4 | 18 | 2.36 |