Title
An Associative Memorization Architecture of Extracted Musical Features from Audio Signals by Deep Learning Architecture.
Abstract
In this paper, we develop an associative memorization architecture that learns musical features from the time-sequential data of music audio signals. The architecture is built using a deep learning framework. The challenging goal of our research is a new composition system that automatically creates new music based on existing music. How does a human composer create musical pieces? Generally speaking, a piece of music emerges from a cyclic process of analyzing and re-synthesizing musical features. This process can be simulated by learning models based on Artificial Neural Network (ANN) architectures. The first and critical problem is how to describe the music data, because the description format has a great influence on learning performance and function. Most related works adopt symbolic representations of music data. However, we believe that human composers never treat a piece of music as a symbol; therefore, raw music audio signals are input to our system. The constructed associative model memorizes the musical features of the audio signals and regenerates sequential data of the music. Based on experimental results on memorizing music audio data, we verify the performance and effectiveness of our system.
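The record does not include the authors' code. As a rough, hypothetical illustration of the associative-memory idea behind a Restricted Boltzmann Machine (one of the paper's keywords), the sketch below trains a tiny binary RBM with mean-field contrastive divergence (CD-1) to memorize two feature patterns, then recalls a stored pattern from a corrupted input. All class names, sizes, and hyperparameters here are our own assumptions, not the authors' implementation, and the 6-bit patterns merely stand in for extracted audio feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Minimal binary RBM trained with mean-field CD-1 (illustrative only)."""

    def __init__(self, n_visible, n_hidden):
        self.W = 0.1 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible bias
        self.c = np.zeros(n_hidden)   # hidden bias

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def _hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.c)

    def _visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b)

    def train(self, data, epochs=2000, lr=0.2):
        for _ in range(epochs):
            ph = self._hidden_probs(data)            # positive phase
            pv = self._visible_probs(ph)             # one reconstruction step
            ph2 = self._hidden_probs(pv)             # negative phase
            # CD-1 gradient updates (mean-field approximation)
            self.W += lr * (data.T @ ph - pv.T @ ph2) / len(data)
            self.b += lr * (data - pv).mean(axis=0)
            self.c += lr * (ph - ph2).mean(axis=0)

    def reconstruct(self, v):
        """Associative recall: map a (possibly corrupted) pattern back."""
        return self._visible_probs(self._hidden_probs(v))

# Two toy "feature vectors" the RBM should memorize.
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
rbm = RBM(n_visible=6, n_hidden=4)
rbm.train(patterns)

# Present a corrupted version of the first pattern and recall it.
noisy = np.array([1, 1, 0, 0, 0, 0], dtype=float)
recalled = (rbm.reconstruct(noisy) > 0.5).astype(int)
print(recalled)
```

In the paper's setting the visible units would carry (much longer) feature vectors extracted from audio frames, and the recall step corresponds to regenerating the sequential data of the memorized piece.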
Year: 2014
DOI: 10.1016/j.procs.2014.09.032
Venue: Procedia Computer Science
Keywords: automatic music composition, algorithmic composition, machine learning, deep learning, Restricted Boltzmann Machine
DocType: Conference
Volume: 36
ISSN: 1877-0509
Citations: 0
PageRank: 0.34
References: 7
Authors: 6
Name                Order  Citations  PageRank
Tadaaki Niwa        1      0          0.34
Keitaro Naruse      2      47         19.98
Ryosuke Ooe         3      0          1.01
Masahiro Kinoshita  4      0          0.34
Tamotsu Mitamura    5      5          2.68
Takashi Kawakami    6      8          6.11