Title
How Low Can You Go? Reducing Frequency and Time Resolution in Current CNN Architectures for Music Auto-tagging
Abstract
Automatic tagging of music is an important research topic in Music Information Retrieval, and audio analysis algorithms proposed for this task have improved with advances in deep learning. In particular, many state-of-the-art systems use Convolutional Neural Networks and operate on mel-spectrogram representations of the audio. In this paper, we compare commonly used mel-spectrogram representations and evaluate the model performance that can be achieved by reducing the input size, both in terms of fewer frequency bands and lower frame rates. We use the MagnaTagaTune dataset for comprehensive performance comparisons and then compare selected configurations on the larger Million Song Dataset. The results of this study can serve researchers and practitioners in their trade-off decisions between model accuracy, data storage size, and training and inference times.
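The trade-off the abstract describes is driven by how the mel-band count and the hop size jointly set the dimensions of the CNN input. A minimal sketch (the function name and the example clip length and sampling rate are illustrative assumptions, not taken from the paper) of that size arithmetic:

```python
# Hypothetical sketch (not the paper's code): estimate the CNN input size
# for one audio clip given its duration, sampling rate, STFT hop size,
# and number of mel frequency bands.
def mel_input_shape(duration_s, sample_rate, hop_length, n_mels):
    """Return (n_mels, n_frames) for a mel-spectrogram of the clip."""
    n_samples = int(duration_s * sample_rate)
    # One frame per hop, plus one frame covering the final partial window.
    n_frames = n_samples // hop_length + 1
    return (n_mels, n_frames)

# Example: a 30 s clip at 16 kHz. Halving the mel bands and doubling the
# hop size shrinks the input (and storage) by roughly a factor of four.
baseline = mel_input_shape(30, 16000, hop_length=512, n_mels=128)   # (128, 938)
reduced  = mel_input_shape(30, 16000, hop_length=1024, n_mels=64)   # (64, 469)
```

Reducing either axis lowers storage and speeds up training and inference, at a possible cost in tagging accuracy, which is the trade-off the paper evaluates.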
Year: 2020
DOI: 10.23919/Eusipco47968.2020.9287769
Venue: 2020 28th European Signal Processing Conference (EUSIPCO)
Keywords: music auto-tagging, audio classification, convolutional neural networks
DocType: Conference
ISSN: 2219-5491
ISBN: 978-1-7281-5001-7
Citations: 0
PageRank: 0.34
References: 6
Authors: 5
Name             Order  Citations  PageRank
Andres Ferraro   1      7          4.64
Dmitry Bogdanov  2      236        20.72
Xavier Serra     3      10141      18.93
Jay Ho Jeon      4      0          0.34
Jason Yoon       5      0          0.34