Title
MusicYOLO: A Vision-Based Framework for Automatic Singing Transcription
Abstract
Automatic singing transcription (AST), the task of inferring note onsets, offsets, and pitches from singing audio, is of great significance in music information retrieval. Most AST models use convolutional neural networks to extract spectral features and predict onset and offset moments separately: frame-level probabilities are inferred first, and note-level transcription results are then obtained through post-processing. In this paper, a new AST framework called MusicYOLO is proposed, which obtains note-level transcription results directly. Onset/offset detection is based on the object detection model YOLOX, and pitch labeling is completed by a spectrogram peak search. Compared with previous methods, MusicYOLO detects note objects rather than isolated onset/offset moments, which greatly enhances transcription performance. On the sight-singing vocal dataset (SSVD) established in this paper, MusicYOLO achieves an 84.60% transcription F1-score, a state-of-the-art result.
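The abstract states that pitch labeling is performed by a spectrogram peak search inside each detected note region, but does not spell out the procedure. The sketch below is a minimal illustrative interpretation, not the authors' implementation: it assumes a magnitude spectrogram spec of shape (freq_bins, frames), a per-bin frequency axis freqs in Hz, and onset/offset frame indices derived from a YOLOX bounding box; the function name, the search band, and the MIDI rounding are all assumptions made for the example.

import numpy as np

def estimate_note_pitch(spec, freqs, onset, offset):
    """Hypothetical spectrogram peak search for one detected note.

    spec   : magnitude spectrogram, shape (n_bins, n_frames)   [assumed]
    freqs  : center frequency of each bin in Hz                 [assumed]
    onset  : first frame of the detected note (from the box)    [assumed]
    offset : last frame of the detected note                    [assumed]
    """
    # Average the magnitude spectrum over the note's frames.
    note_spec = spec[:, onset:offset + 1].mean(axis=1)

    # Restrict the search to a plausible singing range (assumed 80 Hz to 1 kHz).
    band = np.where((freqs >= 80.0) & (freqs <= 1000.0))[0]

    # Pick the bin with the largest average magnitude in that band.
    peak_bin = band[np.argmax(note_spec[band])]
    f0 = float(freqs[peak_bin])

    # Convert the peak frequency to a rounded MIDI note number.
    midi = int(round(69 + 12 * np.log2(f0 / 440.0)))
    return f0, midi

In this reading, the object detector supplies the note's time extent and the peak search only has to assign a single pitch label per note, which is consistent with the note-object formulation described in the abstract.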
Year
2023
DOI
10.1109/TASLP.2022.3221005
Venue
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Keywords
AST, note object detection, spectrogram peak search
DocType
Journal
Volume
31
Issue
1
ISSN
2329-9290
Citations
0
PageRank
0.34
References
0
Authors
5
Name, Order, Citations, PageRank
Xianke Wang, 1, 0, 2.03
Bowen Tian, 2, 0, 0.34
Weiming Yang, 3, 0, 0.34
Wei Xu, 4, 41, 1.47
Cheng Wenqing, 5, 38, 11.20