Abstract | ||
---|---|---|
Music annotation is the task of automatically assigning a set of semantically meaningful text labels to a music piece, which is valuable to a variety of music applications such as music search, indexing, recommendation and management. In this paper, we propose a novel music annotation method that integrates feature-to-label correspondence, label smoothness and local-to-global annotation consistency in a conditional random field (CRF) model with label-specific feature learning. To annotate a music piece, we first divide it into a set of acoustically homogeneous segments and infer the relevant labels of every segment using the CRF models of the respective labels. These local annotations are then aggregated to obtain the holistic annotation of the music. Experiments on the public CAL500 music annotation dataset demonstrate the effectiveness of the proposed method. |
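The abstract's local-to-global step (per-segment label inference followed by aggregation into a holistic annotation) can be sketched as follows. This is a minimal illustration with hypothetical per-segment label scores and simple mean-pooling with a threshold; the paper's CRF-based segment inference and its actual aggregation rule are not reproduced here.

```python
# Hypothetical sketch: aggregate per-segment label scores into a
# holistic annotation for the whole music piece (not the paper's
# actual CRF inference or aggregation rule).

def aggregate_annotations(segment_scores, threshold=0.5):
    """Average each label's score over all segments and keep the
    labels whose mean score reaches the threshold."""
    labels = segment_scores[0].keys()
    holistic = {
        label: sum(s[label] for s in segment_scores) / len(segment_scores)
        for label in labels
    }
    return sorted(l for l, v in holistic.items() if v >= threshold)

# Hypothetical scores for three acoustically homogeneous segments.
segments = [
    {"jazz": 0.9, "calm": 0.7, "vocal": 0.2},
    {"jazz": 0.8, "calm": 0.4, "vocal": 0.1},
    {"jazz": 0.7, "calm": 0.6, "vocal": 0.3},
]
print(aggregate_annotations(segments))  # -> ['calm', 'jazz']
```

With the default threshold, "jazz" (mean 0.8) and "calm" (mean ~0.57) survive while "vocal" (mean 0.2) is dropped; raising the threshold keeps only the most consistently supported labels.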
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/ICPR.2018.8545335 | 2018 24th International Conference on Pattern Recognition (ICPR) |
Field | DocType | ISSN |
---|---|---|
Conditional random field, Annotation, Pattern recognition, Computer science, Search engine indexing, Feature extraction, Image segmentation, Artificial intelligence, Hidden Markov model, Semantics, Feature learning | Conference | 1051-4651 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors | ||
---|---|---|
3 | ||
Name | Order | Citations | PageRank |
---|---|---|---|
Qianqian Wang | 1 | 132 | 26.59 |
Yu Xiong | 2 | 0 | 0.34 |
Feng Su | 3 | 170 | 18.63 |