Title
Audio Caption: Listen And Tell
Abstract
An increasing amount of research has shed light on the machine perception of audio events, most of it concerning detection and classification tasks. However, human-like perception of audio scenes involves not only detecting and classifying audio sounds, but also summarizing the relationships between different audio events. Comparable research exists for image captioning, yet the audio field remains largely unexplored. This paper introduces a manually annotated dataset for audio captioning. Its purpose is to enable the automatic generation of natural sentences describing audio scenes and to bridge the gap between machine perception of audio and of images. The whole dataset is labelled in Mandarin, and translated English annotations are also included. A baseline encoder-decoder model is provided for both English and Mandarin, yielding similar BLEU scores for the two languages: the model can generate understandable, data-related captions based on the dataset.
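The baseline itself is not reproduced in this record; as an illustration only, below is a minimal PyTorch sketch of a GRU encoder-decoder captioner of the kind the abstract describes. The feature dimensions, vocabulary size, and the class name AudioCaptionBaseline are assumptions made for this sketch, not the authors' released code.

import torch
import torch.nn as nn

class AudioCaptionBaseline(nn.Module):
    """Encode an audio feature sequence, then decode a caption token by token."""

    def __init__(self, n_mels=64, hidden=256, vocab_size=5000):
        super().__init__()
        # Encoder: GRU over frame-level audio features (log-mel assumed).
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Decoder: embeds the previous token; its GRU is initialised with
        # the encoder's final hidden state, conditioning text on audio.
        self.embedding = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, mel, captions):
        # mel: (batch, frames, n_mels); captions: (batch, tokens)
        _, h = self.encoder(mel)                 # h: (1, batch, hidden)
        dec_out, _ = self.decoder(self.embedding(captions), h)
        return self.out(dec_out)                 # (batch, tokens, vocab_size)

# Teacher-forced training step: predict each next token and score it with
# cross-entropy against the shifted reference caption (dummy data shown).
model = AudioCaptionBaseline()
mel = torch.randn(8, 500, 64)                    # 8 clips, 500 frames each
caps = torch.randint(0, 5000, (8, 20))           # 8 tokenised captions
logits = model(mel, caps[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 5000), caps[:, 1:].reshape(-1))
loss.backward()

At evaluation time, generated captions would be compared against the reference annotations with corpus-level BLEU, for example via nltk.translate.bleu_score.corpus_bleu.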
Year
2019
Venue
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Keywords
Audio Caption, Audio Databases, Natural Language Generation, Recurrent Neural Networks
Field
Machine perception, Task analysis, Pattern recognition, Computer science, Feature extraction, Natural language processing, Artificial intelligence, Perception, Mandarin Chinese
DocType
Journal
Volume
abs/1902.09254
ISSN
1520-6149
Citations
0
PageRank
0.34
References
12
Authors
3
Name             Order  Citations  PageRank
Mengyue Wu       1      0          4.73
Heinrich Dinkel  2      23         5.79
Kai Yu           3      1082       90.58