Abstract |
---|
With the ever-increasing number of car-mounted electronic devices that are accessed, managed, and controlled with smartphones, car apps are becoming an important part of the automotive industry. Audio classification is a key component of car apps, serving as a front-end technology that enables human-app interaction. Existing approaches to audio classification, however, fall short because the unique and time-varying audio characteristics of car environments are not appropriately taken into account. Leveraging recent advances in mobile sensing technology that allow for effective and accurate driving environment detection, in this paper we develop an audio classification framework for mobile apps that categorizes an audio stream into music, speech, speech+music, and noise, adaptively depending on the driving environment. A case study is performed with four different driving environments, i.e., highway, local road, crowded city, and stopped vehicle. More than 420 minutes of audio data are collected, including various genres of music, speech, speech+music, and noise from the driving environments. The results demonstrate that, compared with a non-adaptive approach in our experimental settings, the proposed approach improves the average classification accuracy by up to 166% and 64% for speech and speech+music, respectively. |
Year | DOI | Venue |
---|---|---|
2017 | 10.1145/3123266.3123397 | MM '17: ACM Multimedia Conference, Mountain View, California, USA, October 2017 |
Keywords | Field | DocType |
---|---|---|
Multi-class audio classification, driving environments, in-vehicle noise | Mobile sensing, Speech coding, Local road, Computer science, Speech recognition, Electronics, Multimedia, Mobile apps, Automotive industry | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-4906-2 | 0 | 0.34 |
References | Authors |
---|---|
29 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Myounggyu Won | 1 | 0 | 0.68 |
Haitham Alsaadan | 2 | 0 | 0.34 |
Yongsoon Eun | 3 | 77 | 23.26 |