Abstract |
---|
Computational paralinguistics is an area comprising diverse classification tasks. In many cases the class distribution of these tasks is highly imbalanced by nature, as the phenomena to be detected in human speech do not occur uniformly. To account for this imbalance, it is common in this area to measure the performance of classification approaches via the Unweighted Average Recall (UAR) metric. However, general classification methods such as Support Vector Machines (SVMs) and Deep Neural Networks (DNNs) have been shown to optimize for traditional classification accuracy, which may lead to suboptimal performance on imbalanced datasets. In this study we show that this effect can be countered by performing posterior calibration, improving the UAR scores obtained. Our approach led to relative error reduction values of 4% and 14% on the test sets of two multi-class paralinguistic datasets with imbalanced class distributions, outperforming traditional downsampling. |

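The posterior calibration mentioned in the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' exact procedure): a classifier's class posteriors are rescaled by per-class weights tuned on a development set so that the resulting predictions maximize UAR, i.e. the macro-average of per-class recalls.

```python
import numpy as np

def uar(y_true, y_pred, n_classes):
    """Unweighted Average Recall: the mean of the per-class recalls."""
    recalls = []
    for c in range(n_classes):
        mask = y_true == c
        recalls.append(float(np.mean(y_pred[mask] == c)) if mask.any() else 0.0)
    return float(np.mean(recalls))

def calibrate_weights(posteriors, y_dev, grid=np.linspace(0.2, 5.0, 25)):
    """Illustrative greedy search (an assumption, not the paper's method):
    tune one scaling weight per class on a dev set to maximize UAR."""
    n_classes = posteriors.shape[1]
    w = np.ones(n_classes)
    for c in range(n_classes):
        best_v, best_score = w[c], -1.0
        for v in grid:
            w[c] = v
            pred = np.argmax(posteriors * w, axis=1)  # rescaled-posterior decision
            score = uar(y_dev, pred, n_classes)
            if score > best_score:
                best_score, best_v = score, v
        w[c] = best_v
    return w
```

At test time the tuned weights are simply applied before the argmax decision, which shifts the decision boundary in favour of minority classes without retraining the underlying model.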
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/SLT.2018.8639628 | SLT |

Keywords | Field | DocType
---|---|---|
Calibration, Training, Standards, Task analysis, Optimization, Support vector machines | Paralanguage, Pattern recognition, Task analysis, Computer science, Support vector machine, Speech recognition, Artificial intelligence, Upsampling, Recall, Approximation error, Calibration, Test set | Conference

ISSN | ISBN | Citations
---|---|---|
2639-5479 | 978-1-5386-4334-1 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 2

Name | Order | Citations | PageRank |
---|---|---|---|
Gábor Gosztolya | 1 | 75 | 21.66 |
Róbert Busa-Fekete | 2 | 23 | 4.48 |