Title
A Comparative Study on Outlier Removal from a Large-scale Dataset using Unsupervised Anomaly Detection.
Abstract
Outlier removal from training data is a classical problem in pattern recognition. Nowadays, this problem has become more important for large-scale datasets for two reasons: first, there is a higher risk of "unexpected" outliers, such as mislabeled training samples; second, a large-scale dataset makes it harder to grasp the distribution of outliers. At the same time, many unsupervised anomaly detection methods have been proposed, which can also be used for outlier removal. In this paper, we present a comparative study of nine different anomaly detection methods in the scenario of outlier removal from a large-scale dataset. For accurate performance observation, a simple and interpretable recognition procedure is needed, so we use a nearest neighbor-based classifier. As an adequate large-scale dataset, we prepared a handwritten digit dataset comprising more than 800,000 manually labeled samples. With a data dimensionality of 16×16 = 256, each digit class has at least 100 times more instances than the data dimensionality. The experimental results show that the common understanding that outlier removal improves classification performance on small datasets does not hold for high-dimensional large-scale datasets. Additionally, local anomaly detection algorithms perform better on this data than their global equivalents.
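The pipeline described in the abstract — score training samples with an unsupervised anomaly detector, drop the most anomalous ones, then train a nearest-neighbor classifier on the cleaned set — can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the toy data, Local Outlier Factor (one example of a "local" detector), the neighborhood size, and the 5% removal fraction are all assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, LocalOutlierFactor

rng = np.random.default_rng(0)
# Toy 2-class data; the paper uses 256-dimensional handwritten digits instead.
X = np.vstack([rng.normal(0, 1, (200, 16)), rng.normal(4, 1, (200, 16))])
y = np.array([0] * 200 + [1] * 200)
# Inject "unexpected" outliers: a few class-0 samples far from both classes.
X[:10] += 20.0

# Score every training sample with an unsupervised local anomaly detector.
lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
scores = -lof.negative_outlier_factor_  # higher score = more anomalous

# Remove the top 5% most anomalous samples (fraction is an assumption),
# then train a 1-NN classifier on the cleaned training set.
keep = scores < np.quantile(scores, 0.95)
clf = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
print(keep.sum(), "of", len(X), "samples kept")
```

Whether this removal step actually helps is exactly the paper's question; its finding is that on high-dimensional large-scale data it does not improve classification performance.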
Year: 2016
DOI: 10.5220/0005701302630269
Venue: ICPRAM
Field: Anomaly detection, Data mining, Data cleansing, One-class classification, Computer science, Artificial intelligence, Classifier (linguistics), k-nearest neighbors algorithm, GRASP, Pattern recognition, Outlier, Curse of dimensionality, Machine learning
DocType: Conference
Citations: 1
PageRank: 0.39
References: 1
Authors: 2
Name | Order | Citations | PageRank
Markus Goldstein | 1 | 48 | 3.69
Seiichi Uchida | 2 | 790 | 105.59