Title
The MC-ELM: Learning an ELM-like Network with Minimum VC Dimension
Abstract
Though the Extreme Learning Machine (ELM) has become quite popular in recent years, it offers no performance guarantees, and the resultant networks tend to be densely connected. The complexity of a learning machine may be measured by its Vapnik-Chervonenkis (VC) dimension; a small VC dimension leads to good generalization and lower test set errors. The recently proposed Minimal Complexity Machine (MCM) shows that it is possible to learn a classifier with minimal VC dimension, leading to sparse representations and good generalization. In this paper, we draw on results from the MCM to propose a hybrid variant of the ELM, termed the Minimal Complexity Extreme Learning Machine (MC-ELM), in order to realize a robust classifier that minimizes an exact bound on the VC dimension. The MC-ELM solves a linear programming problem for the last layer and offers the advantages of a large margin and a low VC dimension. In effect, the learning paradigm elucidated in this paper yields a classifier that combines a minimal representation of the training data, owing to the MCM, with the high training speed of the ELM. This makes it feasible for use in complex machine learning applications where these advantages are of significance.
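The procedure described in the abstract (a random, untrained ELM hidden layer whose output weights are found by a linear program derived from the MCM's VC-dimension bound) can be sketched as follows. This is a hypothetical reconstruction from the abstract, not the authors' code; the function names are invented, and the LP uses the MCM formulation (minimize h subject to h >= y_i(u.h_i + v) >= 1) applied to the hidden-layer features.

```python
import numpy as np
from scipy.optimize import linprog

def mc_elm_train(X, y, n_hidden=30, seed=0):
    """Hypothetical MC-ELM sketch: random ELM hidden layer + MCM-style LP output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # ELM hidden-layer activations
    n, m = H.shape
    # LP variables: [u (m output weights), v (bias), h (surrogate for the VC bound)]
    c = np.zeros(m + 2)
    c[-1] = 1.0                                   # objective: minimize h
    Y = y[:, None] * H
    # constraint y_i(u.h_i + v) >= 1   rewritten as  -Y u - y v + 0*h <= -1
    A1 = np.hstack([-Y, -y[:, None], np.zeros((n, 1))])
    # constraint h >= y_i(u.h_i + v)   rewritten as   Y u + y v - h   <=  0
    A2 = np.hstack([Y, y[:, None], -np.ones((n, 1))])
    res = linprog(c,
                  A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-np.ones(n), np.zeros(n)]),
                  bounds=[(None, None)] * (m + 1) + [(1, None)])
    u, v = res.x[:m], res.x[m]
    return W, b, u, v

def mc_elm_predict(X, W, b, u, v):
    """Classify with the trained sketch model: sign of the last-layer response."""
    return np.sign(np.tanh(X @ W + b) @ u + v)
```

Since every constraint of the LP enforces y_i(u.h_i + v) >= 1, any feasible solution classifies the training set correctly; the objective h upper-bounds the margin-based VC-dimension bound that the MCM minimizes.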
Year
2015
Venue
2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
Keywords
Minimal Complexity Machine, Extreme Learning Machine, VC Dimension, Performance Bounds, Support Vectors, Linear Programming, Hybrid Models
Field
VC dimension, Stability (learning theory), Pattern recognition, Active learning (machine learning), Computer science, Extreme learning machine, Artificial intelligence, Structural risk minimization, Computational learning theory, Margin classifier, Machine learning, Test set
DocType
Conference
ISSN
2161-4393
Citations
2
PageRank
0.37
References
32
Authors
3
Name          Order   Citations+PageRank
Jayadeva      1       6710.50
Sumit Soman   2       207.53
Amit Bhaya    3       21533.47