Abstract |
---|
This contribution extends linear classifiers to sublinear classifiers for graphs and analyzes their properties. The results are (i) a geometric interpretation of sublinear classifiers, (ii) a generic learning rule based on the principle of empirical risk minimization, (iii) a convergence theorem for the margin perceptron in the separable case, and (iv) the VC-dimension of sublinear functions. Empirical results on graph data show that the perceptron and margin perceptron algorithms on graphs have properties similar to those of their vectorial counterparts. |
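The margin perceptron named in the abstract has a well-known vectorial form, which the paper generalizes to graphs. A minimal sketch of that vectorial counterpart follows; the function name, learning rate, and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def margin_perceptron(X, y, margin=1.0, lr=1.0, epochs=100):
    """Margin perceptron: update whenever the functional margin
    y * <w, x> falls below the target margin (not just on mistakes)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updates = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= margin:  # point inside the margin
                w += lr * yi * xi
                updates += 1
        if updates == 0:  # all points cleared the margin: converged
            break
    return w

# Toy linearly separable data (illustrative)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = margin_perceptron(X, y)
preds = np.sign(X @ w)
```

In the graph setting of the paper, the inner product is replaced by a similarity defined on graphs, which is what makes the resulting classifier sublinear rather than linear.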
Year | DOI | Venue
---|---|---
2014 | 10.1109/ICPR.2014.661 | Pattern Recognition

Keywords | Field | DocType
---|---|---
graph theory, VC-dimension, empirical risk minimization, generic learning, geometric interpretation, margin perceptron, sublinear classifiers, vectorial counterparts | Convergence (routing), Graph theory, Graph, Pattern recognition, Computer science, Empirical risk minimization, Separable space, Learning rule, Artificial intelligence, Linear function, Perceptron | Conference

ISSN | Citations | PageRank
---|---|---
1051-4651 | 0 | 0.34

References | Authors
---|---
0 | 1
Name | Order | Citations | PageRank
---|---|---|---
Brijnesh J. Jain | 1 | 0 | 0.34