Abstract |
---|
In this brief, the use of lattice point sets (LPSs) is investigated for general learning problems (including function estimation and dynamic optimization) in which the classic empirical risk minimization (ERM) principle is applied and there is freedom to choose the sampling points in the input space. It is proved that convergence of the ERM principle is guaranteed when LPSs are employed as training sets for the learning procedure, with a rate that can be superlinear under suitable regularity hypotheses on the functions involved. Preliminary simulation results are also provided. |
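As context for the abstract: a lattice point set is a deterministic, evenly spread sample of the input space, so the training inputs for ERM are generated by a fixed rule rather than drawn at random. The paper does not specify a construction here, but a minimal sketch of a rank-1 (Korobov-type) lattice is shown below; the modulus `n = 1021` and generator `a = 76` are illustrative choices, not parameters from the paper.

```python
import numpy as np

def korobov_lattice(n, d, a):
    """Rank-1 Korobov lattice with n points in [0, 1)^d.

    Generating vector g = (1, a, a^2, ..., a^(d-1)) mod n;
    point i is (i * g mod n) / n.
    """
    g = np.array([pow(a, j, n) for j in range(d)])  # modular powers of a
    i = np.arange(n).reshape(-1, 1)                 # point indices 0..n-1
    return (i * g % n) / n

# Use the lattice points as a deterministic training set for ERM
# on a hypothetical target function (for illustration only).
X = korobov_lattice(n=1021, d=2, a=76)
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])
```

Because the points are deterministic, the empirical risk computed over `X` approximates the true risk with an error controlled by the discrepancy of the lattice rather than by a probabilistic bound, which is the mechanism behind the convergence guarantees discussed in the abstract.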
Year | DOI | Venue |
---|---|---|
2010 | 10.1109/TNN.2010.2041360 | IEEE Transactions on Neural Networks |
Keywords | Field | DocType
---|---|---|
empirical risk minimization (ERM),general learning problem,superlinear convergence rate,learning (artificial intelligence),lattice point sets (LPSs),deterministic learning,dynamic optimization,function estimation,convergence,input space,approximate optimization,regularity hypotheses,lattices,cost function,risk management,operations research,computer simulation,sampling methods,optimization problem,statistics,neural networks | Convergence (routing),Dynamic programming,Mathematical optimization,Computer science,Empirical risk minimization,Supervised learning,Artificial intelligence,Rate of convergence,Artificial neural network,Optimization problem,Deterministic system (philosophy),Machine learning | Journal
Volume | Issue | ISSN |
---|---|---|
21 | 4 | 1941-0093 |
Citations | PageRank | References |
---|---|---|
3 | 0.45 | 6 |
Authors |
---|
1 |
Name | Order | Citations | PageRank |
---|---|---|---|
Cristiano Cervellera | 1 | 226 | 23.63 |