Abstract |
---|
Extreme learning machine (ELM) is known for its fast learning speed while maintaining acceptable generalisation. Its learning process can be divided into two parts: (1) randomly assigning the input weights and hidden-layer biases, and (2) analytically determining the output weights using the Moore-Penrose generalised inverse. Through both theoretical and experimental analysis, we point out that it is the random weight assignment, rather than the analytical determination with the generalised inverse, that leads to its fast training speed. In fact, computing the generalised inverse of the hidden-layer output matrix via singular value decomposition (SVD) is very inefficient, especially on large-scale data, and may even fail outright. Since this high computational complexity reduces the learning speed of ELM, the conjugate gradient method is introduced as a replacement for the Moore-Penrose generalised inverse, and conjugate-gradient-based ELM (CG-ELM) is proposed. Numerical simulations show that, in most cases, CG-ELM ac... |
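The two training routes contrasted in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the network sizes, the `tanh` activation, and the toy regression targets are all assumptions, and plain conjugate gradient on the normal equations stands in for whatever CG variant the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L = 200, 5, 50                       # samples, input dim, hidden nodes (illustrative)
X = rng.standard_normal((n, d))
T = np.sin(X.sum(axis=1, keepdims=True))   # toy regression targets

# Step 1 of ELM: input weights and biases are assigned randomly and never trained.
W = rng.standard_normal((d, L))
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

# Route A: output weights via the Moore-Penrose generalised inverse (SVD-based).
beta_svd = np.linalg.pinv(H) @ T

# Route B (CG-ELM idea): solve the normal equations H^T H beta = H^T T with
# conjugate gradient, avoiding the SVD entirely.
A, c = H.T @ H, H.T @ T
beta = np.zeros_like(c)
r = c - A @ beta                           # residual
p = r.copy()                               # search direction
for _ in range(200):
    Ap = A @ p
    alpha = (r.T @ r) / (p.T @ Ap)         # step length along p
    beta = beta + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new.T @ r_new) / (r.T @ r)) * p
    r = r_new

# Both routes should produce essentially the same fitted outputs.
same_fit = np.allclose(H @ beta, H @ beta_svd, atol=1e-4)
```

For large hidden-layer matrices the CG route only needs matrix-vector products, whereas the SVD route must factor the full matrix, which is the efficiency argument the abstract makes.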
Year | Venue | Field
---|---|---
2017 | IJWMC | Conjugate gradient method, Inverse, Singular value decomposition, Computer science, Extreme learning machine, Generalization, Matrix (mathematics), Algorithm, Distributed computing
DocType | Volume | Issue
---|---|---
Journal | 13 | 4
Citations | PageRank | References
---|---|---
1 | 0.34 | 0
Authors |
---|
5 |
Name | Order | Citations | PageRank
---|---|---|---
Shi-Xin Zhao | 1 | 1 | 0.34 |
Xizhao Wang | 2 | 3593 | 166.16 |
Li-Ying Wang | 3 | 1 | 0.34 |
Jun-Mei Hu | 4 | 1 | 0.34 |
Wei-Ping Li | 5 | 1 | 0.34 |