Abstract
We present a closed-form expression for initializing the input weights in a multi-layer perceptron, which can be used as the first step in synthesis of an Extreme Learning Machine. The expression is based on the standard function for a separating hyperplane as computed in multilayer perceptrons and linear Support Vector Machines; that is, as a linear combination of input data samples. In the absence of supervised training for the input weights, random linear combinations of training data samples are used to project the input data to a higher-dimensional hidden layer. The hidden layer weights are solved in the standard ELM fashion by computing the pseudoinverse of the hidden layer outputs and multiplying by the desired output values. All weights for this method can be computed in a single pass, and the resulting networks are more accurate and more consistent on some standard problems than regular ELM networks of the same size.
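The procedure the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the choice of `tanh` as the activation, the Gaussian mixing coefficients, and the hidden-bias term are all assumptions made for the sketch. The two steps it shows are (1) forming each input weight vector as a random linear combination of training samples, and (2) solving the output weights in one pass via the pseudoinverse of the hidden-layer outputs.

```python
import numpy as np

def elm_fit(X, T, n_hidden, seed=None):
    """Sketch of an ELM whose input weights are random linear
    combinations of training samples (details assumed)."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    # Each hidden unit's weight vector has the separating-hyperplane
    # form w = sum_i alpha_i x_i, with random mixing coefficients alpha
    # standing in for the supervised coefficients of an MLP or linear SVM.
    alpha = rng.standard_normal((n_hidden, n_samples))
    W_in = alpha @ X                       # shape (n_hidden, n_features)
    b = rng.standard_normal(n_hidden)      # hidden biases (assumed)
    H = np.tanh(X @ W_in.T + b)            # hidden-layer outputs
    # Standard ELM output solve: pseudoinverse of H times the targets.
    W_out = np.linalg.pinv(H) @ T
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in.T + b) @ W_out
```

All weights are computed in a single pass, as in the paper: there is no iterative training loop, only the random projection and one least-squares solve.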
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of ELM-2014. Volume 1, Algorithms and Theories |
| Publisher | Springer |
| Pages | 41-49 |
| Number of pages | 9 |
| ISBN (Print) | 9783319140636 |
| DOIs | |
| Publication status | Published - 2015 |
| Event | International Conference on Extreme Learning Machines - Duration: 8 Dec 2014 → … |
Conference
| Conference | International Conference on Extreme Learning Machines |
| --- | --- |
| Period | 8/12/14 → … |
Keywords
- machine learning
- computational intelligence
- artificial intelligence