Explicit computation of input weights in extreme learning machines

Jonathan Tapson, Philip De Chazal, André van Schaik

    Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

    Abstract

    We present a closed-form expression for initializing the input weights in a multilayer perceptron, which can be used as the first step in synthesis of an Extreme Learning Machine. The expression is based on the standard function for a separating hyperplane as computed in multilayer perceptrons and linear Support Vector Machines; that is, as a linear combination of input data samples. In the absence of supervised training for the input weights, random linear combinations of training data samples are used to project the input data to a higher-dimensional hidden layer. The hidden layer weights are solved in the standard ELM fashion by computing the pseudoinverse of the hidden layer outputs and multiplying by the desired output values. All weights for this method can be computed in a single pass, and the resulting networks are more accurate and more consistent on some standard problems than regular ELM networks of the same size.
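
    The two core steps in the abstract (input weights formed as random linear combinations of training samples, then output-layer weights solved by the pseudoinverse of the hidden-layer outputs) can be sketched in a few lines. The Python below is a minimal illustration under assumptions, not the authors' implementation: the sigmoid activation, the bias term, the Gaussian combination coefficients, and the function names are all choices made here for concreteness.

        import numpy as np

        def elm_fit(X, T, n_hidden, seed=0):
            """Fit an ELM whose input weights are random linear combinations
            of the training samples, as described in the abstract."""
            rng = np.random.default_rng(seed)
            n_samples, _ = X.shape
            # Each hidden unit's input weight vector is a random linear
            # combination of training samples (rows of X), mirroring the
            # separating-hyperplane form used by MLPs and linear SVMs.
            coeffs = rng.standard_normal((n_samples, n_hidden))
            W = X.T @ coeffs                        # (n_features, n_hidden)
            b = rng.standard_normal(n_hidden)       # bias term: an assumption
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation: an assumption
            # Standard ELM solve: pseudoinverse of the hidden-layer outputs
            # multiplied by the desired output values T.
            beta = np.linalg.pinv(H) @ T
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

    For a classification task, T would typically be a one-hot matrix of target labels; the single pseudoinverse solve is what allows all weights to be computed in one pass.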
    Original language: English
    Title of host publication: Proceedings of ELM-2014. Volume 1, Algorithms and Theories
    Publisher: Springer
    Pages: 41-49
    Number of pages: 9
    ISBN (Print): 9783319140636
    DOIs
    Publication status: Published - 2015
    Event: International Conference on Extreme Learning Machines
    Duration: 8 Dec 2014 → …

    Conference

    Conference: International Conference on Extreme Learning Machines
    Period: 8/12/14 → …

    Keywords

    • machine learning
    • computational intelligence
    • artificial intelligence
