Learning the pseudoinverse solution to network weights

J. Tapson, A. van Schaik

    Research output: Contribution to journal › Article › peer-review

    74 Citations (Scopus)

    Abstract

    The last decade has seen the parallel emergence in computational neuroscience and machine learning of neural network structures which spread the input signal randomly to a higher-dimensional space, perform a nonlinear activation, and then solve for a regression or classification output by means of a mathematical pseudoinverse operation. In the field of neuromorphic engineering, these methods are increasingly popular for synthesizing biologically plausible neural networks, but the "learning method" (computation of the pseudoinverse by singular value decomposition) is problematic both for biological plausibility and because it is neither an online nor an adaptive method. We present an online, incremental method of computing the pseudoinverse precisely, which we argue is biologically plausible as a learning method, and which can be made adaptive for non-stationary data streams. The method is significantly more memory-efficient than the conventional computation of pseudoinverses by singular value decomposition.
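
    The abstract describes both the network class (a fixed random projection, a nonlinear activation, and a pseudoinverse readout) and the idea of computing that pseudoinverse incrementally rather than by a one-shot SVD. The sketch below is illustrative only and is not the paper's own algorithm: it builds such a network on a toy regression task and updates the pseudoinverse of the hidden-activation matrix one sample at a time using a Greville-style rank-one update, which is one standard incremental pseudoinverse technique. The layer sizes, tanh activation, tolerance, and toy target function are all assumptions made for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression problem: map x in R^2 to y = sin(x1) + cos(x2).
    n_in, n_hidden, n_out = 2, 50, 1

    # Fixed random projection to a higher-dimensional space, followed by a
    # nonlinear activation (tanh here), as in the network class the abstract
    # describes. These sizes and the activation are illustrative assumptions.
    W_in = rng.standard_normal((n_hidden, n_in))
    b_in = rng.standard_normal((n_hidden, 1))

    def hidden(x):
        """Hidden-layer activation for a single input column vector x."""
        return np.tanh(W_in @ x + b_in)

    # Incrementally maintained quantities: the hidden-activation matrix A
    # (one column per training sample), its pseudoinverse, and the targets.
    A = np.zeros((n_hidden, 0))
    A_pinv = np.zeros((0, n_hidden))
    Y = np.zeros((n_out, 0))

    for _ in range(500):
        x = rng.uniform(-np.pi, np.pi, size=(n_in, 1))
        y = np.array([[np.sin(x[0, 0]) + np.cos(x[1, 0])]])
        a = hidden(x)

        # Greville rank-one update: append column a to A and update A^+.
        d = A_pinv @ a                      # coordinates of a in the existing column space
        c = a - A @ d                       # component of a outside that column space
        if np.linalg.norm(c) > 1e-6 * np.linalg.norm(a):
            b = c.T / (c.T @ c)             # a adds a new direction
        else:
            b = d.T @ A_pinv / (1.0 + d.T @ d)  # a is (numerically) dependent
        A_pinv = np.vstack([A_pinv - d @ b, b])
        A = np.hstack([A, a])
        Y = np.hstack([Y, y])

    # Pseudoinverse solution for the output weights, maintained online.
    W_out = Y @ A_pinv

    # Compare against the conventional batch (SVD-based) pseudoinverse solution.
    W_batch = Y @ np.linalg.pinv(A)
    print("max weight difference:", np.abs(W_out - W_batch).max())

    On this toy problem the incrementally maintained weights should agree with the batch SVD-based solution up to accumulated floating-point error; note that this sketch still stores the growing matrices A and Y for clarity, so it does not by itself demonstrate the memory savings the abstract claims for the paper's method.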
    Original language: English
    Pages (from-to): 94-100
    Number of pages: 7
    Journal: Neural Networks
    Volume: 45
    DOIs
    Publication status: Published - 2013
