Taming the reservoir: feedforward training for recurrent neural networks

Oliver Obst, Martin Riedmiller

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

4 Citations (Scopus)

Abstract

Recurrent neural networks are successfully used for tasks like time series processing and system identification. Many of the approaches to train these networks, however, are regarded as too slow, too complicated, or both. Reservoir computing methods like echo state networks or liquid state machines are an alternative to the more traditional approaches. Echo state networks have the appeal that they are simple to train and have been shown to produce excellent results on a number of benchmarks and other tasks. One disadvantage of echo state networks, however, is the high variability in their performance due to the randomly connected hidden layer. Ideally, connections in the hidden layer would be created in an efficient and more deterministic way, achieving better performance than randomly connected hidden layers without excessive iteration over the same training data. We present an approach - tamed reservoirs - that makes use of efficient feedforward training methods and performs better than echo state networks on some time series prediction tasks. Moreover, our approach reduces some of the variability, since all recurrent connections in the network are trained.
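For context, the baseline the abstract contrasts against can be made concrete: in a standard echo state network, the recurrent hidden layer (the reservoir) is wired at random and left untrained, and only a linear readout is fitted, which is the source of the run-to-run variability mentioned above. The Python sketch below illustrates that generic baseline, not the tamed-reservoir method of the paper; the function names, the toy sine-wave task, and parameter values such as the spectral radius are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_inputs, n_reservoir, spectral_radius=0.9, input_scale=0.5):
    # Random recurrent weights, rescaled so the largest eigenvalue magnitude
    # matches the desired spectral radius (a common echo state heuristic).
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scale, input_scale, (n_reservoir, n_inputs))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    # Collect reservoir states for an input sequence (tanh units, no leak).
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    # Only the linear readout is trained (ridge regression); the random
    # reservoir itself is never adapted in a plain echo state network.
    S, y = states, np.asarray(targets)
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ y)

# Hypothetical usage on a toy one-step-ahead prediction task.
series = np.sin(np.arange(300) * 0.2)
W_in, W = make_reservoir(n_inputs=1, n_reservoir=100)
states = run_reservoir(W_in, W, series)
w_out = train_readout(states[50:-1], series[51:])  # discard a washout period
next_value = states[-1] @ w_out                    # one-step-ahead prediction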
Original language: English
Title of host publication: Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN): 10-15 June 2012, Brisbane, Qld.
Publisher: IEEE
Number of pages: 7
ISBN (Print): 9781467314909
DOIs
Publication status: Published - 2012
Event: International Joint Conference on Neural Networks
Duration: 10 Jun 2012 → …

Conference

Conference: International Joint Conference on Neural Networks
Period: 10/06/12 → …

Keywords

  • neural networks (computer science)
  • reservoir computing
