Limits to the fault-tolerance of a feedforward neural network with learning

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

37 Citations (Scopus)

Abstract

Input data and hardware fault tolerance of neural networks are discussed. It is shown that fault-tolerant behavior is not self-evident but must be activated by an appropriate learning scheme. Practical limitations are demonstrated by an example of neural character recognition. The results show that the effects of learning and synapse weight decay on fault tolerance largely influence the practicality of large-scale silicon implementations. It is anticipated that, owing to implementation issues, such as the use of volatile memories, some neural VLSI architectures will not be sufficiently fault tolerant.
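The abstract's central claim can be illustrated with a small fault-injection experiment. The sketch below is not the paper's experiment; the network, weights, and fault model (stuck-at-zero synapses, a common hardware fault assumption) are all illustrative. It measures how the outputs of a fixed feedforward network drift as a growing fraction of hidden-layer synapses fails:

```python
import numpy as np

# Illustrative sketch: inject stuck-at-zero faults into the first-layer
# weights of a fixed feedforward network and measure output drift.
# Network sizes, weights, and inputs are arbitrary, not from the paper.

rng = np.random.default_rng(0)

def forward(x, W1, W2):
    """Two-layer feedforward network with a sigmoid hidden layer."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))  # hidden activations
    return h @ W2                         # linear output layer

W1 = rng.normal(size=(8, 16))   # input-to-hidden synapses
W2 = rng.normal(size=(16, 4))   # hidden-to-output synapses
X = rng.normal(size=(100, 8))   # probe inputs

baseline = forward(X, W1, W2)

def output_drift(fault_fraction):
    """Mean output deviation after zeroing a random fraction of synapses."""
    mask = rng.random(W1.shape) >= fault_fraction  # surviving synapses
    faulty = forward(X, W1 * mask, W2)
    return np.mean(np.abs(faulty - baseline))

# Drift grows with the fault rate. A learning scheme that actively
# promotes fault tolerance (the paper's point) would aim to keep this
# drift small; ordinary training gives no such guarantee.
drifts = [output_drift(f) for f in (0.0, 0.1, 0.3)]
```

With no faults the drift is exactly zero; with faults it is strictly positive, which is the baseline behavior against which a fault-tolerance-aware learning scheme would be compared.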

Original language: English
Title of host publication: Digest of Papers - FTCS (Fault-Tolerant Computing Symposium)
Publisher: IEEE
Pages: 228-235
Number of pages: 8
ISBN (Print): 081862051X
Publication status: Published - 1990
Externally published: Yes
Event: 20th International Symposium on Fault-Tolerant Computing - FTCS 20 - Chapel Hill, NC, USA
Duration: 26 Jun 1990 – 28 Jun 1990

Publication series

Name: Digest of Papers - FTCS (Fault-Tolerant Computing Symposium)
ISSN (Print): 0731-3071

Conference

Conference: 20th International Symposium on Fault-Tolerant Computing - FTCS 20
City: Chapel Hill, NC, USA
Period: 26/06/90 – 28/06/90
