Combining STDP and binary networks for reinforcement learning from images and sparse rewards

Sérgio F. Chevtchenko, Teresa B. Ludermir

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Spiking neural networks (SNNs) aim to replicate the energy efficiency, learning speed and temporal processing of biological brains. However, the accuracy and learning speed of such networks are still behind those of reinforcement learning (RL) models based on traditional neural models. This work combines a pre-trained binary convolutional neural network with an SNN trained online through reward-modulated STDP in order to leverage the advantages of both models. The spiking network is an extension of its previous version, with improvements in architecture and dynamics to address a more challenging task. We focus on an extensive experimental evaluation of the proposed model against optimized state-of-the-art baselines, namely proximal policy optimization (PPO) and deep Q-network (DQN). The models are compared on a grid-world environment with high-dimensional observations, consisting of RGB images with up to 256 × 256 pixels. The experimental results show that the proposed architecture can be a competitive alternative to deep reinforcement learning (DRL) in the evaluated environment and provides a foundation for more complex future applications of spiking networks.
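For readers unfamiliar with the hybrid pipeline the abstract describes, the sketch below illustrates the general idea of a frozen binary feature extractor feeding a spiking readout whose weights are adapted online with a reward-modulated STDP rule. This is not the authors' implementation; the feature extractor is replaced by a random binary projection, and all layer sizes, traces and constants are illustrative assumptions.

"""
Minimal sketch (assumptions only): binary features -> spiking readout
trained with a reward-modulated STDP rule. Not the paper's code.
"""
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 128   # assumed size of the binary feature vector
N_ACTIONS = 4      # e.g. the four moves in a grid world
SIM_STEPS = 20     # spiking simulation steps per decision
V_THRESH = 1.0     # firing threshold of the output neurons
TAU_E = 0.9        # eligibility-trace decay per decision (assumed)
LR = 0.05          # learning rate of the reward-modulated update

# Fixed random projection standing in for a pre-trained binary CNN.
projection = rng.standard_normal((64 * 64 * 3, N_FEATURES))
weights = rng.normal(0.0, 0.1, size=(N_FEATURES, N_ACTIONS))
eligibility = np.zeros_like(weights)


def binary_features(rgb_image):
    """Binarized features from a frozen projection (CNN surrogate)."""
    flat = rgb_image.astype(np.float32).ravel() / 255.0
    return (flat @ projection > 0.0).astype(np.float32)


def select_action(features):
    """Leaky integrate-and-fire output neurons driven by the binary
    features; the neuron that spikes most often selects the action."""
    membrane = np.zeros(N_ACTIONS)
    spike_counts = np.zeros(N_ACTIONS)
    for _ in range(SIM_STEPS):
        # Active binary features emit spikes stochastically.
        pre_spikes = (rng.random(N_FEATURES) < 0.5 * features).astype(np.float32)
        membrane = 0.8 * membrane + pre_spikes @ weights
        fired = membrane >= V_THRESH
        spike_counts += fired
        membrane[fired] = 0.0          # reset neurons that fired
    noise = 1e-3 * rng.random(N_ACTIONS)  # break ties randomly
    return int(np.argmax(spike_counts + noise)), spike_counts


def rstdp_update(features, action, reward):
    """Reward-modulated STDP-style update: pre/post coincidence for the
    chosen action is stored in an eligibility trace and gated by reward."""
    global eligibility, weights
    eligibility *= TAU_E
    eligibility[:, action] += features        # crude pre-post coincidence
    weights += LR * reward * eligibility      # reward gates the change
    np.clip(weights, -1.0, 1.0, out=weights)


if __name__ == "__main__":
    # One toy decision on a random "image"; a real loop would iterate
    # over environment episodes with sparse rewards.
    frame = rng.integers(0, 256, size=(64, 64, 3))
    feats = binary_features(frame)
    act, counts = select_action(feats)
    rstdp_update(feats, act, reward=+1.0)     # pretend the move paid off
    print("chosen action:", act, "spike counts:", counts)

In this sketch only the reward term decides the sign of the weight change, which is the key difference from unsupervised STDP; the eligibility trace lets a sparse, delayed reward still credit earlier feature-action coincidences.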

Original language: English
Pages (from-to): 496-506
Number of pages: 11
Journal: Neural Networks
Volume: 144
DOIs
Publication status: Published - Dec 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Elsevier Ltd

Keywords

  • Binary neural networks
  • Reinforcement learning
  • Spiking neural networks
  • STDP
