Abstract
Spiking neural networks (SNNs) aim to replicate the energy efficiency, learning speed, and temporal processing of biological brains. However, the accuracy and learning speed of such networks are still behind those of reinforcement learning (RL) models based on traditional neural models. This work combines a pre-trained binary convolutional neural network with an SNN trained online through reward-modulated STDP, leveraging the advantages of both models. The spiking network is an extension of its previous version, with improvements to its architecture and dynamics to address a more challenging task. We focus on an extensive experimental evaluation of the proposed model against optimized state-of-the-art baselines, namely proximal policy optimization (PPO) and deep Q-network (DQN). The models are compared on a grid-world environment with high-dimensional observations, consisting of RGB images of up to 256 × 256 pixels. The experimental results show that the proposed architecture can be a competitive alternative to deep reinforcement learning (DRL) in the evaluated environment, and they provide a foundation for more complex future applications of spiking networks.
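The learning rule named in the abstract, reward-modulated STDP, gates Hebbian spike-timing updates with a scalar reward signal via an eligibility trace. The sketch below is a minimal illustrative toy, not the paper's implementation: the layer sizes, trace dynamics, and all hyperparameter values are assumptions chosen only to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the paper's actual architecture differs.
n_pre, n_post = 4, 2
w = rng.uniform(0.0, 0.5, size=(n_pre, n_post))  # synaptic weights
elig = np.zeros_like(w)                          # eligibility traces

# Illustrative hyperparameters (not taken from the paper).
a_plus, a_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau_e = 0.9                     # eligibility-trace decay per step
lr = 0.1                        # learning rate for the reward update

def step(pre_trace, post_trace, pre_spikes, post_spikes, reward):
    """One timestep of reward-modulated STDP.

    Pairwise STDP contributions are accumulated into an eligibility
    trace; the weight change is applied only when a (possibly delayed)
    reward signal arrives, scaled by that reward.
    """
    global w, elig
    # Pre-before-post pairings potentiate, post-before-pre depress.
    dstdp = (a_plus * np.outer(pre_trace, post_spikes)
             - a_minus * np.outer(pre_spikes, post_trace))
    elig = tau_e * elig + dstdp
    # Reward gates consolidation of the eligibility trace into weights.
    w += lr * reward * elig
    np.clip(w, 0.0, 1.0, out=w)

# Example: one update with random spike activity and a positive reward.
pre_spikes = rng.integers(0, 2, n_pre).astype(float)
post_spikes = rng.integers(0, 2, n_post).astype(float)
step(pre_trace=pre_spikes, post_trace=post_spikes,
     pre_spikes=pre_spikes, post_spikes=post_spikes, reward=1.0)
```

With zero reward the eligibility trace still accumulates but no weight changes, which is what lets the rule bridge the delay between action and feedback in an RL setting.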
| Original language | English |
|---|---|
| Pages (from-to) | 496-506 |
| Number of pages | 11 |
| Journal | Neural Networks |
| Volume | 144 |
| DOIs | |
| Publication status | Published - Dec 2021 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2021 Elsevier Ltd
Keywords
- Binary neural networks
- Reinforcement learning
- Spiking neural networks
- STDP