Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks with synaptic plasticity. We focus on the memory accesses performed by a common synaptic plasticity rule, since these can be a significant factor limiting simulation efficiency. We propose neuron models represented by only three state variables, engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval fetches only postsynaptic variables, promoting contiguous memory storage and leveraging burst-mode operations to reduce the overhead associated with each access. Despite these simplifications, different plasticity rules can still be implemented, each yielding a distinct synaptic weight distribution (unimodal or bimodal). Moreover, our method requires fewer memory accesses on average than a naive approach. We argue that the described strategy can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
| Original language | English |
| --- | --- |
| Article number | 1450640 |
| Number of pages | 11 |
| Journal | Frontiers in Neuroscience |
| Volume | 18 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2024 Urbizagastegui, van Schaik and Wang.
Notes
TBA

Keywords
- digital simulation
- large scale
- memory architecture
- neuromorphic computing
- synaptic plasticity