Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks

Pablo Urbizagastegui, André van Schaik, Runchun Wang

Research output: Contribution to journal › Article › peer-review


Abstract

This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks involving synaptic plasticity. We focus on the memory accesses performed when applying a common synaptic plasticity rule, since these can be a significant factor limiting the efficiency of such simulations. We propose neuron models that are represented by only three state variables, which are engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval is executed solely by fetching postsynaptic variables, promoting contiguous memory storage and leveraging the capabilities of burst mode operations to reduce the overhead associated with each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal and bimodal). Moreover, our method requires fewer memory accesses on average than a naive approach. We argue that the described strategy can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
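As a rough illustration of the scheme the abstract describes, the following C sketch keeps three state variables per neuron in one contiguous array and handles a presynaptic spike by fetching postsynaptic variables only. The names (NeuronState, on_presynaptic_spike), the particular choice of state variables, and the fixed-point update rule are assumptions made for illustration; they are not taken from the paper.

/* Illustrative sketch only: names, types, and the update rule are
 * assumptions, not the paper's implementation. */
#include <stdint.h>
#include <stddef.h>

#define N_NEURONS 1024

/* Three state variables per neuron (an assumed choice): membrane
 * potential, a decaying plasticity trace, and the last spike time. */
typedef struct {
    int16_t  v;           /* membrane potential (fixed point)    */
    int16_t  trace;       /* plasticity trace                    */
    uint32_t last_spike;  /* timestamp of the most recent spike  */
} NeuronState;

/* One contiguous array: walking a run of postsynaptic indices maps to
 * sequential reads, which burst-mode memory transfers handle well.   */
static NeuronState neurons[N_NEURONS];

/* Handle a presynaptic spike by touching only postsynaptic state and
 * the corresponding weights; no per-synapse presynaptic variables are
 * fetched. 'targets' would come from the network's connectivity list. */
void on_presynaptic_spike(const uint16_t *targets, int16_t *weights,
                          size_t n_targets)
{
    for (size_t i = 0; i < n_targets; ++i) {
        NeuronState *post = &neurons[targets[i]];
        /* Assumed pair-based, trace-driven weight update (STDP-like). */
        weights[i] = (int16_t)(weights[i] + (post->trace >> 4));
        /* Deliver the updated synaptic input to the postsynaptic neuron. */
        post->v = (int16_t)(post->v + weights[i]);
    }
}

Because the postsynaptic records sit back to back in memory, a run of target indices translates into sequential accesses that burst transfers can service with little per-access overhead, which is the effect the abstract attributes to the proposed layout.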

Original language: English
Article number: 1450640
Number of pages: 11
Journal: Frontiers in Neuroscience
Volume: 18
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
Copyright © 2024 Urbizagastegui, van Schaik and Wang.

Keywords

  • digital simulation
  • large scale
  • memory architecture
  • neuromorphic computing
  • synaptic plasticity
