Simulating spiking neural networks with 8-bit floating-point numbers

Pablo Urbizagastegui, Andre Van Schaik, Runchun Wang

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

1 Citation (Scopus)

Abstract

Performing brain simulations that match the size and dynamic nature of real brains is arduous but essential for understanding the neural mechanisms underlying animal behaviour. To address this challenge, this paper proposes an 8-bit floating-point format (minifloat) for the efficient simulation of biological spiking neural networks in digital hardware. We present models that employ minifloat variables, together with multiplication and addition arithmetic. Other low-precision data types are also considered, to elucidate the feasibility and advantages of minifloat. Despite the inherent floating-point errors, minifloat models effectively simulate balanced networks that reproduce activity patterns observed in cortical networks. Our results suggest that low-precision floating-point data types are a viable alternative for spiking neural network simulations and could also improve scalability and data throughput.
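
The abstract does not specify the bit layout, rounding mode, or neuron model used; purely as an illustrative sketch, the Python snippet below assumes a 1-4-3 (sign/exponent/mantissa) minifloat with round-to-nearest encoding, and runs a toy leaky-integrator membrane update entirely in that format. Every name and parameter here (decode, encode, mf_add, mf_mul, the decay of 0.875) is our own assumption, not taken from the paper.

```python
# Hypothetical 1-4-3 minifloat (sign / exponent / mantissa bits);
# the paper's actual bit allocation and rounding rules may differ.
EXP_BITS, MANT_BITS = 4, 3
BIAS = (1 << (EXP_BITS - 1)) - 1  # exponent bias = 7

def decode(b):
    """Map an 8-bit pattern to a Python float (special values omitted)."""
    sign = -1.0 if b >> (EXP_BITS + MANT_BITS) else 1.0
    exp = (b >> MANT_BITS) & ((1 << EXP_BITS) - 1)
    mant = b & ((1 << MANT_BITS) - 1)
    if exp == 0:  # subnormal: no implicit leading 1
        return sign * (mant / (1 << MANT_BITS)) * 2.0 ** (1 - BIAS)
    return sign * (1.0 + mant / (1 << MANT_BITS)) * 2.0 ** (exp - BIAS)

def encode(x):
    """Round to the nearest representable minifloat by exhaustive search
    over all 256 bit patterns (ties resolved arbitrarily)."""
    return min(range(256), key=lambda b: abs(decode(b) - x))

def mf_add(a, b):
    """Minifloat addition: compute exactly, then re-round to 8 bits."""
    return encode(decode(a) + decode(b))

def mf_mul(a, b):
    """Minifloat multiplication, re-rounded to 8 bits."""
    return encode(decode(a) * decode(b))

# Toy leaky-integrator membrane update carried out entirely in minifloat;
# decay and input current are illustrative values, not the paper's.
v, decay, i_in = encode(0.0), encode(0.875), encode(0.25)
for step in range(5):
    v = mf_add(mf_mul(v, decay), i_in)  # v <- decay * v + i_in
    print(step, decode(v))
```

Because every intermediate result is re-rounded to 8 bits, the trajectory drifts slightly from its double-precision counterpart; this is the kind of inherent floating-point error the abstract says the minifloat models tolerate.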

Original language: English
Title of host publication: IEEE ISCAS 2025 Symposium Proceedings: IEEE International Symposium on Circuits and Systems, London, UK, May 25-28, 2025
Place of Publication: U.S.
Publisher: IEEE
Number of pages: 5
ISBN (Electronic): 9798350356830
Publication status: Published - 2025
Event: 2025 IEEE International Symposium on Circuits and Systems, ISCAS 2025 - London, United Kingdom
Duration: 25 May 2025 - 28 May 2025

Conference

Conference: 2025 IEEE International Symposium on Circuits and Systems, ISCAS 2025
Country/Territory: United Kingdom
City: London
Period: 25/05/25 - 28/05/25

Keywords

  • floating-point
  • imprecise computing
  • low precision
  • neuromorphic engineering
  • spiking neural networks
