TY - JOUR
T1 - Neuromorphic engineering needs closed-loop benchmarks
AU - Milde, Moritz B.
AU - Afshar, Saeed
AU - Xu, Ying
AU - Marcireau, Alexandre
AU - Joubert, Damien
AU - Ramesh, Bharath
AU - Bethi, Yeshwanth
AU - Ralph, Nicholas O.
AU - El Arja, Sami
AU - Dennler, Nik
AU - van Schaik, André
AU - Cohen, Gregory
N1 - Publisher Copyright:
Copyright © 2022 Milde, Afshar, Xu, Marcireau, Joubert, Ramesh, Bethi, Ralph, El Arja, Dennler, van Schaik and Cohen.
PY - 2022/2/14
Y1 - 2022/2/14
N2 - Neuromorphic engineering aims to build (autonomous) systems by mimicking biological systems. It is motivated by the observation that biological organisms, from algae to primates, excel in sensing their environment, reacting promptly to their perils and opportunities. Furthermore, they do so more resiliently than our most advanced machines, at a fraction of the power consumption. It follows that the performance of neuromorphic systems should be evaluated in terms of real-time operation, power consumption, and resiliency to real-world perturbations and noise using task-relevant evaluation metrics. Yet, following in the footsteps of conventional machine learning, most neuromorphic benchmarks rely on recorded datasets that foster sensing accuracy as the primary measure for performance. Sensing accuracy is but an arbitrary proxy for the actual system's goal: making a good decision in a timely manner. Moreover, static datasets hinder our ability to study and compare closed-loop sensing and control strategies that are central to survival for biological organisms. This article makes the case for a renewed focus on closed-loop benchmarks involving real-world tasks. Such benchmarks will be crucial in developing and progressing neuromorphic intelligence. The shift towards dynamic real-world benchmarking tasks should usher in richer, more resilient, and robust artificially intelligent systems in the future.
UR - https://hdl.handle.net/1959.7/uws:69020
U2 - 10.3389/fnins.2022.813555
DO - 10.3389/fnins.2022.813555
M3 - Article
SN - 1662-4548
VL - 16
JO - Frontiers in Neuroscience
JF - Frontiers in Neuroscience
M1 - 813555
ER -