Live demonstration: real-time audio and visual inference on the RAMAN TinyML accelerator

Adithya Krishna, Ashwin Rajesh, Hitesh Pavan Oleti, Anand Chauhan, Shankaranarayanan H., André Van Schaik, Mahesh Mehendale, Chetan Singh Thakur

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

Abstract

The setup includes a host PC, camera and microphone sensors, and a PYNQ-Z2 FPGA board. The neuromorphic cochlear model and the RAMAN accelerator for neural network inference are deployed on the FPGA fabric. The ARM processor on the FPGA sends the captured image, the cochleagram, and the classification outputs to the PC for visualization.
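As a rough illustration of the data path described in the abstract, the sketch below shows how a host-PC client might receive the image, cochleagram, and classification result streamed from the board and hand them to a visualization callback. The framing protocol (a type byte followed by a length-prefixed payload), the address and port, and the image/cochleagram dimensions are illustrative assumptions, not the interface used in the actual demonstration.

# Minimal host-PC sketch: receive image, cochleagram, and class-label frames
# from the board over TCP and pass them to a visualization callback.
# The framing (1-byte type + 4-byte big-endian length), the port, and the
# array shapes below are assumptions for illustration only.
import socket
import struct
import numpy as np

FRAME_IMAGE, FRAME_COCHLEAGRAM, FRAME_LABEL = 0, 1, 2

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may deliver partial reads)."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("board closed the connection")
        buf.extend(chunk)
    return bytes(buf)

def recv_frame(sock):
    """Receive one (type, payload) frame: 1-byte type + 4-byte length prefix."""
    ftype, length = struct.unpack(">BI", recv_exact(sock, 5))
    return ftype, recv_exact(sock, length)

def run(host="192.168.2.99", port=5000, on_update=print):
    with socket.create_connection((host, port)) as sock:
        image = cochleagram = label = None
        while True:
            ftype, payload = recv_frame(sock)
            if ftype == FRAME_IMAGE:
                # Assumed 96x96 grayscale image, one byte per pixel.
                image = np.frombuffer(payload, dtype=np.uint8).reshape(96, 96)
            elif ftype == FRAME_COCHLEAGRAM:
                # Assumed 64-channel cochleagram, float32, channels x time.
                cochleagram = np.frombuffer(payload, dtype=np.float32).reshape(64, -1)
            elif ftype == FRAME_LABEL:
                label = payload.decode("utf-8")
            on_update(image, cochleagram, label)

if __name__ == "__main__":
    run()

In the demonstration itself, the ARM processor on the PYNQ-Z2 plays the sender role; any transport with a similar framing scheme would fit this receiver pattern, and on_update could plot the cochleagram and overlay the predicted label on the image.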
Original language: English
Title of host publication: ISCAS 2024: IEEE International Symposium on Circuits and Systems
Subtitle of host publication: May 19-22, 2024, Singapore
Place of Publication: U.S.
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 1
ISBN (Electronic): 9798350330991
DOIs
Publication status: Published - 2024
Event: 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 - Singapore, Singapore
Duration: 19 May 2024 - 22 May 2024

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
ISSN (Print): 0271-4310

Conference

Conference: 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024
Country/Territory: Singapore
City: Singapore
Period: 19/05/24 - 22/05/24
