Neuromorphic audio-visual sensor fusion on a sound-localizing robot

Vincent Yue-Sek Chan, Craig T. Jin, Andre van Schaik

    Research output: Contribution to journal › Article › peer-review


    Abstract

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, conducting an experiment to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and the large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
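
    The abstract names two techniques that are easy to picture in conventional (non-spiking) form: ITD-based azimuth estimation and AV binding by onset time. The sketch below illustrates only those two underlying ideas; it is not the paper's adaptive, spike-based algorithm, and all names and parameter values (MIC_SPACING, SPEED_OF_SOUND, the 0.1 s matching window) are assumptions for illustration, not values from the paper.

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s, assumed room temperature
        MIC_SPACING = 0.15       # m between the two microphones (assumed)

        def estimate_itd(left, right, fs):
            """Estimate the interaural time difference (s) between two
            microphone signals from the peak of their cross-correlation."""
            corr = np.correlate(left, right, mode="full")
            lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
            return lag / fs

        def itd_to_azimuth(itd):
            """Map an ITD to an azimuth angle (degrees) using the standard
            free-field model: itd = (d / c) * sin(theta)."""
            s = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
            return np.degrees(np.arcsin(s))

        def match_onsets(audio_onset, visual_onsets, window=0.1):
            """Bind an audio event to the visual event whose onset time is
            closest, provided it falls within `window` seconds; return None
            if no visual onset is close enough."""
            if len(visual_onsets) == 0:
                return None
            diffs = np.abs(np.asarray(visual_onsets) - audio_onset)
            best = int(np.argmin(diffs))
            return best if diffs[best] <= window else None

    A classical pipeline would call estimate_itd on short signal frames, map the result through itd_to_azimuth, and bind detected audio onsets to visual onsets with match_onsets; in the paper's setup, the analogous onset-based binding achieved a correct match 75% of the time despite many false visual events.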
    Original language: English
    Article number: Art. 21
    Number of pages: 9
    Journal: Frontiers in Neuroscience
    Volume: 6
    Issue number: Feb.
    DOIs:
    Publication status: Published - 2012

    Keywords

    • acoustic localization
    • multisensor data fusion
    • neuromorphic engineering
    • online learning
