Multimodal speech animation from electromagnetic articulography data

Guillaume Gibert, Virginie Attina, Mark Tiede, Rikke Bundgaard-Nielsen, Christian Kroos, Benjawan Kasisopa, Eric Vatikiotis-Bateson, Catherine T. Best

    Research output: Chapter in Book / Conference Paper › Conference paper › peer-review

    1 Citation (Scopus)

    Abstract

    Virtual humans have become part of our everyday life (movies, the internet, and computer games). Even though they are increasingly realistic, their speech capabilities are, most of the time, limited and not coherent and/or not synchronous with the corresponding acoustic signal. We describe a method to convert a virtual human avatar (animated through key frames and interpolation) into a more naturalistic talking head. Speech capabilities were added to the avatar using real speech production data. Electromagnetic articulography (EMA) data provided lip, jaw and tongue trajectories of a speaker involved in face-to-face communication. An articulatory model driving jaw, lip and tongue movements was built. By constraining the key frame values, a corresponding high-definition tongue articulatory model was developed. The resulting avatar was able to produce visible and partly occluded facial speech movements coherent and synchronous with the acoustic signal.
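    The abstract describes building an articulatory model from EMA marker trajectories to drive the avatar's key-frame parameters. The paper does not give the algorithm here, but a common approach to such models (assumed below, not necessarily the authors' exact method) is principal component analysis over frames of flattened sensor coordinates: the mean pose plus a few components yields a low-dimensional control space whose decoded output can drive jaw, lip and tongue positions. A minimal sketch on synthetic stand-in data:

    ```python
    import numpy as np

    # Hypothetical sketch: PCA-style articulatory model over EMA-like data.
    # All dimensions and names below are illustrative assumptions.

    rng = np.random.default_rng(0)

    # Synthetic stand-in for EMA recordings: 500 frames of 6 sensors x 3
    # coordinates (e.g. jaw, lips, tongue points), flattened to 18 dims,
    # generated from 4 underlying "gestures" plus small noise.
    n_frames, n_dims = 500, 18
    latent = rng.standard_normal((n_frames, 4))
    mixing = rng.standard_normal((4, n_dims))
    frames = latent @ mixing + 0.01 * rng.standard_normal((n_frames, n_dims))

    # Build the model: mean pose plus principal components of the deviations.
    mean_pose = frames.mean(axis=0)
    centered = frames - mean_pose
    _, s, vt = np.linalg.svd(centered, full_matrices=False)

    k = 4                                # retain k articulatory parameters
    components = vt[:k]                  # (k, n_dims) movement basis

    # Encode each frame as k control parameters, then decode back to
    # positions; the decoder is what would set the avatar's key-frame values.
    params = centered @ components.T     # (n_frames, k)
    reconstructed = params @ components + mean_pose

    rms_err = float(np.sqrt(np.mean((frames - reconstructed) ** 2)))
    print(rms_err)
    ```

    With as many retained components as underlying gestures, the reconstruction error drops to roughly the noise floor, which is why a handful of parameters can plausibly drive the full marker set.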
    Original language: English
    Title of host publication: Proceedings of the 20th European Signal Processing Conference (EUSIPCO): Palace of the Parliament, August 27-31, 2012, Bucharest, Romania
    Publisher: IEEE
    Pages: 2807-2811
    Number of pages: 5
    Publication status: Published - 2012
    Event: European Signal Processing Conference
    Duration: 27 Aug 2012 → …

    Publication series

    ISSN (Print): 2076-1465

    Conference

    Conference: European Signal Processing Conference
    Period: 27/08/12 → …

    Keywords

    • avatars (virtual reality)
    • speech synthesis
