Building speaker-specific lip models for talking heads from 3D face data

Takaaki Kuratate, Marcia Riley

    Research output: Chapter in Book / Conference Paper › Conference Paper

    Abstract

    When creating realistic talking-head animations, accurate modelling of the speech articulators is important for speech perceptibility. Previous lip-modelling methods, such as simple numerical lip models, focus on creating a general lip model without incorporating speaker-specific lip variations. Here we present a method for creating accurate speaker-specific lip representations that retain the individual characteristics of a speaker's lips via an adaptive numerical approach using 3D scanned surface and MRI data. By automatically adjusting spline parameters to minimize the error between the node points of the lip model and the raw 3D surface, new 3D lips are created efficiently and easily. The resulting lip models will be used in our talking-head animation system to evaluate auditory-visual speech perception, and to analyze our 3D face database for statistically relevant lip features.
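
    The spline-adjustment step described above — tuning spline parameters so that the model's node points minimize the error against the raw 3D surface — can be sketched roughly as follows. This is a minimal illustration using SciPy's parametric smoothing splines, not the authors' implementation; the function name `fit_lip_contour`, the node count, and the candidate smoothing values are assumptions made for the sketch.

    ```python
    # Hypothetical sketch of speaker-specific lip-contour fitting (not the
    # authors' code): fit a parametric cubic spline to an ordered 3D lip
    # contour and select the smoothing value whose node points lie closest
    # to the raw scanned surface points.
    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_lip_contour(points, n_nodes=20, smooth_candidates=(0.0, 1e-4, 1e-3, 1e-2)):
        """points: (N, 3) array of ordered raw contour samples from the 3D scan.
        Returns the best spline representation (tck) and its mean node error."""
        x, y, z = points.T
        best = None
        for s in smooth_candidates:
            # Fit a parametric cubic spline with smoothing factor s.
            tck, u = splprep([x, y, z], s=s)
            # Sample n_nodes node points uniformly along the spline parameter.
            nodes = np.array(splev(np.linspace(0.0, 1.0, n_nodes), tck)).T
            # Error: mean distance from each node point to its nearest raw point.
            d = np.linalg.norm(nodes[:, None, :] - points[None, :, :], axis=2)
            err = d.min(axis=1).mean()
            if best is None or err < best[1]:
                best = (tck, err)
        return best
    ```

    In this sketch the "adjustment" is a simple search over smoothing values; an adaptive method could instead update the spline coefficients directly, but the error criterion — node-point distance to the raw 3D surface — is the same idea described in the abstract.
    
    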
    Original language: English
    Title of host publication: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 2010), Hakone, Kanagawa, Japan, 30 Sep. - 3 Oct. 2010
    Publisher: AVSP
    Pages: 101-106
    Number of pages: 6
    Publication status: Published - 2010
    Event: International Conference on Auditory-Visual Speech Processing
    Duration: 30 Sep 2010 → 3 Oct 2010

    Conference

    Conference: International Conference on Auditory-Visual Speech Processing
    Period: 30/09/10 → 3/10/10

    Keywords

    • three-dimensional imaging
    • talking heads

