主成分分析によるリアルタイムトーキングヘッドシステム

Translated title of the contribution: Real-time talking head system based on principal component analysis

Takaaki Kuratate, Keisuke Kinoshita

    Research output: Contribution to journal › Article

    Abstract

    In this paper we describe an animation system that can map a person's facial motion to a wide selection of realistic face models in real time. The motion is obtained from a motion capture system that measures the 3D positions of infrared LED markers placed on a subject's face. Using a 3D laser scanner, we also scan nine predefined postures specific to speech production for the same subject. Target faces are generated from 3D mesh points, also measured with a laser scanner. The transformation between the motion data and the static postures is computed by linear mapping and PCA (principal component analysis). With this method, only a small number of parameters are required to generate facial animation: three parameters corresponding to the dominant principal components to control face motion, and six parameters to control rigid head motion. By reducing the parameter space and distributing the processing between two networked computers, motion capture processing, parameter transformation, and high-quality realistic facial animation synthesis are made possible in real time.
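
    To make the dimensionality reduction concrete, the following is a minimal sketch in Python/NumPy (the paper does not specify an implementation language) of how a PCA basis fitted to a few key postures can compress a motion-capture frame into three face-motion parameters and reconstruct it. The function names, array shapes, and synthetic data are illustrative assumptions; the six rigid head-motion parameters and the linear mapping onto other face models are omitted.

```python
# Illustrative sketch of PCA-based parameter reduction, not the paper's code.
import numpy as np

def fit_pca_basis(postures: np.ndarray, n_components: int = 3):
    """Fit a PCA basis from a small set of scanned key postures.

    postures: (n_postures, n_markers * 3) flattened 3D marker positions,
              e.g. the nine speech-specific postures the abstract mentions.
    Returns the mean posture and the top principal components.
    """
    mean = postures.mean(axis=0)
    centered = postures - mean
    # SVD of the centered data yields the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]          # shapes: (dim,), (n_components, dim)

def encode(frame: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project one motion-capture frame onto the low-dimensional basis.
    The resulting coefficients are the compact face-motion parameters."""
    return basis @ (frame - mean)

def decode(params: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Reconstruct marker positions (or, via a further linear mapping,
    a target face's mesh deformation) from the compact parameters."""
    return mean + params @ basis

# Usage with synthetic data standing in for scanner/mocap measurements:
rng = np.random.default_rng(0)
postures = rng.normal(size=(9, 30))       # 9 postures, 10 markers x 3 coords
mean, basis = fit_pca_basis(postures, n_components=3)
frame = postures[0] + 0.1 * rng.normal(size=30)
params = encode(frame, mean, basis)       # just 3 numbers to transmit
print(params)                             # the three face-motion parameters
recon = decode(params, mean, basis)
print(np.linalg.norm(recon - frame))      # reconstruction error from compression
```

    Compressing each frame to three coefficients (plus six rigid-motion values) is what makes it practical to split the pipeline across two networked computers, since only this small parameter vector needs to cross the network each frame.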
    Original language: Japanese
    Journal: Journal of the Institute of Image Electronics Engineers of Japan
    Publication status: Published - 2005

    Keywords

    • computer animation
    • computer simulation
    • data processing
    • face perception
    • facial expression
    • three-dimensional imaging

