Evaluating a virtual speech cuer

Guillaume Gibert, Gérard Bailly, Frédéric Elisei

    Research output: Chapter in Book / Conference Paper › Conference Paper

    Abstract

    This paper presents the virtual speech cuer built in the context of the ARTUS project, which aims at watermarking hand and face gestures of a virtual animated agent in a broadcast audiovisual sequence. For deaf televiewers who master cued speech, the animated agent can then be superimposed, on demand and at reception, on the original broadcast as an alternative to subtitling. The paper presents the multimodal text-to-speech synthesis system and the first evaluation performed by deaf users.
    Original language: English
    Title of host publication: Proceedings of the 9th International Conference on Spoken Language Processing (INTERSPEECH 2006 - ICSLP)
    Publisher: ISCA
    Number of pages: 4
    Publication status: Published - 2006
    Event: International Conference on Spoken Language Processing
    Duration: 1 Jan 2006 → …

    Conference

    Conference: International Conference on Spoken Language Processing
    Period: 1/01/06 → …

    Keywords

    • cued speech
    • evaluation
    • audiovisual speech synthesis
    • deaf
    • lipreading
    • speech synthesis
