Speaker discriminability for visual speech modes

Jeesun Kim, Chris Davis, Christian Kroos, Harold Hill

    Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

    Abstract

    Does speech mode affect recognizing people from their visual speech? We examined 3D motion data from 4 talkers saying 10 sentences (twice). Speech was produced in noise, in quiet, or whispered. Principal Component Analyses (PCAs) were conducted and speaker classification was determined by Linear Discriminant Analysis (LDA). The first five PCs for the rigid motion and the first 10 PCs each for the non-rigid motion and the combined motion were input to a series of LDAs for all possible combinations of PCs that could be constructed from the retained PCs. The discriminant functions and classification coefficients were determined on the training data to predict the talker of the test data. Classification performance for both the in-noise and whispered speech modes was superior to that for the in-quiet mode. This superiority held even when only the first PC (jaw motion) was used, i.e., measures of jaw motion when speaking in noise or whispering hold promise for bimodal person recognition or verification.
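
    The pipeline the abstract describes (PCA over motion features, then one LDA per combination of retained PCs, trained on one subset of utterances and scored on the rest) can be sketched as below. This is a minimal illustration assuming scikit-learn and randomly generated stand-in data; the array shapes, the even/odd train-test split, and all variable names are assumptions, not the authors' code or data.

    from itertools import combinations

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Stand-in for the motion features: 4 talkers x 10 sentences x 2
    # repetitions = 80 utterances, 60 features each (assumed shapes).
    n_talkers, utts_per_talker, n_feats = 4, 20, 60
    X = rng.normal(size=(n_talkers * utts_per_talker, n_feats))
    y = np.repeat(np.arange(n_talkers), utts_per_talker)

    # Keep the first 10 PCs (the paper retains 5 for rigid motion and
    # 10 each for non-rigid and combined motion).
    pcs = PCA(n_components=10).fit_transform(X)

    # Even/odd split over utterances (the paper's actual split is not
    # specified in the abstract; this is an assumption).
    train = np.arange(len(y)) % 2 == 0

    best_acc, best_combo = 0.0, None
    # One LDA per possible combination of the retained PCs.
    for k in range(1, pcs.shape[1] + 1):
        for combo in combinations(range(pcs.shape[1]), k):
            cols = list(combo)
            lda = LinearDiscriminantAnalysis().fit(pcs[train][:, cols], y[train])
            acc = lda.score(pcs[~train][:, cols], y[~train])
            if acc > best_acc:
                best_acc, best_combo = acc, combo

    print(f"best talker accuracy {best_acc:.2f} using PCs {best_combo}")

    On random stand-in data the accuracy is at chance; the paper's finding is that with real motion data the in-noise and whispered modes classify talkers better than quiet speech, even from the first PC alone.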
    Original language: English
    Title of host publication: Proceedings of the 10th Annual Conference of the International Speech Communication Association (INTERSPEECH 2009): Brighton, U.K., 6-10 September, 2009
    Publisher: ISCA
    Pages: 2259-2262
    Number of pages: 4
    Publication status: Published - 2009
    Event: International Speech Communication Association. Conference
    Duration: 6 Sept 2009 → 10 Sept 2009

    Publication series

    ISSN (Print): 1990-9772

    Conference

    Conference: International Speech Communication Association. Conference
    Period: 6/09/09 → 10/09/09
