TY - GEN
T1 - Speaker discriminability for visual speech modes
AU - Kim, Jeesun
AU - Davis, Chris
AU - Kroos, Christian
AU - Hill, Harold
PY - 2009
Y1 - 2009
AB - Does speech mode affect recognizing people from their visual speech? We examined 3D motion data from 4 talkers saying 10 sentences (twice). Speech was produced in noise, in quiet, or whispered. Principal Component Analyses (PCAs) were conducted and speaker classification was determined by Linear Discriminant Analysis (LDA). The first five PCs for the rigid motion and the first 10 PCs each for the non-rigid motion and the combined motion were input to a series of LDAs for all possible combinations of PCs that could be constructed from the retained PCs. The discriminant functions and classification coefficients were determined on the training data to predict the talker of the test data. Classification performance for both the in-noise and whispered speech modes was superior to the in-quiet mode. This superiority held even when only the first PC (jaw motion) was used, i.e., measures of jaw motion when speaking in noise or whispering hold promise for bimodal person recognition or verification.
UR - http://handle.uws.edu.au:8081/1959.7/562439
UR - http://www.isca-speech.org/archive/interspeech_2009/index.html
M3 - Conference Paper
SP - 2259
EP - 2262
BT - Proceedings of the 10th Annual Conference of the International Speech Communication Association (INTERSPEECH 2009): Brighton, U.K., 6-10 September 2009
PB - ISCA
T2 - International Speech Communication Association. Conference
Y2 - 9 September 2012
ER -