TY - GEN
T1 - Human perception of emotional responses to changes in auditory attributes of humanoid agents
AU - Zou, Zhao
AU - Alnajjar, Fady
AU - Lwin, Michael
AU - Al Mahmud, Abdullah
AU - Swavaf, Muhammed
AU - Khan, Aila
AU - Mubin, Omar
PY - 2024
AB - Human-robot interaction has emerged as an increasingly prominent topic of discourse within the domain of robotic technologies. In this context, the interplay of visual and verbal cues plays an essential role in shaping user experiences. This study investigates the potential impact of alterations to the auditory attributes of humanoid agents, namely robots and avatars, on users’ emotional responses. Fourteen participants aged 18 to 35 were recruited to observe avatar videos with distinct auditory attributes: two voice pitches (alto and bass) and two speech styles (frozen and casual). A total of 13,600 data points were collected from the participants and analyzed using ANOVA. The findings demonstrate that users are emotionally responsive to avatar videos characterized by varying auditory attributes. This pilot study establishes a foundational framework to guide future research aimed at enhancing user experiences through the deliberate manipulation of the auditory attributes of humanoid robots and avatars.
UR - https://hdl.handle.net/1959.7/uws:75399
DO - 10.1007/978-981-99-8715-3_2
M3 - Conference Paper
SN - 9789819987153
SP - 13
EP - 21
BT - Social Robotics: 15th International Conference, ICSR 2023, Doha, Qatar, December 3-7, 2023, Proceedings, Part I
PB - Springer
T2 - International Conference on Social Robotics
Y2 - 2023/12/03
ER -