Publisher: The Editorial Committee of the Interdisciplinary Information Sciences
Abstract: The McGurk effect is a well-known phenomenon arising from human multi-modal information processing between auditory and visual speech perception. In this paper, we investigated the relation between the degree of the McGurk effect and the impressions evoked by speech sounds and moving images of a talker's face. As stimuli, uttered speech sounds were combined with moving images of a different talker's face. These stimuli were presented to observers, who were asked to report what the talker was saying. At the same time, they were asked to report their subjective impressions of the stimuli, with the perceived matching between the voice and the moving image used as the index of this judgment. The results showed that the matching between a voice and a talker's facial movements affected the degree of the McGurk effect, suggesting that audio–visual kansei information affects phoneme perception.