Publisher: Information and Media Technologies Editorial Board
Abstract: In this paper, we propose a new approach to emotion recognition. Most current emotion recognition algorithms rely on prosodic features, but their accuracy remains insufficient. We therefore focus on the phonetic features of speech, and in particular describe the effectiveness of Mel-frequency Cepstral Coefficients (MFCCs) as features for emotion recognition. Rather than modeling the dynamics of MFCCs over an utterance, we focus on the precise classification of individual MFCC feature vectors. To realize this approach, the proposed algorithm employs multi-template emotion classification of the analysis frames. Experimental evaluations show that the proposed algorithm achieves 66.4% accuracy in speaker-independent emotion recognition experiments for four specific emotions. This accuracy is higher than that obtained by conventional prosody-based and MFCC-based emotion recognition algorithms, which confirms the potential of the proposed algorithm.
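The abstract describes frame-level, multi-template classification of MFCC vectors followed by an utterance-level decision. The sketch below illustrates one plausible realization of that idea; it is not the authors' implementation. Assumptions not taken from the paper: librosa for MFCC extraction, 13 coefficients per frame, k-means cluster centroids as the per-emotion templates, Euclidean distance, and a majority vote over frames for the utterance-level label.

```python
# Sketch of frame-level, multi-template emotion classification over MFCC vectors.
# Template construction (k-means codebooks), the distance metric, and the voting
# rule are illustrative assumptions, not details from the paper.
import numpy as np
import librosa
from sklearn.cluster import KMeans


def mfcc_frames(path, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCC vectors for one utterance."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T


def train_templates(frames_per_emotion, n_templates=16):
    """Build several MFCC templates (cluster centroids) for each emotion.

    frames_per_emotion maps an emotion label to an (N, n_mfcc) array of
    training frames pooled from that emotion's utterances.
    """
    templates = {}
    for emotion, frames in frames_per_emotion.items():
        km = KMeans(n_clusters=n_templates, n_init=10, random_state=0).fit(frames)
        templates[emotion] = km.cluster_centers_
    return templates


def classify_utterance(frames, templates):
    """Label each analysis frame by its nearest template, then vote."""
    votes = {emotion: 0 for emotion in templates}
    for frame in frames:
        # Assign the frame to the emotion whose closest template is nearest.
        best = min(
            templates,
            key=lambda e: np.min(np.linalg.norm(templates[e] - frame, axis=1)),
        )
        votes[best] += 1
    return max(votes, key=votes.get)
```

Under these assumptions, an utterance is classified by extracting its MFCC frames with `mfcc_frames`, scoring each frame against every emotion's template set, and returning the emotion that wins the most frames.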