Journal: International Journal of Computer Science & Technology
Print ISSN: 2229-4333
Online ISSN: 0976-8491
Year of publication: 2018
Volume: 9
Issue: 3
Language: English
Publisher: Ayushmaan Technologies
Abstract: Facial Expression Recognition (FER) has essential real-world applications, including, but not limited to, Human-Computer Interaction (HCI), psychology, and telecommunications. It remains a challenging problem and an active research topic in computer vision, and many novel methods have been proposed to tackle automatic facial expression recognition. The main challenge is to decouple the rigid facial changes caused by head pose from the non-rigid changes caused by expression, as the two are non-linearly coupled in images. Another challenge is how to effectively exploit information from multiple views (or different facial features) in order to facilitate expression classification. Accounting for the fact that each view of a facial expression is just a different manifestation of the same underlying expression-related content is therefore expected to yield more effective classifiers for the target task. A facial expression image sequence contains not only appearance information in the spatial domain but also evolution details in the temporal domain, and combining the two can further enhance recognition performance. Although this dynamic information is useful, capturing it reliably and robustly is challenging. For instance, a facial expression sequence normally consists of one or more onset, apex, and offset phases; to capture temporal information and make the temporal information of training and query sequences comparable, correspondences between these phases need to be established. Because facial actions evolve differently over time across subjects, it remains an open issue how a common temporal feature for each expression can be encoded across the population while suppressing subject-specific facial shape variations. In this work, a new dynamic facial expression recognition process is created using Efficient Distance Measures for emotion detection.
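The abstract does not specify which distance measures are used or how temporal phases are aligned. As a purely illustrative sketch (not the authors' method), the following Python example shows one common way to compare expression sequences of different lengths: dynamic time warping (DTW) over per-frame feature vectors, followed by nearest-neighbour emotion classification. The feature dimensionality, sequence data, and emotion labels below are hypothetical placeholders.

```python
# Minimal sketch, assuming each frame is already reduced to a feature vector
# (e.g., normalized facial-landmark coordinates). Not the paper's algorithm.
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping distance between two frame-feature sequences.

    Warping the time axis establishes correspondences between the onset,
    apex, and offset phases of the two sequences, making sequences of
    different lengths and speeds comparable.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # per-frame Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],       # skip a frame in seq_b
                                 cost[i, j - 1],       # skip a frame in seq_a
                                 cost[i - 1, j - 1])   # match frames
    return float(cost[n, m])

def classify_sequence(query: np.ndarray,
                      train_sequences: list[np.ndarray],
                      train_labels: list[str]) -> str:
    """Nearest-neighbour emotion label under the DTW distance."""
    distances = [dtw_distance(query, ref) for ref in train_sequences]
    return train_labels[int(np.argmin(distances))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: three labelled training sequences of 8-D frame features.
    train_sequences = [rng.normal(size=(20, 8)),
                       rng.normal(loc=1.0, size=(25, 8)),
                       rng.normal(loc=-1.0, size=(18, 8))]
    train_labels = ["happiness", "surprise", "anger"]
    query = rng.normal(loc=1.0, size=(22, 8))  # unlabelled query sequence
    print(classify_sequence(query, train_sequences, train_labels))
```

This sketch only illustrates the general idea of distance-based sequence matching; the paper's actual features, distance measures, and classifier may differ.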