Abstract: This paper presents a new system for recognizing and imitating a set of facial expressions using the visual information acquired by the robot. In addition, the proposed system detects and imitates the interlocutor's head pose and motion. The approach described in this paper is used for human-robot interaction (HRI), and it consists of two consecutive stages: i) a visual analysis of the human facial expression in order to estimate the interlocutor's emotional state (i.e., happiness, sadness, anger, fear, neutral) using a Bayesian approach, which is achieved in real time; and ii) an estimate of the user's head pose and motion. This information updates the robot's knowledge about the people in its field of view and thus allows the robot to use it for future actions and interactions. In this paper, both the human facial expression and the head motion are imitated by Muecas, a 12 degree-of-freedom (DOF) robotic head. This paper also introduces the concept of human and robot facial expression models, which are included in a new cognitive module that builds and updates selective representations of the robot and the agents in its environment to enhance future HRI. Experimental results demonstrate the quality of the detection and imitation in different scenarios with Muecas.
Keywords: Robotics; Facial Expression Recognition; Imitation; Human-Robot Interaction.