Abstract: The Emognition dataset is dedicated to testing methods for emotion recognition (ER) from physiological responses and facial expressions. We collected data from 43 participants who watched short film clips eliciting nine discrete emotions: amusement, awe, enthusiasm, liking, surprise, anger, disgust, fear, and sadness. Three wearables were used to record physiological data: EEG, BVP (2x), HR, EDA, SKT, ACC (3x), and GYRO (2x), in parallel with upper-body videos. After each film clip, participants completed two types of self-reports: (1) related to the nine discrete emotions, and (2) related to three affective dimensions: valence, arousal, and motivation. The obtained data facilitates various ER approaches, e.g., multimodal ER, EEG- vs. cardiovascular-based ER, and transitions from discrete to dimensional emotion representations. The technical validation indicated that watching the film clips elicited the targeted emotions and confirmed the high quality of the recorded signals.