Abstract: This work aimed to develop a novel multimodal model that integrates electroencephalogram (EEG) signals recorded under neutral, negative, and positive auditory stimulation to distinguish depressed patients from healthy controls. A depression recognition model was constructed by fusing EEG data from multiple modalities with a feature-level fusion approach. EEG recordings were collected from 86 depressed patients and 92 normal controls while they were exposed to the different auditory stimuli. Linear and nonlinear features were then extracted and selected from the EEG data of each modality to generate modality-specific features. A direct concatenation approach was employed to combine the EEG features from the different modalities into a global feature vector and to identify a set of robust features. The classification accuracies of three classifiers, namely the k-nearest neighbor (KNN), the decision tree (DT), and the support vector machine (SVM), were compared. The KNN classifier achieved the highest classification accuracy of 86.98% when the positive and negative audio stimuli were combined, suggesting that the fused modality can attain a higher depression detection rate than the individual modality schemes. In addition, genetic-algorithm-based feature weighting was applied to further improve the overall performance of the recognition framework. This work may contribute to the development of a new method for diagnosing depression.
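The feature-level fusion and classifier comparison described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature matrices are random placeholders, and the feature dimensions and classifier hyperparameters are assumptions; only the cohort sizes (86 patients, 92 controls) and the choice of KNN, DT, and SVM come from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical modality-specific EEG feature matrices (subjects x features)
# for the positive and negative auditory-stimulus conditions.
n_subjects = 178  # 86 depressed patients + 92 normal controls
X_pos = rng.normal(size=(n_subjects, 20))
X_neg = rng.normal(size=(n_subjects, 20))
y = np.array([1] * 86 + [0] * 92)  # 1 = depressed, 0 = control

# Feature-level fusion: direct concatenation into one global feature vector.
X_fused = np.hstack([X_pos, X_neg])

# Compare the three classifiers mentioned in the abstract with 5-fold CV.
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X_fused, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

On random features the accuracies hover around chance level; with real modality-specific EEG features, the fused vector `X_fused` is what would be passed on to feature selection and genetic-algorithm weighting.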