
Article Information

  • Title: DSTL: Solution to Limitation of Small Corpus in Speech Emotion Recognition
  • Authors: Ying Chen; Zhongzhe Xiao; Xiaojun Zhang
  • Journal: Journal of Artificial Intelligence Research
  • Print ISSN: 1076-9757
  • Year: 2019
  • Volume: 66
  • Pages: 381-410
  • Publisher: American Association of Artificial Intelligence
  • Abstract: Traditional machine learning methods share a common hypothesis: training and testing datasets must lie in a common feature space with the same distribution. In reality, however, labeled target data may be scarce, so that the target domain does not share the same feature space or distribution as the available training set (source domain). To address this domain mismatch, we propose a Dual-Subspace Transfer Learning (DSTL) framework that considers both the common and the specific information of the two domains. In DSTL, a latent common subspace is first learned to preserve the data properties and reduce the discrepancy between domains. Then, we propose a mapping strategy to transfer the source-specific information to the target subspace. The integration of the domain-common and domain-specific information constitutes the proposed DSTL framework. In comparison to state-of-the-art works, the main contribution of our work is that the DSTL framework not only considers the commonalities but also exploits the specific information. Experiments on three emotional speech corpora verify the effectiveness of our approach. The results show that methods which include both domain-common and domain-specific information outperform baseline methods that exploit only the domain commonalities.
  • Keywords: machine learning; speech processing; data mining
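The abstract outlines DSTL only at a high level: learn a latent subspace common to both domains, then map the source-specific information into the target subspace. As a rough illustration only (this is not the paper's actual formulation; the PCA projection, the ridge-regularized linear map, and all variable names below are assumptions), the two-step idea can be sketched on toy feature matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for source/target speech-emotion features
# (rows = utterances, columns = acoustic features).
Xs = rng.normal(size=(200, 20))          # source domain
Xt = rng.normal(loc=0.5, size=(50, 20))  # target domain, shifted distribution

# Step 1: learn a shared latent subspace from both domains
# (here simply PCA on the pooled, mean-centered data).
X = np.vstack([Xs, Xt])
X = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W_common = Vt[:10].T                     # 20-dim features -> 10-dim subspace

Zs = Xs @ W_common                       # source in the common subspace
Zt = Xt @ W_common                       # target in the common subspace

# Step 2: treat what the common subspace misses as source-specific
# information, and map it into the shared coordinates with a
# ridge-regularized least-squares fit.
Rs = Xs - (Xs @ W_common) @ W_common.T   # source-specific residual
lam = 1e-2
M = np.linalg.solve(Rs.T @ Rs + lam * np.eye(Rs.shape[1]), Rs.T @ Zs)

# Augmented source representation: common + transferred specific parts,
# which a downstream emotion classifier could be trained on.
Zs_aug = np.hstack([Zs, Rs @ M])
```

The sketch mirrors only the structure claimed in the abstract, namely that the final representation integrates domain-common and domain-specific views rather than discarding the specific part.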
© National Center for Philosophy and Social Sciences Documentation. All rights reserved.