Basic Article Information

  • Title: Robust Training under Linguistic Adversity
  • Authors: Yitong Li; Trevor Cohn; Timothy Baldwin
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2017
  • Volume: 2017
  • Pages: 21-27
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our method achieves better overall performance.
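
The abstract describes exposing a model to corrupted text at training time as a form of data augmentation. The sketch below is a minimal illustration of that general idea using a toy lexical corruption (random synonym substitution); the synonym table, the corruption probability `p`, and the helper names `corrupt` and `augmented_batches` are illustrative assumptions for this sketch, not the authors' actual corruption methods or implementation.

```python
import random

# Toy synonym table standing in for a real lexical resource (e.g. WordNet);
# the paper's corruption flavours (lexical semantic, syntactic) are richer.
SYNONYMS = {
    "good": ["great", "fine", "decent"],
    "bad": ["poor", "awful", "terrible"],
    "movie": ["film", "picture"],
}


def corrupt(tokens, p=0.1, rng=random):
    """Return a corrupted copy of a token list: with probability p, replace
    each token that has synonyms with a randomly chosen synonym."""
    out = []
    for tok in tokens:
        if tok.lower() in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[tok.lower()]))
        else:
            out.append(tok)
    return out


def augmented_batches(dataset, epochs=5, p=0.1):
    """Yield (tokens, label) pairs in which each training example is replaced
    by a corrupted variant, so the model sees noisy text during training."""
    for _ in range(epochs):
        for tokens, label in dataset:
            yield corrupt(tokens, p=p), label


if __name__ == "__main__":
    # Tiny sentiment-style example; labels are 1 = positive, 0 = negative.
    data = [(["this", "movie", "is", "good"], 1),
            (["a", "bad", "film"], 0)]
    for x, y in augmented_batches(data, epochs=1, p=0.5):
        print(x, y)
```

In this style of augmentation the corrupted examples keep their original labels, so the model is trained to map linguistically plausible variants of a sentence to the same prediction; the convolutional sentiment model itself is unchanged.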