
Article Information

  • Title: We Need To Talk About Random Splits
  • Authors: Anders Søgaard, Sebastian Ebert, Jasmijn Bastings
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2021
  • Volume: 2021
  • Pages: 1823-1832
  • DOI: 10.18653/v1/2021.eacl-main.156
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: (CITATION) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. We can also split data in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift; this is known as the covariate shift assumption. In NLP, however, even worst-case splits, maximizing bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.
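The length-biased split described in the abstract (training on short sentences, evaluating on long ones) instantiates covariate shift: the input distribution p(x) differs between training and test data while the labeling p(y|x) is unchanged. Below is a minimal Python sketch contrasting a random split with such a biased split; the toy corpus, function names, and token-count length heuristic are illustrative assumptions, not the authors' exact protocol.

```python
import random


def random_split(sentences, test_frac=0.2, seed=0):
    """Standard random split: shuffle the data, then hold out a test fraction."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]


def length_biased_split(sentences, test_frac=0.2):
    """Biased split: train on the shortest sentences, test on the longest.

    This simulates covariate shift: p(x) differs between train and test,
    while the (unmodeled) labels would remain tied to x as before.
    """
    by_length = sorted(sentences, key=lambda s: len(s.split()))
    cut = int(len(by_length) * (1 - test_frac))
    return by_length[:cut], by_length[cut:]


if __name__ == "__main__":
    corpus = [
        "tiny .",
        "short sentence .",
        "medium length sentence here .",
        "a slightly longer example sentence than before .",
        "this final one is a fairly long sentence with many tokens in it .",
    ]
    train, test = length_biased_split(corpus, test_frac=0.4)
    print("train:", train)  # shortest 60% of the corpus
    print("test :", test)   # longest 40% of the corpus
```

The abstract's worst-case splits push this idea further by maximizing the bias between the two halves; sorting on a single surface feature such as sentence length is the simplest member of that family.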