
Article Information

  • Title: Memorization vs. Generalization: Quantifying Data Leakage in NLP Performance Evaluation
  • Authors: Aparna Elangovan; Jiayuan He; Karin Verspoor
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2021
  • Volume: 2021
  • Pages: 1325-1335
  • DOI: 10.18653/v1/2021.eacl-main.113
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Public datasets are often used to evaluate the efficacy and generalizability of state-of-the-art methods for many tasks in natural language processing (NLP). However, overlap between the train and test datasets can lead to inflated results, inadvertently evaluating the model's ability to memorize and interpreting it as the ability to generalize. In addition, such datasets may not provide an effective indicator of the performance of these methods in real-world scenarios. We identify leakage of training data into test data on several publicly available datasets used to evaluate NLP tasks, including named entity recognition and relation extraction, and study them to assess the impact of that leakage on the model's ability to memorize versus generalize.
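The core concern of the abstract is train/test overlap. As a minimal illustrative sketch (not the paper's actual methodology, which analyzes leakage in more depth), one simple baseline is to measure the fraction of test instances that appear verbatim in the training data; the names `overlap_ratio`, `train`, and `test` below are hypothetical:

```python
def overlap_ratio(train_sentences, test_sentences):
    """Fraction of test sentences that also occur verbatim in the training set."""
    train_set = set(train_sentences)  # hash set for O(1) membership checks
    leaked = sum(1 for s in test_sentences if s in train_set)
    return leaked / len(test_sentences) if test_sentences else 0.0

# Toy example: one of the two test sentences is leaked from training data.
train = ["Paris is in France .", "Obama visited Berlin ."]
test = ["Obama visited Berlin .", "Tokyo is in Japan ."]
print(overlap_ratio(train, test))  # 0.5
```

Exact string matching is only a lower bound on leakage; near-duplicate or paraphrased instances would require fuzzier matching to detect.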