
Article Information

  • Title: Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data
  • Authors: Sanjeev Kumar Karn; Francine Chen; Yan-Ying Chen
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2021
  • Volume: 2021
  • Pages: 245-254
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Interleaved texts, in which posts belonging to different threads occur in sequence, are common in online chat, making it time-consuming to obtain a quick overview of the discussions. Existing systems first disentangle the posts by thread and then extract summaries from those threads. A major issue with such systems is error propagation from the disentanglement component. While an end-to-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an end-to-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that by fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a traditional two-step system by 22%. We also compare against transformer models and observe that pretraining both the encoder and decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset.
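The abstract describes pretraining on synthetic interleaved texts, i.e. sequences in which posts from several threads are mixed together while each thread's internal order is preserved. The paper does not specify the exact generation procedure here, so the function below is an illustrative sketch under an assumed uniform-random interleaving policy; the names `make_interleaved`, `threads`, and `thread_ids` are hypothetical, not from the paper.

```python
import random

def make_interleaved(threads, seed=0):
    """Mix posts from several threads into one sequence, preserving the
    chronological order within each thread (as interleaved chat posts do).

    threads: list of threads, each a list of post strings in order.
    Returns (posts, thread_ids), where thread_ids[k] is the index of the
    thread that posts[k] came from (usable as a disentanglement label).
    """
    rng = random.Random(seed)
    cursors = [0] * len(threads)  # next unemitted post per thread
    posts, thread_ids = [], []
    while any(c < len(t) for c, t in zip(cursors, threads)):
        # Pick a random thread that still has posts left to emit.
        live = [i for i, t in enumerate(threads) if cursors[i] < len(t)]
        i = rng.choice(live)
        posts.append(threads[i][cursors[i]])
        thread_ids.append(i)
        cursors[i] += 1
    return posts, thread_ids
```

Pairing each synthetic sequence with per-thread reference summaries would then give the supervision needed to pretrain a hierarchical encoder-decoder end to end, without a disentanglement step at inference time.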
Copyright © National Center for Philosophy and Social Sciences Documentation (国家哲学社会科学文献中心)