
Article Information

  • Title: Training Tips for the Transformer Model
  • Authors: Martin Popel; Ondřej Bojar
  • Journal: The Prague Bulletin of Mathematical Linguistics
  • Print ISSN: 0032-6585
  • Electronic ISSN: 1804-0462
  • Year: 2018
  • Volume: 110
  • Issue: 1
  • Pages: 43-70
  • DOI: 10.2478/pralin-2018-0002
  • Language: English
  • Publisher: Walter de Gruyter GmbH
  • Abstract: This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra "more data and larger models", we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging. We hope that our observations will allow others to get better results given their particular hardware and data constraints.
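Two of the techniques named in the abstract, learning-rate warmup and checkpoint averaging, can be illustrated with a minimal sketch. This is not the authors' code: the schedule below is the warmup/inverse-square-root rule from Vaswani et al. (2017) with their illustrative defaults (d_model=512, warmup_steps=4000), and average_checkpoints assumes checkpoints are stored as plain dicts of NumPy arrays rather than a specific framework format.

import numpy as np

def noam_learning_rate(step, d_model=512, warmup_steps=4000):
    # Linear warmup for the first warmup_steps updates, then decay
    # proportionally to the inverse square root of the step number.
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

def average_checkpoints(checkpoints):
    # Element-wise average of several checkpoints, each given as a
    # dict mapping variable names to NumPy arrays of equal shape.
    names = checkpoints[0].keys()
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
            for name in names}

# Example: the learning rate rises until warmup_steps, then decays.
for s in (100, 4000, 100000):
    print(s, noam_learning_rate(s))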