
Article Information

  • Title: Cross-lingual Visual Pre-training for Multimodal Machine Translation
  • Full text: download
  • Authors: Ozan Caglayan; Menekse Kuyu; Mustafa Sercan Amac
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2021
  • Volume: 2021
  • Pages: 1317-1324
  • DOI: 10.18653/v1/2021.eacl-main.112
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Pre-trained language models have been shown to substantially improve performance in many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
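The abstract describes two masking objectives combined during pre-training: translation language modelling (masking tokens in a concatenated parallel sentence pair) and masked region classification (masking image region features and predicting their object labels). The sketch below illustrates the masking step of each objective; the function names, the zero-vector replacement for masked regions, and the masking probabilities are illustrative assumptions, not the paper's exact implementation.

```python
import random

MASK = "[MASK]"

def mask_tlm_pair(src_tokens, tgt_tokens, p=0.15, rng=None):
    """Translation language modelling (TLM)-style masking: concatenate a
    parallel sentence pair and mask tokens in both languages, so a model
    can attend across languages to recover them. Returns the masked
    stream and per-position prediction targets (None = no loss)."""
    rng = rng or random.Random(0)
    stream = src_tokens + tgt_tokens
    masked, targets = [], []
    for tok in stream:
        if rng.random() < p:
            masked.append(MASK)   # hide the token
            targets.append(tok)   # model must predict the original
        else:
            masked.append(tok)
            targets.append(None)  # no loss at this position
    return masked, targets

def mask_regions(region_feats, p=0.15, rng=None):
    """Masked-region-classification-style masking: zero out some image
    region feature vectors; the model must classify the object category
    of each masked region from the remaining visual/textual context.
    region_feats is a list of (feature_vector, object_label) pairs."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for feat, obj_label in region_feats:
        if rng.random() < p:
            masked.append([0.0] * len(feat))  # assumed zero-vector mask
            labels.append(obj_label)
        else:
            masked.append(feat)
            labels.append(None)
    return masked, labels
```

In a three-way parallel batch (source caption, target caption, image regions), both maskings would be applied jointly so each modality can help recover the others.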