
Article Information

  • Title: Interpretability for Morphological Inflection: from Character-level Predictions to Subword-level Rules
  • Authors: Tatyana Ruzsics; Olga Sozinova; Ximena Gutierrez-Vasques
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Publication year: 2021
  • Volume: 2021
  • Pages: 3189-3201
  • DOI: 10.18653/v1/2021.eacl-main.278
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Neural models for morphological inflection have recently achieved very strong results. However, their interpretation remains challenging. To address this, we propose a simple linguistically-motivated variant of the encoder-decoder model with attention. In our model, the character-level cross-attention mechanism is complemented by a self-attention module over substrings of the input. We design a novel approach for extracting patterns from attention weights in order to interpret what the model learns. We apply our methodology to analyze the model’s decisions on three typologically different languages and find that a) our pattern extraction method, applied to cross-attention weights, uncovers variation in the form of inflection morphemes, b) pattern extraction from self-attention shows triggers for such variation, and c) both types of patterns align closely with grammatical inflection classes and class-assignment criteria in all three languages. Additionally, we find that the proposed encoder attention component leads to consistent performance improvements over a strong baseline.
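The abstract's idea of reading inflection morphemes off attention weights can be illustrated with a toy sketch. The paper's actual extraction procedure is not reproduced here; the following is a hypothetical simplification in which each output character is aligned to its most-attended input character (argmax over cross-attention weights), and runs of consecutively aligned output characters are grouped into candidate stem/suffix patterns. All function names, the toy attention matrix, and the German "Hund" → "Hunde" example are illustrative assumptions, not the authors' code or data.

```python
# Hypothetical sketch: attn[i][j] is the cross-attention weight the i-th
# output character places on the j-th input character.

def argmax_alignment(attn):
    """Map each output position to its most-attended input position."""
    return [max(range(len(row)), key=lambda j: row[j]) for row in attn]

def group_spans(alignment, output):
    """Merge output characters whose aligned input positions advance by
    exactly one into a single (substring, input-span) pattern; start a
    new pattern whenever the alignment stalls or jumps."""
    patterns, start = [], 0
    for i in range(1, len(alignment) + 1):
        if i == len(alignment) or alignment[i] != alignment[i - 1] + 1:
            span = (alignment[start], alignment[i - 1])
            patterns.append(("".join(output[start:i]), span))
            start = i
    return patterns

# Toy example (illustrative weights): inflecting "Hund" -> "Hunde".
# The plural suffix "e" attends mostly to the stem-final "d".
attn = [
    [0.90, 0.05, 0.03, 0.02],  # H -> H
    [0.05, 0.90, 0.03, 0.02],  # u -> u
    [0.02, 0.03, 0.90, 0.05],  # n -> n
    [0.02, 0.03, 0.05, 0.90],  # d -> d
    [0.02, 0.03, 0.05, 0.90],  # e -> d  (suffix anchored at stem end)
]
align = argmax_alignment(attn)
print(align)                            # [0, 1, 2, 3, 3]
print(group_spans(align, list("Hunde")))  # [('Hund', (0, 3)), ('e', (3, 3))]
```

Under this toy grouping rule, the copied stem and the inserted suffix fall into separate patterns, which is the kind of morpheme-level structure the abstract describes recovering from attention weights.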