
Article Information

  • Title: Contextualizer: Connecting the Dots of Context with Second-Order Attention
  • Authors: Diego Maupomé; Marie-Jean Meurs
  • Journal: Information
  • e-ISSN: 2078-2489
  • Year: 2022
  • Volume: 13
  • Issue: 6
  • Pages: 290
  • DOI: 10.3390/info13060290
  • Language: English
  • Publisher: MDPI
  • Abstract: Composing the representation of a sentence from the tokens it comprises is difficult, because such a representation needs to account for how the words present relate to each other. The Transformer architecture does this by iteratively changing token representations with respect to one another, at the cost of computation that grows quadratically with the number of tokens. Furthermore, the scalar attention mechanism used by Transformers requires multiple sets of parameters to operate over different features. The present paper proposes a lighter algorithm for sentence representation whose complexity is linear in sequence length. This algorithm begins with a presumably erroneous value of a context vector and adjusts this value with respect to the tokens at hand. To achieve this, representations of words are built by combining their symbolic embeddings with a positional encoding into single vectors. The algorithm then iteratively weights and aggregates these vectors using a second-order attention mechanism, which allows different feature pairs to interact with each other separately. Our models report strong results on several well-known text classification tasks.
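The iterative procedure the abstract describes can be sketched roughly as follows. This is a speculative illustration, not the authors' implementation: the function names, the bilinear form used for the "second-order" interaction, and the softmax update rule are all assumptions made for the sake of the example.

```python
import numpy as np

def second_order_scores(context, tokens, W):
    # Bilinear scoring: each token's score arises from pairwise feature
    # interactions between the token and the current context vector
    # (token^T W context). This is one plausible reading of
    # "second-order attention", not the paper's stated formula.
    return tokens @ W @ context  # shape: (n_tokens,)

def contextualizer_sketch(tokens, W, n_iters=3):
    """Iteratively refine a context vector from token vectors.

    Each iteration touches every token once, so the cost per iteration
    is linear in sequence length. All names and the update rule here
    are illustrative assumptions.
    """
    d = tokens.shape[1]
    context = np.zeros(d)            # "presumably erroneous" initial value
    for _ in range(n_iters):
        scores = second_order_scores(context, tokens, W)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()     # softmax over tokens
        context = weights @ tokens   # weighted aggregation of token vectors
    return context

# In the paper, each token vector would combine a symbolic embedding with
# a positional encoding; random vectors stand in for them here.
rng = np.random.default_rng(0)
toks = rng.normal(size=(5, 8))       # 5 tokens, 8 features each
W = rng.normal(size=(8, 8))          # second-order interaction parameters
ctx = contextualizer_sketch(toks, W)
```

Note that on the first iteration the zero context yields uniform weights, so the initial update is simply the mean of the token vectors; subsequent iterations sharpen the weighting around that estimate.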