Article Information

  • Title: A Bidirectional Context Embedding Transformer for Automatic Speech Recognition
  • Authors: Lyuchao Liao; Francis Afedzie Kwofie; Zhifeng Chen
  • Journal: Information
  • E-ISSN: 2078-2489
  • Year: 2022
  • Volume: 13
  • Issue: 2
  • Pages: 69
  • DOI: 10.3390/info13020069
  • Language: English
  • Publisher: MDPI
  • Abstract: Transformers have become popular for building end-to-end automatic speech recognition (ASR) systems. However, transformer ASR systems are usually trained to produce output sequences in left-to-right order, disregarding the right-to-left context. Existing transformer-based ASR systems that employ two decoders for bidirectional decoding are complex in terms of computation and optimization, while the existing ASR transformer with a single decoder for bidirectional decoding requires extra methods (such as a self-mask) to resolve the problem of information leakage in the attention mechanism. This paper explores options for developing a speech transformer that uses a single decoder equipped with bidirectional context embedding (BCE) for bidirectional decoding. The decoding direction, which is set at the input level, enables the model to attend to different directional contexts without extra decoders and also alleviates information leakage. The effectiveness of this method was verified with a bidirectional beam search that generates bidirectional output sequences and selects the best hypothesis according to the output score. We achieved a word error rate (WER) of 7.65%/18.97% on the clean/other LibriSpeech test sets, outperforming the left-to-right decoding style in our work by 3.17%/3.47%. The results are also close to, or better than, other state-of-the-art end-to-end models.
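The abstract describes setting the decoding direction "at the input level" so that a single decoder can attend to either directional context. Below is a minimal PyTorch-style sketch of one way that idea could look, not the authors' implementation: the class name BCEDecoderInput, the embedding sizes, and the reverse-then-causal-mask treatment of right-to-left targets are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BCEDecoderInput(nn.Module):
    """Embeds target tokens and prefixes a learned direction embedding
    (a hypothetical sketch of the bidirectional context embedding idea)."""

    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Two learned direction vectors: 0 = left-to-right, 1 = right-to-left.
        self.dir_emb = nn.Embedding(2, d_model)

    def forward(self, tokens: torch.Tensor, direction: int) -> torch.Tensor:
        # tokens: (batch, seq_len) target token ids.
        # For right-to-left decoding the target sequence is reversed, so the
        # standard causal mask still prevents information leakage.
        if direction == 1:
            tokens = tokens.flip(dims=[1])
        x = self.token_emb(tokens)                       # (batch, seq_len, d_model)
        idx = torch.full((tokens.size(0), 1), direction,
                         dtype=torch.long, device=tokens.device)
        d = self.dir_emb(idx)                            # (batch, 1, d_model)
        # Prepending the direction embedding sets the decoding direction at
        # the input level; the same decoder weights serve both directions.
        return torch.cat([d, x], dim=1)                  # (batch, seq_len + 1, d_model)

# Example: one forward pass in each direction with toy sizes.
emb = BCEDecoderInput(vocab_size=1000, d_model=256)
tokens = torch.randint(0, 1000, (2, 7))
l2r = emb(tokens, direction=0)   # left-to-right decoder input, shape (2, 8, 256)
r2l = emb(tokens, direction=1)   # right-to-left decoder input, shape (2, 8, 256)
```

Under this reading, the bidirectional beam search in the abstract would run the same decoder twice, once per direction token, reverse the right-to-left hypotheses back into natural order, and keep whichever hypothesis has the best output score.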