
Article Information

  • Title: Neural Models of Text Normalization for Speech Applications
  • Authors: Hao Zhang; Richard Sproat; Axel H. Ng
  • Journal: Computational Linguistics
  • Print ISSN: 0891-2017
  • Electronic ISSN: 1530-9312
  • Year: 2019
  • Volume: 45
  • Issue: 2
  • Pages: 293-337
  • DOI: 10.1162/coli_a_00349
  • Language: English
  • Publisher: MIT Press
  • Abstract: Machine learning, including neural network techniques, has been applied to virtually every domain in natural language processing. One problem that has been somewhat resistant to effective machine learning solutions is text normalization for speech applications such as text-to-speech synthesis (TTS). In this application, one must decide, for example, that 123 is verbalized as "one hundred twenty three" in "123 pages" but as "one twenty three" in "123 King Ave." For this task, state-of-the-art industrial systems depend heavily on hand-written language-specific grammars. We propose neural network models that treat text normalization for TTS as a sequence-to-sequence problem, in which the input is a text token in context, and the output is the verbalization of that token. We find that the most effective model, in accuracy and efficiency, is one where the sentential context is computed once and the results of that computation are combined with the computation of each token in sequence to compute the verbalization. This model allows for a great deal of flexibility in terms of representing the context, and also allows us to integrate tagging and segmentation into the process. These models perform very well overall, but occasionally they will predict wildly inappropriate verbalizations, such as reading "3 cm" as "three kilometers". Although rare, such verbalizations are a major issue for TTS applications. We thus use finite-state covering grammars to guide the neural models, either during training and decoding, or just during decoding, away from such "unrecoverable" errors. Such grammars can largely be learned from data.
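
The abstract's two central ideas lend themselves to small illustrations. First, a minimal sketch (in PyTorch, with hypothetical class names and dimensions; not the authors' actual architecture) of the shared-context arrangement: the sentential context is encoded once per sentence, and that single computation is reused when verbalizing each token.

    # Sketch only: encode the sentence context once, then condition every
    # token's verbalization on the shared context vector. Names and sizes
    # are illustrative assumptions, not the paper's model.
    import torch
    import torch.nn as nn

    class ContextualNormalizer(nn.Module):
        def __init__(self, in_vocab, out_vocab, dim=256):
            super().__init__()
            self.embed = nn.Embedding(in_vocab, dim)
            # Context encoder: run once per sentence.
            self.context_rnn = nn.GRU(dim, dim, batch_first=True,
                                      bidirectional=True)
            # Per-token decoder, conditioned on the shared context.
            self.decoder_rnn = nn.GRU(dim + 2 * dim, dim, batch_first=True)
            self.out = nn.Linear(dim, out_vocab)

        def forward(self, sentence_ids, token_ids):
            # sentence_ids: (1, sent_len); token_ids: (n_tokens, tok_len)
            ctx, _ = self.context_rnn(self.embed(sentence_ids))
            # Pool to one context vector and share it across all tokens.
            ctx_vec = ctx.mean(dim=1).expand(token_ids.size(0), -1)
            tok = self.embed(token_ids)
            ctx_rep = ctx_vec.unsqueeze(1).expand(-1, tok.size(1), -1)
            h, _ = self.decoder_rnn(torch.cat([tok, ctx_rep], dim=-1))
            return self.out(h)  # per-position logits over output vocabulary

Second, a toy illustration of decoding under a covering grammar: the neural model proposes ranked candidates, but only verbalizations the grammar licenses for the input token may be emitted, so "3 cm" can never surface as "three kilometers". The tables and helper names here are hypothetical stand-ins for the paper's finite-state machinery.

    # Toy covering grammar for number + measure tokens (illustrative only).
    NUMBERS = {"1": "one", "2": "two", "3": "three"}
    MEASURES = {"cm": "centimeters", "km": "kilometers", "mm": "millimeters"}

    def allowed_verbalizations(token):
        """Verbalizations the toy grammar licenses for a token like '3 cm'."""
        num, _, unit = token.partition(" ")
        if num in NUMBERS and unit in MEASURES:
            return {f"{NUMBERS[num]} {MEASURES[unit]}"}
        return None  # token not covered; accept the neural output as-is

    def constrained_decode(token, ranked_candidates):
        """Pick the highest-ranked neural candidate the grammar allows."""
        allowed = allowed_verbalizations(token)
        if allowed is None:
            return ranked_candidates[0]
        for cand in ranked_candidates:
            if cand in allowed:
                return cand
        return next(iter(allowed))  # fall back to a grammatical reading

    # The "unrecoverable" error is filtered out at decoding time:
    print(constrained_decode("3 cm", ["three kilometers", "three centimeters"]))
    # -> three centimeters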