
Article Information

  • Title: Arousal States as a Key Source of Variability in Speech Perception and Learning
  • Authors: William L. Schuerman; Bharath Chandrasekaran; Matthew K. Leonard
  • Journal: Languages
  • Print ISSN: 2226-471X
  • Publication year: 2022
  • Volume: 7
  • Issue: 1
  • Pages: 19
  • DOI: 10.3390/languages7010019
  • Language: English
  • Publisher: MDPI Publishing
  • Abstract: The human brain exhibits the remarkable ability to categorize speech sounds into distinct, meaningful percepts, even in challenging tasks like learning non-native speech categories in adulthood and hearing speech in noisy listening conditions. In these scenarios, there is substantial variability in perception and behavior, both across individual listeners and individual trials. While there has been extensive work characterizing stimulus-related and contextual factors that contribute to variability, recent advances in neuroscience are beginning to shed light on another potential source of variability that has not been explored in speech processing. Specifically, there are task-independent, moment-to-moment variations in neural activity in broadly distributed cortical and subcortical networks that affect how a stimulus is perceived on a trial-by-trial basis. In this review, we discuss factors that affect speech sound learning and moment-to-moment variability in perception, particularly arousal states (neurotransmitter-dependent modulations of cortical activity). We propose that a more complete model of speech perception and learning should incorporate subcortically mediated arousal states that alter behavior in ways that are distinct from, yet complementary to, top-down cognitive modulations. Finally, we discuss a novel neuromodulation technique, transcutaneous auricular vagus nerve stimulation (taVNS), which is particularly well-suited to investigating causal relationships between arousal mechanisms and performance in a variety of perceptual tasks. Together, these approaches provide novel testable hypotheses for explaining variability in classically challenging tasks, including non-native speech sound learning.