
Article Information

  • Title: Adversarial vulnerabilities of human decision-making
  • Authors: Amir Dezfouli; Richard Nock; Peter Dayan
  • Journal: Proceedings of the National Academy of Sciences
  • Print ISSN: 0027-8424
  • Electronic ISSN: 1091-6490
  • Year: 2020
  • Volume: 117
  • Issue: 46
  • Pages: 29221-29228
  • DOI: 10.1073/pnas.2016921117
  • Publisher: The National Academy of Sciences of the United States of America
  • Abstract: Adversarial examples are carefully crafted input patterns that are surprisingly poorly classified by artificial and/or natural neural networks. Here we examine adversarial vulnerabilities in the processes responsible for learning and choice in humans. Building upon recent recurrent neural network models of choice processes, we propose a general framework for generating adversarial opponents that can shape the choices of individuals in particular decision-making tasks toward the behavioral patterns desired by the adversary. We show the efficacy of the framework through three experiments involving action selection, response inhibition, and social decision-making. We further investigate the strategy used by the adversary in order to gain insights into the vulnerabilities of human choice. The framework may find applications across behavioral sciences in helping detect and avoid flawed choice.
  • Keywords: decision-making; recurrent neural networks; reinforcement learning
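
The abstract only summarizes the framework. As a purely illustrative sketch, and not the authors' method, the Python below pairs a hard-coded, budget-limited adversary with a simple Q-learning stand-in for the paper's recurrent-neural-network model of human choice; every name, parameter, and the learner model itself is an assumption made for illustration. It shows the basic idea of adversary-controlled feedback steering a learner toward a target action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learner: a two-armed bandit Q-learner standing in for the paper's
# RNN model of human choice (a deliberate simplification for illustration).
ALPHA, BETA = 0.3, 3.0      # learning rate, softmax inverse temperature
TARGET = 0                  # action the adversary wants the learner to prefer
N_TRIALS, BUDGET = 100, 25  # number of trials and total rewards the adversary may hand out

q = np.zeros(2)             # learner's value estimates for the two actions
rewards_left = BUDGET
target_choices = 0

for t in range(N_TRIALS):
    # Learner picks an action by softmax over its current values.
    p = np.exp(BETA * q) / np.exp(BETA * q).sum()
    a = rng.choice(2, p=p)

    # Greedy adversary: spend reward only on the target action, never on the other.
    r = 1.0 if (a == TARGET and rewards_left > 0) else 0.0
    rewards_left -= int(r)

    # Learner updates its value estimate from the adversary-controlled feedback.
    q[a] += ALPHA * (r - q[a])
    target_choices += int(a == TARGET)

print(f"target action chosen on {target_choices}/{N_TRIALS} trials")
```

In the paper's framework the adversarial opponent is generated by building on recurrent neural network models fitted to choice behavior; the fixed greedy rule above merely makes the shaping effect visible in a few lines.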