Article Information

  • Title: Fast reinforcement learning with generalized policy updates
  • Authors: André Barreto; Shaobo Hou; Diana Borsa
  • Journal: Proceedings of the National Academy of Sciences
  • Print ISSN: 0027-8424
  • Electronic ISSN: 1091-6490
  • Year: 2020
  • Volume: 117
  • Issue: 48
  • Pages: 30079-30087
  • DOI: 10.1073/pnas.1907370117
  • Publisher: The National Academy of Sciences of the United States of America
  • Abstract: The combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision-making problems that are currently intractable. One obstacle to overcome is the amount of data needed by learning systems of this type. In this article, we propose to address this issue through a divide-and-conquer approach. We argue that complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel. By associating each task with a reward function, this problem decomposition can be seamlessly accommodated within the standard reinforcement-learning formalism. The specific way we do so is through a generalization of two fundamental operations in reinforcement learning: policy improvement and policy evaluation. The generalized versions of these operations allow one to leverage the solution of some tasks to speed up the solution of others. If the reward function of a task can be well approximated as a linear combination of the reward functions of tasks previously solved, we can reduce a reinforcement-learning problem to a simpler linear regression. When this is not the case, the agent can still exploit the task solutions by using them to interact with and learn about the environment. Both strategies considerably reduce the amount of data needed to solve a reinforcement-learning problem. (See the illustrative sketch after this list.)
  • Keywords: artificial intelligence; reinforcement learning; generalized policy improvement; generalized policy evaluation; successor features
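
The abstract describes two mechanisms: fitting a new task's reward as a linear combination of known feature functions (a least-squares regression), and then acting with generalized policy improvement (GPI) over the successor features of previously learned policies. The sketch below is a minimal illustration of those two steps under assumed inputs (the names `phi_samples`, `psi_sa`, `fit_task_weights`, and `gpi_action` are hypothetical and the arrays are random placeholders); it is not code from the paper.

```python
import numpy as np

def fit_task_weights(phi_samples, reward_samples):
    """Least-squares fit of w so that r(s, a) is approximately phi(s, a) . w."""
    w, *_ = np.linalg.lstsq(phi_samples, reward_samples, rcond=None)
    return w

def gpi_action(psi_sa, w):
    """Generalized policy improvement at one state.

    psi_sa: (n_policies, n_actions, d) successor features psi_i(s, a) of
            previously learned policies, evaluated at the current state.
    w:      (d,) task weights, so Q_i(s, a) = psi_i(s, a) . w.
    Returns the action maximizing max_i Q_i(s, a).
    """
    q = psi_sa @ w                      # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))

# Toy usage with random placeholders standing in for learned quantities.
rng = np.random.default_rng(0)
d, n_policies, n_actions = 4, 3, 5
phi_samples = rng.normal(size=(100, d))            # features phi(s, a) of visited pairs
reward_samples = phi_samples @ rng.normal(size=d)  # observed rewards of the new task
w = fit_task_weights(phi_samples, reward_samples)
psi_sa = rng.normal(size=(n_policies, n_actions, d))
print(gpi_action(psi_sa, w))
```

Here `gpi_action` takes the maximum over policies and actions that defines generalized policy improvement; in practice the successor features would come from learned function approximators rather than random arrays, and the regression would use features gathered while solving earlier tasks.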