
Article Information

  • Title: State-space segmentation for faster training reinforcement learning
  • Author: Jongrae Kim
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Publication year: 2022
  • Volume: 55
  • Issue: 25
  • Pages: 235-240
  • DOI: 10.1016/j.ifacol.2022.09.352
  • Language: English
  • Publisher: Elsevier
  • Abstract: Nonlinear control problems have been a central subject in control engineering, both theoretically and in applications. Reinforcement learning shows promising results for solving highly nonlinear control problems. Among its many variants, Deep Deterministic Policy Gradient (DDPG) handles continuous control signals, which makes it an ideal candidate for solving nonlinear control problems. Training, however, frequently requires a large number of computations. To improve the convergence of DDPG, we present a state-space segmentation method that divides the state space to expand the target region defined by the best reward. An inverted pendulum control example demonstrates the performance of the proposed segmentation method.
  • Keywords: reinforcement learning; learning convergence; reward; linear control
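The abstract's central idea — expanding the target region that earns the best reward so that a DDPG agent converges faster — can be sketched as a reward function over a segmented state space. The paper's exact segmentation scheme is not given in this record, so the following is a minimal illustrative sketch for an inverted pendulum; the quadratic penalty, the `segment_width` schedule, and the function names are all assumptions, not the authors' implementation.

```python
import math

def segmented_reward(theta, theta_dot, segment_width):
    """Reward over a segmented pendulum state space (illustrative sketch).

    theta         -- pendulum angle from upright (rad)
    theta_dot     -- angular velocity (rad/s)
    segment_width -- half-width of the current target segment (rad)

    States inside the target segment receive the best reward (0);
    states outside receive an assumed quadratic penalty.
    """
    if abs(theta) < segment_width:
        return 0.0  # best reward inside the expanded target segment
    # Hypothetical shaping penalty outside the segment.
    return -(theta ** 2 + 0.1 * theta_dot ** 2)

# Widening the segment across training stages expands the target space:
# a state 0.15 rad from upright is penalized early on but earns the best
# reward once the segment grows past it.
widths = [0.05, 0.10, 0.20]  # hypothetical segmentation schedule (rad)
rewards = [segmented_reward(0.15, 0.0, w) for w in widths]
```

A larger best-reward region gives the critic a denser success signal early in training, which is the convergence benefit the abstract attributes to segmentation.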