
Article Information

  • Title: Proxy Functions for Approximate Reinforcement Learning
  • Authors: Eduard Alibekov ; Jiří Kubalík ; Robert Babuška
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2019
  • Volume: 52
  • Issue: 11
  • Pages: 224-229
  • DOI: 10.1016/j.ifacol.2019.09.145
  • Language: English
  • Publisher: Elsevier
  • Abstract: Approximate Reinforcement Learning (RL) is a method for solving sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces in which the control policy is derived from an approximate value function (V-function). The standard approach to deriving a policy from the V-function is analogous to hill climbing: at each state, the RL agent chooses the control input that maximizes the right-hand side of the Bellman equation (see the sketch after this list). Although theoretically optimal, the actual control performance of this method is heavily influenced by the local smoothness of the V-function; a lack of smoothness results in undesired closed-loop behavior such as input chattering or limit cycles. To circumvent these problems, this paper provides a method based on Symbolic Regression to generate a locally smooth proxy to the V-function. The proposed method has been evaluated on two nonlinear control benchmarks: pendulum swing-up and magnetic manipulation. The new method has been compared with the standard policy-derivation technique using the approximate V-function, and the results show that the proposed approach outperforms the standard one with respect to the cumulative return.
  • Keywords: reinforcement learning; continuous state space; optimal control; policy derivation; V-function
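
As an illustration of the standard policy-derivation step described in the abstract (not of the authors' symbolic-regression proxy, whose details are not given here), the following minimal Python sketch picks, at each state, the control input that maximizes the right-hand side of the Bellman equation over a discretized set of inputs. All identifiers (V_hat, f, rho, candidate_inputs, gamma) are hypothetical placeholders and are not taken from the paper.

```python
def greedy_action(x, V_hat, f, rho, candidate_inputs, gamma=0.99):
    """Return the control input maximizing the right-hand side of the
    Bellman equation:

        u* = argmax_u [ rho(x, u) + gamma * V_hat(f(x, u)) ]

    x                -- current state
    V_hat            -- approximate V-function (assumed available)
    f                -- state-transition model, x_next = f(x, u) (assumed)
    rho              -- reward function rho(x, u) (assumed)
    candidate_inputs -- discretized set of control inputs to search over
    gamma            -- discount factor
    """
    best_u, best_value = None, float("-inf")
    for u in candidate_inputs:
        x_next = f(x, u)                       # predicted next state
        value = rho(x, u) + gamma * V_hat(x_next)
        if value > best_value:                 # keep the best input so far
            best_u, best_value = u, value
    return best_u
```

For a benchmark such as the pendulum swing-up, candidate_inputs could simply be an evenly spaced grid of torques between the actuation limits; the lack of smoothness in V_hat that the paper targets shows up as the argmax jumping between neighboring grid points, producing the chattering mentioned in the abstract.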