
Article information

  • Title: A reinforcement learning method with closed-loop stability guarantee
  • Authors: Pavel Osinenko; Lukas Beckenbach; Thomas Göhrt
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2020
  • Volume: 53
  • Issue: 2
  • Pages: 8043-8048
  • DOI:10.1016/j.ifacol.2020.12.2237
  • Language: English
  • Publisher: Elsevier
  • Abstract: Reinforcement learning (RL) in the context of control systems offers wide possibilities for controller adaptation. Given an infinite-horizon cost function, the so-called critic of RL approximates it with a neural net and sends this information to the controller (called the “actor”). However, the issue of closed-loop stability under an RL method is still not fully addressed. Since the critic delivers merely an approximation to the value function of the corresponding infinite-horizon problem, no guarantee can be given in general as to whether the actor’s actions stabilize the system. Different approaches to this issue exist. The current work offers a particular one which, starting with a (not necessarily smooth) control Lyapunov function (CLF), derives an online RL scheme in such a way that a practical semi-global stability property of the closed loop can be established. The approach logically continues the authors’ work on parameterized controllers and Lyapunov-like constraints for RL, whereas the CLF now appears merely in one of the constraints of the control scheme. The analysis of the closed-loop behavior is done in a sample-and-hold (SH) manner, thus offering a certain insight into the digital realization. The case study with a non-holonomic integrator shows the capabilities of the derived method to optimize the given cost function compared to a nominal stabilizing controller. (An illustrative sketch of such a scheme follows this record.)
  • Keywords: Reinforcement learning control; Stability of nonlinear systems; Lyapunov methods
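
The abstract describes an actor-critic scheme in which the actor's action must respect a CLF-decay constraint and the closed loop is analyzed in a sample-and-hold manner. The following minimal Python sketch illustrates that idea; it is not the authors' implementation, and the plant, quadratic running cost, smooth CLF candidate, linear-in-weights critic, and the use of scipy's constrained minimizer are all assumptions made purely for illustration.

# Hypothetical sketch (not the authors' code): an actor-critic step in which the
# actor's action is constrained to decrease a control Lyapunov function (CLF),
# evaluated in a sample-and-hold manner. Plant, cost, CLF and critic are
# illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

dt = 0.1  # sample-and-hold period (assumption)

def dynamics(x, u):
    # simple two-state plant used purely for illustration
    return x + dt * u

def running_cost(x, u):
    return x @ x + 0.1 * (u @ u)

def clf(x):
    # a (here smooth) CLF candidate
    return 0.5 * (x @ x)

def features(x):
    # quadratic monomials of the state for a linear-in-weights critic
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def critic(w, x):
    return w @ features(x)

def actor(w, x, decay=1e-3):
    # minimize running cost plus critic value at the next state, subject to a
    # CLF-decay constraint over one sample-and-hold interval
    obj = lambda u: running_cost(x, u) + critic(w, dynamics(x, u))
    con = {"type": "ineq",
           "fun": lambda u: clf(x) - clf(dynamics(x, u)) - decay * dt}
    return minimize(obj, np.zeros(2), constraints=[con]).x

def critic_update(w, x, u, x_next, lr=1e-2):
    # semi-gradient step reducing the temporal-difference (Bellman) residual
    td = running_cost(x, u) + critic(w, x_next) - critic(w, x)
    return w + lr * td * features(x)

x = np.array([1.0, -0.5])
w = np.zeros(3)
for _ in range(50):
    u = actor(w, x)
    x_next = dynamics(x, u)
    w = critic_update(w, x, u, x_next)
    x = x_next
print("final state:", x, "critic weights:", w)

In this sketch the CLF enters only as an inequality constraint on the actor's optimization, mirroring the abstract's remark that the CLF appears merely in one of the constraints of the control scheme; the critic update is a standard temporal-difference step and carries no stability guarantee by itself.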