
Article Information

  • Title: Addressing infinite-horizon optimization in MPC via Q-learning
  • Authors: Lukas Beckenbach; Pavel Osinenko; Stefan Streif
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2018
  • Volume: 51
  • Issue: 20
  • Pages: 60-65
  • DOI: 10.1016/j.ifacol.2018.10.175
  • Language: English
  • Publisher: Elsevier
  • Abstract: Model predictive control (MPC) is the standard approach to infinite-horizon optimal control; it usually optimizes a finite initial fragment of the cost function so as to make the problem computationally tractable. Globally optimal controllers are usually found by Dynamic Programming (DP). The computations involved in DP are notoriously hard to perform, especially in online control. Therefore, different approximation schemes of DP, the so-called "critics", were suggested for infinite-horizon cost functions. This work proposes to incorporate such a critic into dual-mode MPC as a particular means of addressing infinite-horizon optimal control. The proposed critic is based on Q-learning and is used for online approximation of the infinite-horizon cost. Stability of the new approach is analyzed, and certain sufficient stabilizing constraints on the critic are derived. A case study demonstrates the applicability.
  • Keywords: Nonlinear MPC; infinite-horizon optimization; reinforcement learning
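
The abstract's central idea, approximating the infinite-horizon tail cost with a Q-learning "critic" whose estimate can then serve as a terminal cost inside a short-horizon MPC, can be sketched roughly as follows. This is a minimal illustration only: the linear dynamics, quadratic features, and all function names below are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Illustrative linear system and quadratic stage cost (not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

def stage_cost(x, u):
    """Quadratic running cost l(x, u) = x'Qx + u'Ru."""
    return float(x @ Q @ x + u @ R @ u)

def features(x, u):
    """Quadratic features phi(x, u); the critic is Q_hat(x, u) = w @ phi(x, u)."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(z.size)]  # upper-triangular monomials

def critic_update(w, x, u, x_next, u_next, lr=0.05):
    """One temporal-difference update of the critic weights w.

    Target: l(x, u) + Q_hat(x_next, u_next); the weights move along the
    TD error times the feature vector (a Q-learning-style critic step).
    """
    target = stage_cost(x, u) + w @ features(x_next, u_next)
    td_err = target - w @ features(x, u)
    return w + lr * td_err * features(x, u)

# One illustrative update starting from zero weights.
w = np.zeros(6)
x, u = np.array([1.0, 0.0]), np.array([0.5])
x_next = A @ x + B @ u
w = critic_update(w, x, u, x_next, u)
```

In a dual-mode MPC setting along these lines, the learned `Q_hat` would stand in for the infinite-horizon tail cost at the end of the finite prediction horizon; the paper's contribution is deriving stabilizing constraints on such a critic, which this sketch does not implement.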