
Article Information

  • Title: Reinforcement Learning of the Prediction Horizon in Model Predictive Control
  • Authors: Eivind Bøhn; Sebastien Gros; Signe Moe
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2021
  • Volume: 54
  • Issue: 6
  • Pages: 314-320
  • DOI: 10.1016/j.ifacol.2021.08.563
  • Language: English
  • Publisher: Elsevier
  • Abstract: Model predictive control (MPC) is a powerful trajectory optimization control technique capable of controlling complex nonlinear systems while respecting system constraints and ensuring safe operation. The MPC's capabilities come at the cost of a high online computational complexity, the requirement of an accurate model of the system dynamics, and the necessity of tuning its parameters to the specific control application. The main tunable parameter affecting the computational complexity is the prediction horizon length, controlling how far into the future the MPC predicts the system response and thus evaluates the optimality of its computed trajectory. A longer horizon generally increases the control performance, but requires an increasingly powerful computing platform, excluding certain control applications. The performance sensitivity to the prediction horizon length varies over the state space, and this motivated adaptive horizon model predictive control (AHMPC), which adapts the prediction horizon according to some criteria. In this paper we propose to learn the optimal prediction horizon as a function of the state using reinforcement learning (RL). We show how the RL learning problem can be formulated and test our method on two control tasks, showing clear improvements over the fixed horizon MPC scheme, while requiring only minutes of learning.
  • Keywords: Adaptive horizon model predictive control; Reinforcement learning control
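
To illustrate the idea described in the abstract, the following is a minimal sketch, not from the paper itself: a receding-horizon MPC loop on a toy scalar system where the prediction horizon is chosen per state by a policy function. The dynamics, cost weights, control set, and the hand-made `horizon_policy` rule are all illustrative assumptions; in the paper that policy is learned with RL rather than hand-coded.

```python
import itertools

def rollout_cost(x, actions, a=1.0, b=1.0, q=1.0, r=0.1):
    # Simulate x_{k+1} = a*x_k + b*u_k and accumulate a quadratic
    # stage cost q*x^2 + r*u^2, plus a terminal cost q*x^2.
    cost = 0.0
    for u in actions:
        cost += q * x * x + r * u * u
        x = a * x + b * u
    return cost + q * x * x

def mpc_action(x, horizon, controls=(-1.0, 0.0, 1.0)):
    # Exhaustive search over all control sequences of the given
    # horizon length; apply only the first action (receding horizon).
    best = min(itertools.product(controls, repeat=horizon),
               key=lambda seq: rollout_cost(x, seq))
    return best[0]

def horizon_policy(x):
    # Hypothetical stand-in for the learned RL policy pi(x) -> N:
    # here, a hand-made rule using a longer horizon far from the origin.
    return 4 if abs(x) > 2.0 else 2

# Closed-loop simulation from x0 = 5: the state is driven to the origin.
x = 5.0
for _ in range(10):
    n = horizon_policy(x)
    u = mpc_action(x, n)
    x = x + u  # plant step with a = 1, b = 1
print(round(x, 2))  # → 0.0
```

The exhaustive search over control sequences is only viable for tiny discrete control sets; a real MPC would solve a constrained optimization problem at each step, and the computational saving from a shorter horizon is exactly what motivates adapting it per state.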