
Article Information

  • Title: Optimization of the Model Predictive Control Update Interval Using Reinforcement Learning
  • Authors: Eivind Bøhn; Sebastien Gros; Signe Moe
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2021
  • Volume: 54
  • Issue: 14
  • Pages: 257-262
  • DOI: 10.1016/j.ifacol.2021.10.362
  • Language: English
  • Publisher: Elsevier
  • Abstract: In control applications there is often a compromise that needs to be made between the complexity and performance of the controller and the computational resources that are available. For instance, the typical hardware platform in embedded control applications is a microcontroller with limited memory and processing power, and for battery-powered applications the control system can account for a significant portion of the energy consumption. We propose a controller architecture in which the computational cost is explicitly optimized along with the control objective. This is achieved by a three-part architecture where a high-level, computationally expensive controller generates plans, which a computationally simpler controller executes by compensating for prediction errors, while a recomputation policy decides when the plan should be recomputed. In this paper, we employ model predictive control (MPC) as the high-level plan-generating controller, a linear state feedback controller as the simpler compensating controller, and reinforcement learning (RL) to learn the recomputation policy. Simulation results for the classic control task of balancing an inverted pendulum show that not only is the total processor time reduced by 60%; the RL policy is even able to uncover a non-trivial synergistic relationship between the MPC and the state feedback controller, improving the control performance by 20% over the MPC alone.
  • Keywords: model predictive control; reinforcement learning; event-driven control
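The three-part architecture from the abstract can be sketched in a few dozen lines. The sketch below is illustrative only, not the paper's implementation: `make_plan()` is a simple model rollout standing in for the MPC, the feedback gains are hand-picked rather than designed or learned, and `should_recompute()` is a threshold rule standing in for the learned RL recomputation policy. The plant/model mismatch (`G_TRUE` vs `G_MODEL`) is a hypothetical choice that gives the compensator and recomputation policy something to do.

```python
# Minimal sketch of the three-part architecture on a linearized inverted
# pendulum with state x = (angle, angular velocity).
DT = 0.02          # Euler integration step [s]
G_MODEL = 9.81     # g/l assumed by the planner
G_TRUE = 10.5      # g/l of the actual plant (deliberate model mismatch)
HORIZON = 25       # plan length in steps

def step(x, u, gl):
    """One Euler step of the linearized pendulum: theta_dd = gl*theta + u."""
    theta, omega = x
    return (theta + DT * omega, omega + DT * (gl * theta + u))

def make_plan(x0):
    """High-level planner (MPC stand-in): roll out a stabilizing feedback
    law on the *model* to get nominal states and inputs over the horizon."""
    xs, us, x = [], [], x0
    for _ in range(HORIZON):
        u = -30.0 * x[0] - 8.0 * x[1]        # hypothetical planning gains
        xs.append(x)
        us.append(u)
        x = step(x, u, G_MODEL)
    return xs, us

def compensate(x, x_ref, u_ref):
    """Low-level linear state feedback correcting deviations from the plan."""
    return u_ref - 15.0 * (x[0] - x_ref[0]) - 5.0 * (x[1] - x_ref[1])

def should_recompute(x, x_ref, k, tol=0.05):
    """Recomputation policy (RL stand-in): replan when the plan is exhausted
    or the deviation from the predicted state grows too large."""
    return k >= HORIZON or abs(x[0] - x_ref[0]) + abs(x[1] - x_ref[1]) > tol

def run(x0, steps=300):
    x, k, n_replans = x0, HORIZON, 0
    plan_x = plan_u = None
    for _ in range(steps):
        if plan_x is None or should_recompute(x, plan_x[min(k, HORIZON - 1)], k):
            plan_x, plan_u = make_plan(x)    # the expensive call, used sparingly
            k, n_replans = 0, n_replans + 1
        x = step(x, compensate(x, plan_x[k], plan_u[k]), G_TRUE)
        k += 1
    return x, n_replans

final_x, n_replans = run((0.2, 0.0))
print(final_x, n_replans)
```

Note the division of labor the abstract describes: the expensive planner runs only when `should_recompute()` fires, while the cheap state feedback law runs every step; the paper replaces the fixed threshold with a policy learned by RL to trade processor time against control performance.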