
Article Information

  • Title: Off-Policy Q-Learning for Anti-Interference Control of Multi-Player Systems
  • Authors: Jinna Li; Zhenfei Xiao; Tianyou Chai
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Publication year: 2020
  • Volume: 53
  • Issue: 2
  • Pages: 9189-9194
  • DOI: 10.1016/j.ifacol.2020.12.2180
  • Language: English
  • Publisher: Elsevier
  • Abstract: This paper develops a novel off-policy game Q-learning algorithm to solve the anti-interference control problem for discrete-time linear multi-player systems using only data, without requiring the system matrices to be known. The primary contribution is that the Q-learning strategy in the proposed algorithm is implemented via off-policy policy iteration rather than on-policy learning, owing to the well-known advantages of off-policy Q-learning over its on-policy counterpart. All players cooperate to minimize their common performance index while counteracting the disturbance, which tries to maximize that index; they eventually reach the Nash equilibrium of the game, at which the disturbance attenuation condition is satisfied. To find the Nash equilibrium solution, the anti-interference control problem is first transformed into an optimal control problem. An off-policy Q-learning algorithm is then proposed within the framework of adaptive dynamic programming (ADP) and game architecture, such that the control policies of all players can be learned using only measured data. Comparative simulation results verify the effectiveness of the proposed method. (An illustrative sketch of this style of algorithm follows the keyword list below.)
  • Keywords: H∞ control; off-policy Q-learning; game theory; Nash equilibrium
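
The abstract describes the algorithm only at a high level, so the following minimal Python sketch is offered purely as an illustration of how data-driven, off-policy Q-learning for a discrete-time zero-sum linear-quadratic game with disturbance attenuation typically proceeds. It is not the authors' algorithm: the system matrices, dimensions, attenuation level gamma, sample sizes, and helper names (svec, unsvec) are all assumptions introduced here; the simulated matrices are used only to generate data, never by the learner.

```python
import numpy as np

# Off-policy Q-learning sketch for a discrete-time zero-sum LQ game (assumed setup).
np.random.seed(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # assumed stable state matrix (simulation only)
B = np.array([[0.0], [1.0]])        # assumed control input channel
D = np.array([[0.1], [0.0]])        # assumed disturbance channel
Qx, Ru, gamma = np.eye(2), np.eye(1), 5.0   # assumed weights and attenuation level
n, m, q = 2, 1, 1
dim = n + m + q

def svec(z):
    """Quadratic feature of z (upper triangle of z z', off-diagonals doubled)."""
    zz = np.outer(z, z)
    i, j = np.triu_indices(len(z))
    return np.where(i == j, 1.0, 2.0) * zz[i, j]

def unsvec(p):
    """Rebuild the symmetric Q-function kernel H from its upper-triangular parameters."""
    H = np.zeros((dim, dim))
    H[np.triu_indices(dim)] = p
    return H + H.T - np.diag(np.diag(H))

# 1) Collect data once under exploratory behavior policies (the off-policy setting).
N = 400
X, U, W, Xn = [], [], [], []
x = np.random.randn(n)
for _ in range(N):
    u = 0.3 * np.random.randn(m)    # behavior control input (exploration only)
    w = 0.3 * np.random.randn(q)    # behavior disturbance input
    x_next = A @ x + B @ u + D @ w
    X.append(x); U.append(u); W.append(w); Xn.append(x_next)
    x = x_next

# 2) Policy iteration on the Q-function kernel H using the fixed data set.
K = np.zeros((m, n))                # target control gain:      u = -K x
L = np.zeros((q, n))                # target disturbance gain:  w = -L x
for _ in range(30):
    Phi, r = [], []
    for x, u, w, xn in zip(X, U, W, Xn):
        z = np.concatenate([x, u, w])
        zn = np.concatenate([xn, -K @ xn, -L @ xn])   # next action from the *target* policy
        Phi.append(svec(z) - svec(zn))
        r.append(x @ Qx @ x + u @ Ru @ u - gamma**2 * (w @ w))
    p, *_ = np.linalg.lstsq(np.array(Phi), np.array(r), rcond=None)
    H = unsvec(p)

    # Policy improvement: joint stationary point of Q(x, u, w) in (u, w).
    KL = np.linalg.solve(H[n:, n:], H[n:, :n])
    K_new, L_new = KL[:m], KL[m:]
    if np.linalg.norm(K_new - K) + np.linalg.norm(L_new - L) < 1e-8:
        K, L = K_new, L_new
        break
    K, L = K_new, L_new

print("learned control gain K:\n", K)
print("learned worst-case disturbance gain L:\n", L)
```

The key off-policy feature is that the stored transitions come from arbitrary exploratory inputs, while the Bellman residual always evaluates the next step under the current target gains (K, L); the learned gains should approach the saddle-point (Nash) solution of the game under standard excitation and attenuation-level assumptions.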