
Article Information

  • Title: Deep Decentralized Reinforcement Learning for Cooperative Control
  • Authors: Florian Köpf; Samuel Tesfazgi; Michael Flad
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2020
  • Volume: 53
  • Issue: 2
  • Pages: 1555-1562
  • DOI: 10.1016/j.ifacol.2020.12.2181
  • Language: English
  • Publisher: Elsevier
  • Abstract: In order to collaborate efficiently with unknown partners in cooperative control settings, adaptation to the partners based on online experience is required. The rather general and widely applicable control setting, where each cooperation partner might strive for individual goals while the control laws and objectives of the partners are unknown, entails various challenges such as the non-stationarity of the environment, the multi-agent credit assignment problem, the alter-exploration problem and the coordination problem. We propose new, modular deep decentralized Multi-Agent Reinforcement Learning mechanisms to account for these challenges. To this end, our method uses a time-dependent prioritization of samples, incorporates a model of the system dynamics and utilizes variable, accountability-driven learning rates and simulated, artificial experiences in order to guide the learning process. The effectiveness of our method is demonstrated by means of a simulated, nonlinear cooperative control task. (See the illustrative sketch after this list.)
  • Keywords: Reinforcement Learning; Deep Learning; Learning Control; Shared Control; Decentralized Control; Machine Learning; Non-stationary Systems; Nonlinear Control
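
The abstract mentions a time-dependent prioritization of replay samples to cope with the non-stationarity introduced by co-adapting partners. The sketch below illustrates one plausible reading of that idea, recency-weighted experience replay, under the assumption that older transitions (collected against outdated partner behaviour) should be replayed less often. It is not the authors' implementation; the class name RecencyPrioritizedBuffer and the time constant tau are illustrative assumptions only.

```python
import math
import random
from collections import deque


class RecencyPrioritizedBuffer:
    """Replay buffer that favours recent transitions (illustrative sketch).

    In a non-stationary multi-agent setting, old transitions were generated
    against partner policies that have since changed, so newer samples are
    drawn with higher probability. The decay constant `tau` is hypothetical.
    """

    def __init__(self, capacity=100_000, tau=5_000.0):
        self.buffer = deque(maxlen=capacity)
        self.tau = tau      # recency time constant (assumed tuning parameter)
        self.step = 0       # global environment step counter

    def add(self, transition):
        # Store the transition together with the step at which it was collected.
        self.buffer.append((self.step, transition))
        self.step += 1

    def sample(self, batch_size):
        # Weight each stored transition by exp(-age / tau), so experience
        # gathered against more recent partner behaviour is replayed more often.
        weights = [math.exp(-(self.step - t) / self.tau) for t, _ in self.buffer]
        picks = random.choices(list(self.buffer), weights=weights, k=batch_size)
        return [transition for _, transition in picks]


if __name__ == "__main__":
    buf = RecencyPrioritizedBuffer(capacity=1_000, tau=200.0)
    for i in range(1_000):
        buf.add({"obs": i, "action": 0, "reward": 0.0, "next_obs": i + 1})
    batch = buf.sample(32)
    # The mean observation index skews toward recently added transitions.
    print(sum(tr["obs"] for tr in batch) / len(batch))
```

In practice such a buffer would be combined with the other mechanisms named in the abstract (a learned dynamics model, accountability-driven learning rates and simulated experiences), whose details are not specified here.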