
Article Information

  • Title: Multi-Agent Distributed Deep Deterministic Policy Gradient for Partially Observable Tracking
  • Authors: Dongyu Fan; Haikuo Shen; Lijing Dong
  • Journal: Actuators
  • Electronic ISSN: 2076-0825
  • Year: 2021
  • Volume: 10
  • Issue: 10
  • Pages: 268
  • DOI: 10.3390/act10100268
  • Language: English
  • Publisher: MDPI
  • Abstract: In many existing multi-agent reinforcement learning tasks, each agent observes all the other agents from its own perspective. In addition, the training process is centralized, i.e., the critic of each agent can access the policies of all the agents. This scheme has limitations, since in practical applications each agent can only obtain information from its neighboring agents due to the limited communication range. Therefore, in this paper, a multi-agent distributed deep deterministic policy gradient (MAD3PG) approach is presented with decentralized actors and distributed critics to realize multi-agent distributed tracking. The distinguishing feature of the proposed framework is that it adopts multi-agent distributed training with decentralized execution, where each critic takes only the agent's own and its neighbor agents' policies into account. Experiments were conducted on distributed tracking tasks based on multi-agent particle environments, in which N (N = 3, N = 5) agents track a target agent under partial observation. The results show that the proposed method achieves a higher reward with a shorter training time than other methods, including MADDPG, DDPG, PPO, and DQN. The proposed method thus leads to more efficient and effective multi-agent tracking.
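The distributed-critic idea summarized in the abstract, where each critic conditions only on its own agent and its communication-range neighbors rather than on all agents as in MADDPG, can be illustrated with a minimal sketch. This is not the authors' implementation; the neighbor selection rule, the function names (e.g. `critic_input`, `comm_range`), and the observation/action dimensions below are illustrative assumptions.

```python
# Minimal sketch of the neighbor-only critic input used in a distributed-critic
# scheme such as MAD3PG (illustrative, not the paper's code). A centralized
# MADDPG critic would instead concatenate every agent's observation and action.
import numpy as np

def neighbours(positions: np.ndarray, i: int, comm_range: float) -> list:
    """Indices of agents within communication range of agent i (excluding i)."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and dists[j] <= comm_range]

def critic_input(observations, actions, positions, i, comm_range):
    """Concatenate agent i's observation/action with those of its neighbors only."""
    idx = [i] + neighbours(positions, i, comm_range)
    obs = np.concatenate([observations[j] for j in idx])
    act = np.concatenate([actions[j] for j in idx])
    return np.concatenate([obs, act])

if __name__ == "__main__":
    N = 3                                                   # matches the N = 3 tracking setup
    rng = np.random.default_rng(0)
    positions = rng.uniform(-1.0, 1.0, size=(N, 2))          # 2-D agent positions
    observations = [rng.normal(size=8) for _ in range(N)]    # per-agent partial observations
    actions = [rng.normal(size=2) for _ in range(N)]          # continuous 2-D actions
    x = critic_input(observations, actions, positions, i=0, comm_range=1.5)
    print(x.shape)                                            # input to agent 0's local critic
```

In practice the critic input size depends on how many neighbors are in range, so an actual implementation would pad or cap the neighbor set to keep a fixed network input dimension; the sketch omits that detail.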