
Article Information

  • Title: An In-Network Parameter Aggregation using DPDK for Multi-GPU Deep Learning
  • Authors: Masaki Furukawa; Tomoya Itsubo; Hiroki Matsutani
  • Journal: International Journal of Networking and Computing
  • Print ISSN: 2185-2847
  • Year: 2021
  • Volume: 11
  • Issue: 2
  • Pages: 516-532
  • Language: English
  • Publisher: International Journal of Networking and Computing
  • Abstract: In distributed training of deep neural networks across remote GPU nodes, gradient aggregation requires iterative communication between the nodes, and this communication latency limits the benefit of distributed training with faster GPUs. In this paper, we therefore propose offloading the gradient aggregation to a DPDK (Data Plane Development Kit) based network switch placed between a host machine and the remote GPUs. In this approach, the aggregation is completed inside the network using the switch's spare computation resources and is efficiently overlapped with training, without increasing the workload on the remote nodes. The proposed DPDK-based switch supports reliable communication protocols for exchanging gradient data and can handle part of MPI-over-TCP communication. We evaluate the proposed switch in three settings: standard IP communication over 40GbE, a PCI Express (PCIe) over 40Gbit Ethernet (40GbE) product, and MPI communication over 10GbE. With standard IP communication, the in-network aggregation is 2.2-2.5x faster than aggregation executed by the host machine. With the PCIe over 40GbE product, the proposed switch outperforms host-side aggregation by 1.16x. With MPI communication on a Jetson Xavier cluster, the proposed switch provides up to 5.5-5.8x faster reduction operations than the conventional method.
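To make the in-network aggregation idea concrete, below is a minimal C sketch of the reduction logic such a switch would run, not the authors' implementation. In a real DPDK application the packets would be received in batches via rte_eth_rx_burst and their payloads accessed with rte_pktmbuf_mtod; this sketch replaces that I/O path with a plain function call, and all names, packet layout, and sizes (grad_packet, NUM_WORKERS, CHUNK_FLOATS) are illustrative assumptions.

/* Hypothetical sketch: the switch keeps one accumulation buffer per
 * gradient chunk, sums each worker's payload into it, and reports the
 * result once every worker has contributed. Not the paper's code. */
#include <stdio.h>
#include <string.h>

#define NUM_WORKERS  4     /* remote GPU nodes feeding gradients (assumed) */
#define CHUNK_FLOATS 8     /* floats carried per gradient packet (assumed) */

struct grad_packet {
    int   chunk_id;                /* which slice of the gradient vector */
    float payload[CHUNK_FLOATS];   /* partial gradients from one worker */
};

static float acc[CHUNK_FLOATS];   /* accumulation buffer for one chunk */
static int   arrived;             /* workers seen for the current chunk */

/* Called for every received gradient packet; returns 1 when the
 * aggregated chunk is complete and ready to send back to the workers. */
static int aggregate(const struct grad_packet *pkt)
{
    if (arrived == 0)
        memset(acc, 0, sizeof(acc));         /* start a fresh reduction */
    for (int i = 0; i < CHUNK_FLOATS; i++)
        acc[i] += pkt->payload[i];           /* element-wise sum */
    if (++arrived == NUM_WORKERS) {
        arrived = 0;
        return 1;   /* reduction complete: forward 'acc' downstream */
    }
    return 0;       /* still waiting for the remaining workers */
}

int main(void)
{
    /* Simulate one round: each worker sends the same chunk id. */
    for (int w = 0; w < NUM_WORKERS; w++) {
        struct grad_packet pkt = { .chunk_id = 0 };
        for (int i = 0; i < CHUNK_FLOATS; i++)
            pkt.payload[i] = 0.1f * (float)(w + 1);
        if (aggregate(&pkt))
            printf("chunk %d reduced: acc[0] = %f\n", pkt.chunk_id, acc[0]);
    }
    return 0;
}

Because the sum is formed as packets arrive, the reduction overlaps with communication instead of waiting for all gradients to reach a host, which is the source of the speedups the abstract reports.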