
Article Information

  • Title: Accelerating Training of Deep Neural Networks on GPU using CUDA
  • Authors: D.T.V. Dharmajee Rao; K.V. Ramana
  • Journal: International Journal of Intelligent Systems and Applications
  • Print ISSN: 2074-904X
  • Electronic ISSN: 2074-9058
  • Year: 2019
  • Volume: 11
  • Issue: 5
  • Pages: 18-26
  • DOI: 10.5815/ijisa.2019.05.03
  • Publisher: MECS Publisher
  • Abstract: Developing fast and efficient training algorithms for Deep Neural Networks has attracted considerable interest in recent years, because the biggest drawback of Deep Neural Networks is their enormous computational cost and the long time required to train their parameters. This has motivated researchers to exploit recent advances in hardware architectures and parallel programming models to accelerate the training of Deep Neural Networks. We revisit the concepts and mechanisms of typical Deep Neural Network training algorithms, such as the Backpropagation Algorithm and the Boltzmann Machine Algorithm, and observe that matrix multiplication constitutes the major portion of the workload in the training process, because it is performed a huge number of times during training. With the advent of many-core GPU technologies, matrix multiplication can be carried out very efficiently in parallel, greatly reducing the time needed to train a Deep Neural Network compared with only a few years ago. CUDA is one of the high-performance parallel programming models for exploiting the capabilities of modern many-core GPU systems. In this paper, we propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm to use CUDA parallel matrix multiplication and test them on a many-core GPU system. We find that the proposed strategies train Deep Neural Networks considerably faster than the classical strategies.
  • Keywords: Deep Neural Networks; Matrix multiplication; CUDA; Many-core GPU systems
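The abstract's central observation is that both the forward and backward passes of backpropagation reduce to matrix products, which is why offloading matrix multiplication to a many-core GPU accelerates the whole training loop. The following minimal sketch (pure Python, no GPU; all names are illustrative and not taken from the paper) shows one gradient step of a single linear layer, making explicit that the hot spot is the same `matmul` routine in both passes. In a CUDA implementation, each output element of C = A x B would be computed by an independent GPU thread.

```python
import random

def matmul(A, B):
    """Naive matrix product C = A x B; O(n^3) work, the training hot spot."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# One gradient step of a linear layer y = x W under squared error.
# Forward pass:  y  = x @ W            -- a matrix multiplication
# Backward pass: dW = x^T @ (y - t)    -- another matrix multiplication
random.seed(0)
x = [[random.random() for _ in range(3)] for _ in range(4)]  # batch of 4 inputs
W = [[0.1] * 2 for _ in range(3)]                            # 3x2 weight matrix
t = [[1.0, 0.0] for _ in range(4)]                           # targets

y = matmul(x, W)                                             # forward pass
err = [[y[i][j] - t[i][j] for j in range(2)] for i in range(4)]
dW = matmul(transpose(x), err)                               # backward pass

lr = 0.1
W = [[W[i][j] - lr * dW[i][j] for j in range(2)] for i in range(3)]
```

On a GPU, the two `matmul` calls would be replaced by a CUDA kernel launch (or a cuBLAS call), leaving the rest of the training loop unchanged, which is the modification strategy the paper describes.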
Copyright © National Center for Philosophy and Social Sciences Documentation