
Article Information

  • Title: A Parallel Approach for Backpropagation Learning of Neural Networks
  • Authors: Crespo M.; Piccoli F.; Printista M.
  • Journal: Journal of Computer Science and Technology
  • Print ISSN: 1666-6046
  • Electronic ISSN: 1666-6038
  • Year: 1999
  • Volume: 1
  • Issue: 1
  • Publisher: Iberoamerican Science & Technology Education Consortium
  • Abstract: Neural nets learn by training, not by being programmed. Learning is the process of adjustment of the neural network to external stimuli. After learning, the network is expected to show recall and generalisation abilities. By recall we mean the capability to recognise inputs from the training set, that is to say, those patterns presented to the network during the learning process. By generalisation we mean the ability to produce reasonable outputs associated with new inputs from the same total pattern space. These properties are attained during the slow process of learning, and many approaches to speeding up training by means of parallelism have been devised. The backpropagation algorithm (BP) is one of the most popular learning algorithms, and many approaches to its parallel implementation have been studied [5][6][9][10][12][15]. To parallelise BP, either the network or the training pattern space is partitioned. In network partitioning, the nodes and weights of the neural network are distributed among diverse processors, so the computations due to node activations, node errors and weight changes are parallelised. In pattern partitioning, the whole neural net is replicated in different processors and the weight changes due to distinct training patterns are parallelised. This paper shows the design of two distributed supports for parallel learning of neural networks using a pattern partitioning approach. Results on speedup in learning and its impact on recall and generalisation are shown. Also, a useful application of neural nets as a decision-maker for incoming task allocation in a distributed system is discussed.
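The pattern-partitioning scheme the abstract describes can be sketched in a few lines: the net is replicated, each worker computes backpropagation gradients on its own slice of the training patterns, and the partial weight changes are summed into a single update. The sketch below is illustrative only; the network shape, learning rate, worker count, and the toy XOR task are assumptions, not details from the paper (the real system runs on distributed processors, whereas here the "workers" are just partitions processed in a loop).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in=2, n_hidden=4, n_out=1):
    # Illustrative one-hidden-layer sigmoid network (sizes are assumptions).
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "W2": rng.normal(scale=0.5, size=(n_hidden, n_out)),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients(net, X, y):
    """Backprop gradients of the squared error over one pattern partition."""
    h = sigmoid(X @ net["W1"])
    out = sigmoid(h @ net["W2"])
    d_out = (out - y) * out * (1 - out)          # output-layer error terms
    d_h = (d_out @ net["W2"].T) * h * (1 - h)    # errors backpropagated to hidden layer
    return {"W1": X.T @ d_h, "W2": h.T @ d_out}

def parallel_epoch(net, X, y, n_workers=2, lr=0.5):
    """One epoch of batch BP with the pattern space split across workers.

    Each 'worker' holds a replica of the net and computes gradients on its
    own partition of the patterns; the partial weight changes are summed
    and applied once. Because batch gradients are sums over patterns, this
    equals sequential batch backpropagation over the full training set.
    """
    parts = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    partial = [gradients(net, Xp, yp) for Xp, yp in parts]  # parallelisable step
    for key in net:
        net[key] -= lr * sum(g[key] for g in partial)

# Toy training set (XOR), standing in for the paper's pattern space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

net = init_net()
for _ in range(2000):
    parallel_epoch(net, X, y, n_workers=2)

preds = sigmoid(sigmoid(X @ net["W1"]) @ net["W2"])
```

A useful sanity check on the design: summing per-partition gradients reproduces the full-batch gradient exactly, so pattern partitioning changes where the work is done but not what is learned per epoch. Communication cost then comes only from exchanging weight changes, not activations.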