Publisher: Academy & Industry Research Collaboration Center (AIRCC)
Abstract: In this paper we study the improvement in the performance of Artificial Neural Networks (ANN) achieved by using parallel programming on GPU or FPGA architectures. It is well known that ANNs can be parallelized according to particular characteristics of the training algorithm. We discuss both approaches: software (GPU) and hardware (FPGA). Different training strategies are considered: the Perceptron training unit, Support Vector Machines (SVM), and Spiking Neural Networks (SNN). The approaches are evaluated in terms of training speed and performance. In the surveyed works, the algorithms were implemented by their authors on hardware such as Nvidia cards, FPGAs, or sequential circuits, depending on the methodology used, in order to compare learning time between GPU and CPU. The main applications addressed are pattern recognition tasks, such as acoustic speech recognition, odor recognition, and clustering. According to the literature, the GPU has a clear advantage over the CPU in learning time, across several Nvidia card and CPU architectures, except when image rendering is involved. The survey also includes a brief description of the types of ANN and their execution techniques, which we relate to the reported research results.