Publisher: University of Malaya, Faculty of Computer Science and Information Technology
Abstract: Presents a new learning model for neural networks. It combines two previous modifications to the back-propagation algorithm with a new scheme for pruning less contributory training cycles to achieve faster training. Total training time is reduced by predicting future weights at regular intervals from the pattern of previous weight changes. Oscillation among different input patterns is reduced by updating the weights as a function of the sum of the errors over all input patterns. The predefined error level is relaxed by aborting the training session at an early stage, once the weight changes become insignificant from one iteration to the next. The model was used to recognize the Bengali alphabet, and a significant reduction in training time was observed.
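The three speed-up ideas named in the abstract can be illustrated in a minimal sketch. This is a hypothetical single-sigmoid-unit demo, not the paper's implementation: all function names, the learning rate, the prediction interval, and the stopping tolerance are assumptions. It shows (1) a batch update driven by the error summed over all patterns, (2) periodic linear extrapolation ("prediction") of future weights from the previous weight change, and (3) an early abort once per-iteration weight changes become insignificant.

```python
import numpy as np

def train_batch_unit(X, y, lr=0.5, max_epochs=5000,
                     predict_every=10, tol=1e-6, seed=0):
    """Hypothetical sketch of the abstract's three ideas on one sigmoid unit."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    prev = np.append(w, b)
    epochs_run = max_epochs
    for epoch in range(max_epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output for all patterns
        err = y - out
        # One update per epoch, driven by the error summed over all input
        # patterns; this damps oscillation between individual patterns.
        w += lr * (X.T @ err)
        b += lr * err.sum()
        cur = np.append(w, b)
        delta = cur - prev
        # At regular intervals, predict the next weights by extrapolating
        # the most recent weight change one step ahead.
        if (epoch + 1) % predict_every == 0:
            w += delta[:-1]
            b += delta[-1]
            cur = np.append(w, b)
        # Abort early once weight changes per iteration are negligible.
        if np.max(np.abs(delta)) < tol:
            epochs_run = epoch + 1
            break
        prev = cur
    return w, b, epochs_run

# Toy stand-in for character patterns: learn logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b, n = train_batch_unit(X, y)
print(((X @ w + b) > 0).astype(int).tolist())  # → [0, 0, 0, 1]
```

The extrapolation step is effectively a coarse momentum term applied every `predict_every` epochs; in the paper it is used to skip ahead along the observed weight trajectory rather than compute every intermediate update.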