Abstract
Artificial neural networks are powerful tools for many information processing tasks such as pattern recognition, data mining, optimization, and prediction. Finding optimal structures of artificial neural networks is a significant problem for drawing out their full computational performance. Downsizing the network structure is also an important consideration for hardware implementation of large-scale neural networks. In this study, we propose a pruning method that finds a compact structure of feedforward neural networks with high generalization ability in classification problems. Our method evaluates the significance of neuron nodes using information on weight variation during the training process, and preferentially prunes the insignificant nodes as long as the classification accuracy is not degraded. Numerical experiments on several benchmark datasets show that the proposed method is effective compared with other pruning methods.
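To illustrate the overall flow the abstract describes, the following is a minimal sketch, not the paper's exact algorithm: the abstract does not specify how weight variation is turned into a significance score, so here significance is assumed to be the accumulated absolute weight updates of each hidden node, with low-variation nodes treated as insignificant. The one-hidden-layer network, the toy dataset, and the `accuracy` helper are all hypothetical scaffolding for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset (two Gaussian blobs); purely illustrative.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.permutation(200)
X_tr, y_tr = X[idx[:150]], y[idx[:150]]
X_va, y_va = X[idx[150:]], y[idx[150:]]

H = 16                      # deliberately oversized hidden layer
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

# Assumed significance score: accumulated |weight update| per hidden node.
variation = np.zeros(H)

for epoch in range(300):
    h = sig(X_tr @ W1 + b1)             # hidden activations
    p = sig(h @ W2 + b2)                # predicted class-1 probability
    g = (p - y_tr) / len(y_tr)          # cross-entropy gradient w.r.t. output logit
    dW2 = h.T @ g
    dh = np.outer(g, W2) * h * (1 - h)  # backprop through hidden sigmoid
    dW1 = X_tr.T @ dh
    W1 -= lr * dW1; b1 -= lr * dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * g.sum()
    # Track weight variation: absolute updates to each node's in/out weights.
    variation += np.abs(lr * dW1).sum(axis=0) + np.abs(lr * dW2)

def accuracy(mask):
    """Validation accuracy using only the hidden nodes kept by `mask`."""
    h = sig(X_va @ W1[:, mask] + b1[mask])
    return ((sig(h @ W2[mask] + b2) > 0.5) == y_va).mean()

mask = np.ones(H, dtype=bool)
baseline = accuracy(mask)
# Try pruning the least-varying (assumed least significant) nodes first;
# keep each prune only if validation accuracy does not degrade.
for node in np.argsort(variation):
    trial = mask.copy(); trial[node] = False
    if trial.sum() > 0 and accuracy(trial) >= baseline:
        mask = trial

print(f"kept {mask.sum()}/{H} hidden nodes, accuracy {accuracy(mask):.3f}")
```

The accuracy check before committing each prune mirrors the abstract's condition that nodes are removed only while classification accuracy is preserved; the variation-based ranking and the greedy one-node-at-a-time loop are simplifying assumptions for this sketch.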