Abstract: Although the uniform convergence of the extreme learning machine (ELM) has been proved for any continuous probability distribution, the variances of the random numbers used to initialize the input-layer weights and hidden-layer biases have a marked impact on the generalization performance of ELM. In this paper, we validate this effect by testing the classification accuracies of ELMs initialized with random numbers of different variances. We select three commonly used probability distributions (i.e., Uniform, Gamma, and Normal) and 30 UCI data sets to conduct our comparative study. The experimental results yield several important and practically useful observations: (1) Uniform and Gamma distributions with smaller variances usually lead to higher training and testing accuracies for ELMs; (2) in comparison with the Normal distribution, the variances of the Uniform and Gamma distributions have a more significant impact on the classification performance of ELMs; (3) Uniform and Gamma distributions with larger variances can seriously degrade the classification capability of ELMs; (4) ELMs initialized by Uniform and Gamma distributions with larger variances generally need more hidden-layer nodes to achieve classification accuracies equivalent to those initialized with smaller variances; and (5) the Normal distribution is more likely to lead to over-fitting of ELMs.
Keywords: Extreme learning machine, random initialization, probability distribution, variance.
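For concreteness, the following minimal sketch (Python/NumPy, not the authors' implementation) illustrates where the randomly initialized input-layer weights and hidden-layer biases enter an ELM and how a scale parameter controls the variance of the initialization; the function names, the sigmoid activation, and the Gamma shape parameter are illustrative assumptions.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng, dist="uniform", scale=1.0):
    """Train a single-hidden-layer ELM (illustrative sketch).

    Input weights W and biases b are drawn at random from the chosen
    distribution; `scale` controls the variance of the initialization.
    Output weights are solved in closed form via the pseudoinverse.
    """
    n_features = X.shape[1]
    if dist == "uniform":
        W = rng.uniform(-scale, scale, size=(n_features, n_hidden))
        b = rng.uniform(-scale, scale, size=n_hidden)
    elif dist == "normal":
        W = rng.normal(0.0, scale, size=(n_features, n_hidden))
        b = rng.normal(0.0, scale, size=n_hidden)
    elif dist == "gamma":
        # Shape parameter 2.0 is an assumed value for illustration only.
        W = rng.gamma(2.0, scale, size=(n_features, n_hidden))
        b = rng.gamma(2.0, scale, size=n_hidden)
    else:
        raise ValueError(f"unknown distribution: {dist}")
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output (sigmoid)
    beta = np.linalg.pinv(H) @ T             # output weights via pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Example: compare initializations with different variances on the same data.
# rng = np.random.default_rng(0)
# W, b, beta = train_elm(X_train, T_train, n_hidden=100, rng=rng,
#                        dist="uniform", scale=0.1)
```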