Abstract: The generalization performance of the extreme learning machine (ELM) is influenced by the random initialization of its input-layer weights and hidden-layer biases. In this paper, we demonstrate this by testing the classification accuracies of ELMs under different random initializations. Thirty UCI data sets and 24 continuous probability distributions are employed in this experimental study. The results yield the following observations and conclusions: (1) probability distributions with symmetrical, bell-shaped probability density functions (e.g., Hyperbolic Secant, Student's-t, Laplace, and Normal) consistently produce higher training accuracies and easily cause over-fitting of ELM; (2) ELMs whose random input-layer weights and hidden-layer biases are drawn from heavy-tailed distributions (e.g., Gamma, Rayleigh, and Fréchet) achieve better generalization performance; and (3) light-tailed distributions (e.g., Central Chi-Squared, Erlang, F, Gumbel, and Logistic) are usually unsuited to initializing the input-layer weights and hidden-layer biases of ELM. These findings provide useful guidance for the practical application of ELMs in different fields.
Keywords: Extreme learning machine; ELM; generalization performance; random initialization.
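To make the experimental setup concrete, the following is a minimal sketch of an ELM whose input-layer weights and hidden-layer biases are drawn from a chosen probability distribution, as studied in the abstract. The function name, the two example distributions (Normal as a bell-shaped case, a shifted Gamma as a heavy-tailed case), and all parameter values are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def elm_fit_predict(X_train, y_train, X_test, n_hidden=50, init="normal", seed=0):
    """Sketch of an ELM: random input weights/biases from a chosen
    distribution, output weights solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    if init == "normal":                     # symmetrical, bell-shaped case
        W = rng.normal(size=(d, n_hidden))
        b = rng.normal(size=n_hidden)
    elif init == "gamma":                    # heavy-tailed case (shifted to
        W = rng.gamma(2.0, 1.0, size=(d, n_hidden)) - 2.0  # roughly zero mean,
        b = rng.gamma(2.0, 1.0, size=n_hidden) - 2.0       # an assumption here)
    else:
        raise ValueError(f"unknown init: {init}")
    H = np.tanh(X_train @ W + b)             # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y_train       # Moore-Penrose least-squares solution
    return np.tanh(X_test @ W + b) @ beta    # predictions on the test set
```

Comparing the accuracy of `init="normal"` against `init="gamma"` over many seeds and data sets mirrors, in miniature, the kind of comparison across 24 distributions that the study performs.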