Abstract: Echo State Networks (ESNs) are widely used Recurrent Neural Networks. They are dynamical systems including, in state-space form, a nonlinear state equation and a linear output transformation. The common procedure to train ESNs is to randomly select the parameters of the state equation, and then to estimate those of the output equation via a standard least squares problem. Such a procedure is repeated for different instances of the random parameters characterizing the state equation, until satisfactory results are achieved. However, this trial-and-error procedure is not systematic and does not provide any guarantee about the optimality of the identification results. To solve this problem, we propose to complement the identification procedure of ESNs by applying results from scenario optimization. The resulting training procedure is theoretically sound and allows one to link precisely the number of identification instances to a guaranteed optimality bound on relevant performance indexes, such as the Root Mean Square Error and the FIT index of the estimated model evaluated over a validation data-set. The proposed procedure is finally applied to the simulated model of a pH neutralization process: the obtained results confirm the validity of the approach.
Keywords: Echo State Networks; Deep Learning; Neural Network Training; Guaranteed Optimality; Scenario Optimization
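The training procedure summarized in the abstract (fix a randomly drawn state equation, fit the linear readout by least squares, and repeat over several random instances) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the reservoir size, weight distributions, spectral-radius scaling, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def train_esn(u, y, n_res=100, seed=0):
    """Fix a random state equation, then fit the linear readout by least squares.

    Returns the training RMSE of the fitted model. All hyperparameters here
    (reservoir size, weight ranges, spectral radius 0.9) are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n_in = u.shape[1]
    # Randomly drawn state-equation parameters (kept fixed afterwards)
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # Scale the recurrent matrix to spectral radius < 1 (echo state property)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    # Run the nonlinear state equation and collect the state trajectory
    x, X = np.zeros(n_res), np.empty((len(u), n_res))
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        X[t] = x
    # Output equation: standard least-squares estimate of the linear readout
    W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((y - X @ W_out) ** 2))

# Synthetic input/output data, standing in for real identification data
rng = np.random.default_rng(42)
u = rng.uniform(-1.0, 1.0, (500, 1))
y = np.sin(np.cumsum(u, axis=0))

# Trial-and-error repetition over random instances of the state equation:
# sample N instances and keep the best RMSE. Scenario optimization, as proposed
# in the paper, is what ties the number N to a probabilistic optimality bound.
rmses = [train_esn(u, y, seed=s) for s in range(10)]
best_rmse = min(rmses)
```

In this sketch the choice of the number of instances (here 10) is arbitrary; the contribution described in the abstract is precisely a principled way to select that number so that the best observed performance index comes with a guaranteed bound.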