This paper addresses several issues in the weighted linear integration of modular neural networks (MNNs), a paradigm of hybrid multi-learning machines.
First, from the general perspective of variable-weight and variable-element synthesis, three basic kinds of integrated models are discussed: intrinsic-factor-determined, extrinsic-factor-determined, and hybrid-factor-determined. The authors point out that integrations governed by both internal and external factors depend strongly not only on the historical quality of the sub-networks but also on the environment in which the information is processed.
Second, in the sense of mean squared error (MSE), several sufficient conditions are given under which deleting one or more sub-networks from the network population improves the overall system performance. Conversely, when the overall performance of the current MNN system is unsatisfactory, a corresponding improvement strategy that adds one or more sub-networks is presented.
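The deletion criterion can be illustrated with a minimal sketch, assuming fixed integration weights that are re-normalized after each removal and a held-out validation set; the greedy loop and the function names are illustrative, not the paper's exact sufficient conditions.

```python
import numpy as np

def ensemble_mse(outputs, weights, targets):
    """MSE of the weighted-sum ensemble: mean((sum_i w_i f_i(x) - y)^2)."""
    return np.mean((weights @ outputs - targets) ** 2)

def prune_subnetworks(outputs, weights, targets):
    """Greedily delete a sub-network whenever the re-normalized ensemble's
    validation MSE decreases; returns the indices of the survivors.

    outputs: (n_subnets, n_samples) sub-network outputs on validation data.
    """
    keep = list(range(len(weights)))
    best = ensemble_mse(outputs[keep], weights[keep] / weights[keep].sum(), targets)
    for i in list(keep):
        trial = [j for j in keep if j != i]
        if not trial:
            break
        w = weights[trial] / weights[trial].sum()   # re-normalize the rest
        mse = ensemble_mse(outputs[trial], w, targets)
        if mse < best:                              # deletion helps: commit it
            keep, best = trial, mse
    return keep
```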
Third, for the optimal weight vector under the framework of a weighted sum of the sub-networks' outputs, the authors point out that some constraint forms imposed on the sub-networks' integration weights are unreasonable, present a more general form, and briefly describe the corresponding computational algorithms.
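As a point of reference, the classical closed-form solution for MSE-optimal combination weights under only a sum-to-one constraint is sketched below; this is the standard result for linear combinations of estimators, assumed here for illustration rather than taken from the paper's general form.

```python
import numpy as np

def optimal_weights(errors):
    """Weights minimizing the ensemble MSE subject to sum_i w_i = 1.

    errors: (n_subnets, n_samples) residuals e_i = f_i(x) - y on a
    validation set. Closed form: w = C^{-1} 1 / (1^T C^{-1} 1), where
    C_ij = E[e_i e_j] is the error correlation matrix.
    """
    C = errors @ errors.T / errors.shape[1]  # empirical error correlation matrix
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)             # solve C w = 1 instead of inverting C
    return w / (ones @ w)                    # normalize so the weights sum to 1
```

Note that this solution does not force the weights to be non-negative or bounded; imposing extra constraints such as 0 <= w_i <= 1 generally increases the achievable MSE, which is one sense in which overly restrictive constraint forms can be suboptimal.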
Finally, the authors present a new training algorithm for sub-networks named "expert in one thing and good at many" (EOGM). In this algorithm, every sub-network is trained on a primary dataset, with some of that dataset's near neighbors used as auxiliary datasets. Simulation results with a kind of dynamic integration method show the effectiveness of these algorithms: the algorithm trained with EOGM outperforms the same algorithm trained with a common method.
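A minimal sketch of the EOGM data assignment, assuming the training set is partitioned around centroids and that "near neighbors" means the partitions whose centroids are closest; the partitioning scheme, the neighbor count k, and the function name are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def eogm_training_sets(X, y, centroids, k=1):
    """Build one training set per sub-network: its primary partition plus
    the partitions of its k nearest neighboring centroids (the auxiliary,
    'good at many' data). Returns a list of (X_i, y_i) pairs."""
    # Assign each sample to the partition of its nearest centroid.
    assign = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
    # Centroid-to-centroid distances define which partitions are "near neighbors".
    cdist = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    datasets = []
    for i in range(len(centroids)):
        neighbors = np.argsort(cdist[i])[1:k + 1]          # k nearest other partitions
        mask = np.isin(assign, np.concatenate(([i], neighbors)))
        datasets.append((X[mask], y[mask]))                # primary + auxiliary samples
    return datasets
```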