Hybrid and ensemble methods in machine learning have attracted great attention from the scientific community in recent years [Zhou, 12]. Ensemble learning models have been shown, both theoretically and empirically, to provide significantly better performance than single weak learners, especially when dealing with high-dimensional, complex regression and classification problems [Brazdil, 09], [Okun, 08]. Adaptive hybrid systems have become essential in computational intelligence and soft computing, as they are able to deal with evolving components [Lughofer, 11], non-stationary environments [Sayed-Mouchaweh, 12] and concept drift (as presented in the first paper of this special issue, see below). Another main reason for their popularity is the high complementarity of their components. The integration of basic technologies into hybrid machine learning solutions [Cios, 02] facilitates more intelligent search and reasoning methods that match various forms of domain knowledge with empirical data to solve advanced and complex problems [Sun, 00].
Both ensemble models and hybrid methods make use of the information fusion concept, but in slightly different ways. In the case of ensemble classifiers, multiple but homogeneous weak models are combined (e.g., see [Kajdanowicz, 10]), typically at the level of their individual outputs, using various merging methods, which can be grouped into fixed combiners (e.g., majority voting) and trained combiners (e.g., decision templates) [Kuncheva, 04]. Hybrid methods, in turn, combine completely different, heterogeneous machine learning approaches [Castillo, 07], [Corchado, 10]. Both, however, may considerably improve the quality of reasoning and boost the adaptivity of the overall solution. For that reason, ensemble and hybrid methods have found application in numerous real-world problems ranging from person recognition, through medical diagnosis, bioinformatics, recommender systems and text/music classification, to financial forecasting [Castillo, 07], [Okun, 11], [Bergstra, 06], [Kempa, 11].
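As a concrete illustration of the simplest fixed combiner mentioned above, majority voting over base-classifier outputs can be sketched as follows (the function name and the toy labels are illustrative assumptions, not taken from the cited works):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label predictions by simple majority voting.

    predictions: a list of label sequences, one sequence per base classifier,
    all of equal length. Returns one fused label per sample; ties are broken
    in favour of the label seen first among the classifiers.
    """
    fused = []
    for sample_labels in zip(*predictions):
        # The most frequent label across classifiers wins for this sample.
        fused.append(Counter(sample_labels).most_common(1)[0][0])
    return fused

# Three hypothetical weak classifiers voting on four samples:
clf_outputs = [
    ["a", "b", "a", "b"],
    ["a", "a", "a", "b"],
    ["b", "b", "a", "a"],
]
print(majority_vote(clf_outputs))  # → ['a', 'b', 'a', 'b']
```

A trained combiner such as decision templates would instead learn how to weight or match the base-classifier outputs from data, rather than applying a fixed rule as here.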