Journal: International Journal of Applied Mathematics and Computer Science
Electronic ISSN: 2083-8492
Year: 2019
Volume: 29
Issue: 1
Pages: 1-18
DOI: 10.2478/amcs-2019-0012
Publisher: De Gruyter Open
Abstract: Instance selection is often performed as one of the preprocessing methods which, along with feature selection, allows a significant reduction in computational complexity and an increase in prediction accuracy. So far, only a few authors have considered ensembles of instance selection methods, whereas ensembles of final predictive models attract many researchers. To bridge that gap, in this paper we compare four ensembles adapted to instance selection: Bagging, Feature Bagging, AdaBoost and Additive Noise. The last of these is introduced here for the first time. The study is based on an empirical comparison performed on 43 datasets with 9 base instance selection methods. The experiments are divided into three scenarios. In the first, evaluated on a single dataset, we demonstrate the influence of the ensembles on the compression–accuracy relation; in the second, the goal is to achieve the highest prediction accuracy; and in the third, both accuracy and the level of dataset compression constitute a multi-objective criterion. The results indicate that the ensembles improve the base instance selection algorithms, except for unstable methods such as CNN and IB3, although the improvement comes at the expense of compression. Across the scenarios, Bagging and AdaBoost lead in most cases. In the experiments we evaluate three classifiers: 1NN, kNN and SVM. We also note a deterioration in prediction accuracy for the robust classifiers (kNN and SVM) trained on data filtered by any of the instance selection methods (including the ensembles) compared with training them on the entire training set.
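The paper's adaptations of the ensembles are specified in the full text; as a rough illustration of the bagging-style idea only, the Python sketch below wraps a toy base selector (a minimal ENN, standing in for the paper's 9 base methods) in bootstrap rounds with instance-level majority voting. All names (`bagged_instance_selection`, `enn_select`, the `threshold` vote fraction) and the exact voting scheme are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier


def enn_select(X, y, k=3):
    """Minimal Edited Nearest Neighbour (ENN): keep instances whose class
    agrees with at least half of their k nearest neighbours."""
    nn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
    _, ind = nn.kneighbors(X)       # each point is its own first neighbour
    neigh_labels = y[ind[:, 1:]]    # drop the self-neighbour column
    agree = (neigh_labels == y[:, None]).mean(axis=1)
    return np.flatnonzero(agree >= 0.5)


def bagged_instance_selection(X, y, select_fn, n_rounds=25,
                              threshold=0.5, rng=None):
    """Bagging-style wrapper (illustrative sketch) around a base instance
    selection method: run `select_fn` on bootstrap samples and keep an
    instance if it was selected in more than `threshold` of the rounds
    in which it appeared."""
    rng = np.random.default_rng(rng)
    n = len(X)
    votes = np.zeros(n)
    appearances = np.zeros(n)
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)      # bootstrap sample
        kept = select_fn(X[idx], y[idx])      # indices into the sample
        appearances[np.unique(idx)] += 1
        votes[np.unique(idx[kept])] += 1      # map back to original indices
    keep = np.zeros(n, dtype=bool)
    seen = appearances > 0
    keep[seen] = votes[seen] / appearances[seen] > threshold
    return np.flatnonzero(keep)               # never-sampled points are dropped


# Usage: filter a synthetic dataset, then train 1NN on the retained subset.
X, y = make_classification(n_samples=500, n_informative=5, random_state=0)
kept = bagged_instance_selection(X, y, enn_select, n_rounds=25, rng=0)
compression = 1.0 - len(kept) / len(X)        # fraction of instances removed
clf = KNeighborsClassifier(n_neighbors=1).fit(X[kept], y[kept])
print(f"kept {len(kept)}/{len(X)} instances (compression {compression:.2f})")
```

The `compression` value computed above is the kind of quantity that, together with test accuracy, would enter the multi-objective criterion of the third scenario.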
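The Additive Noise ensemble is introduced in the paper itself, so no reference implementation exists outside it. Purely as a speculative sketch of how such an ensemble could be organized (not the authors' algorithm), one might replace bootstrap resampling with small Gaussian feature perturbations and vote over the runs; `sigma` and the voting rule here are assumptions.

```python
import numpy as np


def noise_ensemble_selection(X, y, select_fn, n_rounds=25, sigma=0.05,
                             threshold=0.5, rng=None):
    """Speculative additive-noise ensemble: each round runs the base
    selector on a copy of X perturbed with Gaussian noise scaled by each
    feature's standard deviation; instances kept in more than `threshold`
    of the rounds survive. Illustrative guess, not the paper's method."""
    rng = np.random.default_rng(rng)
    scale = X.std(axis=0) * sigma
    votes = np.zeros(len(X))
    for _ in range(n_rounds):
        X_noisy = X + rng.normal(0.0, scale, size=X.shape)
        votes[select_fn(X_noisy, y)] += 1
    return np.flatnonzero(votes / n_rounds > threshold)
```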