Abstract: With the emergence of new techniques, data in many fields are growing rapidly, especially in dimensionality. The high dimensionality of data poses great challenges to traditional learning algorithms. In fact, many features in large volumes of data are redundant or noisy. Their presence not only degrades the performance of learning algorithms, but also confuses end-users in the post-analysis process. Thus, it is necessary to eliminate irrelevant features before the data are fed into learning algorithms. Many efforts have been devoted to this field, and many effective feature selection methods have been developed. Among the various evaluation criteria, mutual information has been widely used in feature selection because of its ability to quantify the uncertainty of features in classification tasks. However, mutual information estimated on the whole dataset cannot accurately represent the correlation between features. To address this issue, in this paper we first re-estimate mutual information dynamically on identified instances, and then introduce a new feature selection method based on conditional mutual information. Performance evaluations on sixteen UCI datasets show that our proposed method achieves performance comparable to other well-established feature selection algorithms in most cases.
Keywords: Pattern classification, feature selection, mutual information, data mining, pattern recognition; MSC 68T99
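For reference, a minimal sketch of the standard discrete definitions of mutual information and conditional mutual information assumed above; the notation (X for a candidate feature, Y for the class label, Z for an already-selected feature) is illustrative and not fixed by the paper:
\[
  I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\qquad
  I(X;Y \mid Z) = \sum_{z} p(z) \sum_{x}\sum_{y} p(x,y \mid z)\,\log\frac{p(x,y \mid z)}{p(x \mid z)\,p(y \mid z)}.
\]
Here p(·) denotes probabilities estimated from the data; conditioning on Z captures how much information a candidate feature still carries about the label given features already selected.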