
Article Information

  • Title: Dimensionality reduction using singular vectors
  • Authors: Majid Afshar; Hamid Usefi
  • Journal: Scientific Reports
  • Electronic ISSN: 2045-2322
  • Year: 2021
  • Volume: 11
  • Issue: 1
  • Pages: 1
  • DOI: 10.1038/s41598-021-83150-y
  • Publisher: Springer Nature
  • Abstract: A common problem in machine learning and pattern recognition is identifying the most relevant features, specifically when dealing with high-dimensional datasets in bioinformatics. In this paper, we propose a new feature selection method, called Singular-Vectors Feature Selection (SVFS). Let $D = [A \mid \mathbf{b}]$ be a labeled dataset, where $\mathbf{b}$ is the class label and the features (attributes) are the columns of the matrix $A$. We show that the signature matrix $S_A = I - A^{\dagger}A$ can be used to partition the columns of $A$ into clusters so that columns in a cluster correlate only with columns in the same cluster. In the first step, SVFS uses the signature matrix $S_D$ of $D$ to find the cluster that contains $\mathbf{b}$. We reduce the size of $A$ by discarding the features in the other clusters as irrelevant. In the next step, SVFS uses the signature matrix $S_A$ of the reduced $A$ to partition the remaining features into clusters and chooses the most important features from each cluster. SVFS works perfectly on synthetic datasets, and comprehensive experiments on real-world benchmark and genomic datasets show that SVFS exhibits overall superior performance compared to state-of-the-art feature selection methods in terms of accuracy, running time, and memory usage. A Python implementation of SVFS, along with the datasets used in this paper, is available at https://github.com/Majid1292/SVFS .
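The signature matrix $S_A = I - A^{\dagger}A$ mentioned in the abstract projects onto the null space of $A$: columns of $A$ that are linearly related leave a nonzero footprint coupling them in $S_A$, while an independent column's row and column of $S_A$ are zero. The following is a minimal NumPy sketch of that idea on toy data; it is an illustration of the signature matrix only, not the authors' SVFS implementation (which is available at the GitHub link above).

```python
import numpy as np

# Signature matrix from the abstract: S_A = I - A^+ A, where A^+ is the
# Moore-Penrose pseudoinverse. S_A projects onto the null space of A.
rng = np.random.default_rng(0)

# Toy data: column 1 is an exact multiple of column 0 (correlated pair),
# column 2 is independent noise.
f0 = rng.standard_normal(6)
A = np.column_stack([f0, 2 * f0, rng.standard_normal(6)])

S = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

# Nonzero entries of S couple the dependent columns 0 and 1; the
# independent column 2 yields a (numerically) zero row and column,
# which is the separation SVFS exploits to cluster features.
print(np.round(S, 3))
print(np.allclose(S[:, 2], 0))
```

On this example $S$ is the rank-one projector onto the null vector $(-2, 1, 0)/\sqrt{5}$, so the $2 \times 2$ block over columns 0 and 1 is nonzero while the third row and column vanish.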
Copyright © National Center for Philosophy and Social Sciences Documentation