
Article Information

  • Title: A COMPARISON OF REGULARIZED LINEAR DISCRIMINANT FUNCTIONS FOR POORLY-POSED CLASSIFICATION PROBLEMS
  • Authors: L. A. Thompson; Wade Davis; Phil D. Young
  • Journal: Journal of Data Science
  • Print ISSN: 1680-743X
  • Electronic ISSN: 1683-8602
  • Year: 2019
  • Volume: 17
  • Issue: 1
  • Pages: 1-36
  • DOI: 10.6339/JDS.201901_17(1).0001
  • Publisher: Tingmao Publish Company
  • Abstract: For statistical classification problems where the total sample size is slightly greater than the feature dimension, regularized statistical discriminant rules may reduce classification error rates. We review ten dispersion-matrix regularization approaches, four for the pooled sample covariance matrix, four for the inverse pooled sample covariance matrix, and two for a diagonal covariance matrix, for use in Anderson’s (1951) linear discriminant function (LDF). We compare these regularized classifiers against the traditional LDF for a variety of parameter configurations, and use the estimated expected error rate (EER) to assess performance. We also apply the regularized LDFs to a well-known real-data example on colon cancer. We found that no regularized classifier uniformly outperformed the others. However, we found that the more contemporary classifiers (e.g., Thomaz and Gillies, 2005; Tong et al., 2012; and Xu et al., 2009) tended to outperform the older classifiers, and that certain simple methods (e.g., Pang et al., 2009; Thomaz and Gillies, 2005; and Tong et al., 2012) performed very well, questioning the need for involved cross-validation in estimating regularization parameters. Nonetheless, an older regularized classifier proposed by Smidt and McDonald (1976) yielded consistently low misclassification rates across all scenarios, regardless of the shape of the true covariance matrix. Finally, our simulations showed that regularized classifiers that relied primarily on asymptotic approximations with respect to the training sample size rarely outperformed the traditional LDF, and are thus not recommended. We discuss our results as they pertain to the effect of high dimension, and offer general guidelines for choosing a regularization method for poorly-posed problems. (An illustrative sketch of a regularized LDF appears after the keyword list below.)
  • Keywords: Poorly-posed classification problems; Shrinkage estimator; Eigenvalue adjustment; Expected error rate
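
The abstract refers to Anderson's (1951) linear discriminant function (LDF) computed with a regularized estimate of the pooled covariance matrix. As a rough orientation only, the following is a minimal Python sketch of a two-class LDF in which the pooled sample covariance is shrunk toward a scaled identity matrix; the shrinkage target and the weight lam are illustrative assumptions and do not correspond to any of the ten regularization schemes compared in the paper.

import numpy as np

def pooled_covariance(X1, X2):
    # Pooled sample covariance of two training samples (rows are observations).
    n1, n2 = len(X1), len(X2)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

def regularized_ldf(X1, X2, lam=0.1):
    # Anderson-style two-class LDF with a generic ridge-type shrinkage of the
    # pooled covariance:  S_lam = (1 - lam) * S_p + lam * (tr(S_p) / p) * I.
    # The shrinkage target and lam are illustrative, not one of the paper's estimators.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S_p = pooled_covariance(X1, X2)
    p = S_p.shape[0]
    S_lam = (1 - lam) * S_p + lam * (np.trace(S_p) / p) * np.eye(p)
    S_inv = np.linalg.inv(S_lam)

    def classify(x):
        # Assign to population 1 if the discriminant score is positive
        # (equal priors and equal misclassification costs assumed).
        score = (m1 - m2) @ S_inv @ (np.asarray(x) - (m1 + m2) / 2)
        return 1 if score > 0 else 2

    return classify

With equal prior probabilities and costs, the rule assigns an observation to population 1 when the discriminant score is positive; larger values of lam pull the covariance estimate further toward the identity target, which is what keeps the rule usable when the sample size only slightly exceeds the feature dimension.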