Journal: APSIPA Transactions on Signal and Information Processing
Print ISSN: 2048-7703
Electronic ISSN: 2048-7703
Year: 2018
Volume: 7
Pages: 1-9
DOI:10.1017/ATSIP.2018.22
Publisher: Cambridge University Press
Abstract: The covariance matrix of signals is one of the most essential pieces of information in multivariate analysis and other signal processing techniques. The estimation accuracy of a covariance matrix degrades when some eigenvalues of the matrix are nearly duplicated. Although this degradation has been theoretically analyzed in the asymptotic case of infinitely many variables and observations, the finite case remains an open problem. This paper tackles the problem using a Bayesian approach, in which the learning coefficient represents the generalization error. The learning coefficient is derived in a special case, i.e., the covariance matrix is spiked (all eigenvalues take the same value except one) and a shrinkage estimation method is employed. Our theoretical analysis shows a non-monotonic property: the learning coefficient increases with the difference between the eigenvalues up to a critical point, then decreases beyond that point and converges to the value for the distinct-eigenvalue case. The result is validated by numerical experiments.
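To make the setting of the abstract concrete, the following is a minimal sketch of a spiked covariance model (all eigenvalues equal except one) together with a simple linear shrinkage of the sample covariance toward a scaled identity. The shrinkage rule and its parameter alpha are illustrative assumptions; the paper's specific shrinkage estimator, Bayesian treatment, and learning-coefficient derivation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 20          # number of variables
n = 100         # number of observations
spike = 5.0     # the single distinct eigenvalue
base = 1.0      # common value of the remaining eigenvalues

# Spiked covariance: eigenvalues (spike, base, ..., base) in a random orthonormal basis.
q, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigvals = np.full(d, base)
eigvals[0] = spike
sigma = q @ np.diag(eigvals) @ q.T

# Draw n zero-mean samples and form the sample covariance matrix.
x = rng.multivariate_normal(np.zeros(d), sigma, size=n)
sample_cov = x.T @ x / n

# Simple linear shrinkage toward a scaled identity (an illustrative choice,
# not the estimator analyzed in the paper).
alpha = 0.3
target = (np.trace(sample_cov) / d) * np.eye(d)
shrunk_cov = (1 - alpha) * sample_cov + alpha * target

# Compare the largest eigenvalues: shrinkage pulls the sample eigenvalues
# toward their average, reducing the spread caused by estimation noise.
print("true eigenvalues:   ", np.sort(np.linalg.eigvalsh(sigma))[::-1][:3])
print("sample eigenvalues: ", np.sort(np.linalg.eigvalsh(sample_cov))[::-1][:3])
print("shrunk eigenvalues: ", np.sort(np.linalg.eigvalsh(shrunk_cov))[::-1][:3])
```

Varying `spike` relative to `base` in such a simulation is one way to probe, numerically, how the gap between eigenvalues affects estimation behavior, which is the regime the paper studies through the learning coefficient.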