
Article Information

  • Title: Asymptotic Optimality of One-Group Shrinkage Priors in Sparse High-dimensional Problems
  • Authors: Prasenjit Ghosh; Arijit Chakrabarti
  • Journal: Bayesian Analysis
  • Print ISSN: 1936-0975
  • Electronic ISSN: 1931-6690
  • Year: 2017
  • Volume: 12
  • Issue: 4
  • Pages: 1133-1161
  • DOI: 10.1214/16-BA1029
  • Language: English
  • Publisher: International Society for Bayesian Analysis
  • Abstract: We study the asymptotic optimality of inference in a high-dimensional sparse normal means model using a broad class of one-group shrinkage priors. Assuming that the proportion of non-zero means is known, we show that the corresponding Bayes estimates asymptotically attain the minimax risk (up to a multiplicative constant) for estimation under squared error loss. The constant is shown to be 1 for the important sub-class of “horseshoe-type” priors, proving an exact asymptotic minimaxity property for these priors, a result hitherto unknown in the literature. An empirical Bayes version of the estimator is shown to achieve the minimax rate when the level of sparsity is unknown. We prove that the resulting posterior distributions contract around the true mean vector at the minimax optimal rate and provide important insight into the possible rate of posterior contraction around the corresponding Bayes estimator. Our work shows that for rate optimality, a heavy-tailed prior with sufficient mass around zero is enough; a pole at zero, as in the horseshoe prior, is not necessary. This part of the work is inspired by van der Pas et al. (2014), and we develop novel unifying arguments to extend their results to the general class of priors under study. Next we focus on simultaneous hypothesis testing for the means under the additive 0–1 loss, where the means are modeled through a two-groups mixture distribution. We study the asymptotic risk properties of certain multiple testing procedures induced by the class of one-group priors under study when applied in this set-up. Our key results show that the tests based on the “horseshoe-type” priors asymptotically achieve the risk of the optimal solution in this two-groups framework up to the correct constant and are thus asymptotically Bayes optimal under sparsity (ABOS). This is the first result showing that, in a sparse problem, a class of one-group priors can asymptotically mimic the performance of an optimal two-groups solution exactly. Our work reveals an intrinsic technical connection between the theories of minimax estimation and simultaneous hypothesis testing for such one-group priors.
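For orientation, the setting referenced in the abstract above can be written out explicitly. The following is a standard sketch of the sparse normal means model, the one-group (horseshoe-type) hierarchy, the resulting shrinkage estimator, and the induced multiple-testing rule; the notation is the conventional one in this literature, not necessarily the paper's own.

```latex
% Sparse normal means model: n observations, q_n = o(n) non-zero means
X_i = \theta_i + \epsilon_i, \qquad \epsilon_i \stackrel{iid}{\sim} N(0,1),
\qquad i = 1, \dots, n.

% One-group (global-local) shrinkage hierarchy; the horseshoe prior
% takes \lambda_i \sim C^{+}(0,1), a standard half-Cauchy local scale.
\theta_i \mid \lambda_i, \tau \sim N(0, \lambda_i^2 \tau^2),
\qquad \lambda_i \sim \pi(\lambda_i).

% With shrinkage factor \kappa_i = 1/(1 + \lambda_i^2 \tau^2),
% the Bayes estimate is a data-adaptive linear shrinkage of X_i:
E[\theta_i \mid X_i] = \bigl(1 - E[\kappa_i \mid X_i]\bigr)\, X_i.

% Induced test of H_{0i} : \theta_i = 0 (the "half-thresholding" rule):
\text{reject } H_{0i} \iff E[1 - \kappa_i \mid X_i] > \tfrac{1}{2}.
```

The minimax benchmark over the nearly black class in this problem is the classical 2 q_n log(n/q_n)(1 + o(1)); attaining it, with constant exactly 1 for horseshoe-type priors, is the estimation result described in the abstract.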
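Below is a minimal numerical sketch of how such an estimator and test behave in practice, assuming a half-Cauchy local prior, a plug-in global scale tau = q/n (treating the sparsity level as known), and simple one-dimensional quadrature. The function name and all parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import integrate, stats

def posterior_shrinkage_weight(x, tau):
    """E[1 - kappa | x] under a horseshoe prior, by 1-D quadrature.

    Assumed hierarchy (illustrative, not the authors' code):
      x | theta        ~ N(theta, 1)
      theta | lam, tau ~ N(0, lam**2 * tau**2)
      lam              ~ half-Cauchy(0, 1)
    kappa = 1 / (1 + lam**2 * tau**2) is the shrinkage factor; the
    posterior mean is E[theta | x] = x * E[1 - kappa | x].
    """
    def f(lam, moment):
        v = (lam * tau) ** 2
        w = v / (1.0 + v)                                     # = 1 - kappa
        marginal = stats.norm.pdf(x, scale=np.sqrt(1.0 + v))  # x | lam
        prior = 2.0 / (np.pi * (1.0 + lam ** 2))              # half-Cauchy density
        return (w ** moment) * marginal * prior

    num, _ = integrate.quad(f, 0.0, np.inf, args=(1,))
    den, _ = integrate.quad(f, 0.0, np.inf, args=(0,))
    return num / den

rng = np.random.default_rng(0)
n, q = 1000, 20                  # q of the n means are non-zero (sparse)
theta = np.zeros(n)
theta[:q] = 6.0                  # signal strength
x = theta + rng.standard_normal(n)

tau = q / n                      # plug-in global scale, sparsity "known"
w = np.array([posterior_shrinkage_weight(xi, tau) for xi in x])
theta_hat = w * x                # one-group Bayes estimate of the means
reject = w > 0.5                 # induced half-thresholding test

print("squared-error loss     :", np.sum((theta_hat - theta) ** 2))
print("benchmark 2*q*log(n/q) :", 2 * q * np.log(n / q))
print("true signals rejected  :", int(reject[:q].sum()), "of", q)
print("false positives        :", int(reject[q:].sum()))
```

The same quadrature routine serves both parts of the paper's story: the weights w drive the shrinkage estimate whose loss is compared against the minimax benchmark, and thresholding the same weights at 1/2 yields the multiple-testing procedure whose risk the paper compares to the optimal two-groups solution.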