Abstract: We define a new Bayesian predictor called the posterior weighted median (PWM) and compare its performance to several other predictors, including the Bayes model average under squared error loss, the Barbieri-Berger median model predictor, the stacking predictor, and the model average predictor based on Akaike's information criterion. We argue that the PWM generally gives better performance than the other predictors over a range of M-complete problems, namely those lying between the M-closed-M-complete boundary and the M-complete-M-open boundary. Indeed, as a problem moves closer to M-open, M-complete predictive methods appear to break down. Our comparisons rest on extensive simulations and real data examples.