
Article information

  • Title: Predicting inflation: does the quantity theory help?
  • Authors: Bachmeier, Lance J.; Swanson, Norman R.
  • Journal: Economic Inquiry
  • Print ISSN: 0095-2583
  • Year: 2005
  • Issue: July
  • Language: English
  • Publisher: Western Economic Association International
  • Keywords: Inflation (Economics); Inflation (Finance); United States economic conditions

Predicting inflation: does the quantity theory help?


Bachmeier, Lance J. ; Swanson, Norman R.


I. INTRODUCTION

Inflation forecasting has played a key role in recent U.S. monetary policy, and this has led to a renewed search for variables that serve as good indicators of future inflation. One frequently used indicator, based on the Phillips curve, is the unemployment rate or a similar measure of the output gap, as in Gerlach and Svensson (2003), Clark and McCracken (2003), Mankiw (2001), and Gali and Gertler (1999). The Phillips curve is believed by many to be the preferred tool for forecasting inflation (see, e.g., Mankiw 2001; Stock and Watson 1999a; Blinder 1997) though as argued by Sargent (1999) its use in formulating monetary policy is not without controversy. Another approach, based on the quantity theory of money, uses monetary aggregates to predict inflation. Despite the strong theoretical motivation for this approach, though, there is little evidence that measures of the nominal money supply are useful for predicting inflation relative to a conventional unemployment rate Phillips curve model; see Stock and Watson (1999a) for a detailed analysis of the forecast performance of popular inflation indicators. Stock and Watson's results indicate that even simple univariate time-series models generally forecast about as well as models that include measures of the money supply, so that it is hard to make the case that nominal money supply data have any predictive content for inflation. (1)

This article evaluates inflation forecasts made by models that allow for prices, money, and output to be cointegrated, and in the process reexamines the question of whether monetary aggregates have marginal predictive content for inflation. Our work is motivated in part by economic theory, as the presence of a cointegrating relationship among the series we look at corresponds to the assumption that prices, the money supply, and output "hang together" in the long run, a feature implicit in most analyses based on the quantity theory.

From a statistical point of view, a system with cointegrated regressors does not have a finite-order vector autoregressive (VAR) representation, so that a VAR in differences will be misspecified and may not forecast well regardless of the relevance of the included variables. Our analysis is therefore focused on the questions, "Are there gains, in terms of forecast accuracy, from imposing the restriction that prices, money, and output are cointegrated?", "Does it matter whether cointegrating restrictions are imposed a priori based on economic theory, or can they be estimated?", and "Do models imposing cointegration among prices, money, and output forecast inflation as well as the Phillips curve and other alternative models?"

The econometric framework that we employ is similar to that of Stock and Watson (1999a) but differs from theirs in two ways. First, Stock and Watson (1999a) consider one-year horizon inflation forecasts, whereas we consider forecast horizons of up to five years. This is potentially important in our context, as we include versions of the quantity theory of money in our analysis, a theory that arguably may not yield substantive forecasting gains in the short run. Additionally, future inflation at many horizons is in general of interest to policy makers (even if the weight attached to inflation at different horizons is a matter of individual preference), so that long-run predictions are uninformative only if they fail to have marginal predictive content for inflation. (2) A second difference between our work and that of Stock and Watson (1999a) is that some of our models differ from theirs, including, for example, those that impose quantity theory-based cointegrating restrictions. In these models we either (1) impose a cointegrating restriction derived from the assumption of stationary velocity, or (2) estimate the cointegrating restrictions. We also examine a fairly broad variety of (linear) models, including simple autoregressive (AR) models in levels and differences; conventional unemployment rate Phillips curve models; and VAR models in levels and differences with money, prices, and output. As a strawman model with which to compare our "best" models, we use various random walk models, and all models are evaluated using standard loss criteria, such as mean square forecast error, as well as tests of equal predictive accuracy.

Our approach is to consider alternative h-quarter ahead inflation predictions from the models mentioned. We analyze two different periods, one from 1979:4-1992:4 and one from 1993:1-2003:2. These periods are analyzed separately because, as is shown below, Johansen (1988, 1991) trace tests find cointegrating ranks of at least 1 through 1992 and 0 thereafter, so that it is reasonable to allow for the possibility of a structural break around 1993. (3) Sequences of one-quarter to five-year ahead predictions are made for the period 1979:4-1992:4, with one sequence of predictions constructed for each model and for each forecast horizon. This is done by reestimating each model in a recursive fashion, using observations through 1979:4-h for the first forecast and observations through 1992:4-h for the last forecast, for h = 1, ..., 20 quarters. By focusing part of our attention on the period 1979-92, we are able to assess whether predictions made using cointegrating restrictions estimated over a period for which it is well accepted that cointegration was present dominate predictions made without imposing cointegration. Furthermore, if estimated cointegrating restrictions over this period fail to yield predictive performance improvements, while restrictions imposed a priori based on economic theory do yield improvements, then we have direct evidence that the lack of success of cointegration type models in forecasting noted widely in the literature may be due in large part to parameter estimation error. We also carry out a version of the exercise for the period from 1993:1-2003:2. Before describing our findings, it is worth stressing that there is much evidence that preexisting cointegrating relations broke down in the 1990s, as shown, for instance, by Carlson et al. (2000). However, we are interested in a wide range of forecast horizons, and cointegration tests are implicitly based on one-step-ahead forecasting models. Thus, the failure of empirical cointegration tests does not imply that there are no long-run restrictions among the variables that would yield improved long-run predictions. (4)

Our findings are clear-cut and can be summarized as follows. First, by allowing for prices, money, and output to be cointegrated, and by considering a variety of forecast horizons, there is evidence that M2 has marginal predictive content for inflation. For the earlier time period, a vector error correction (VEC) model consistent with the quantity theory forecasts better than other models for many horizons, including an AR model, thus justifying the use of M2 as an inflation indicator. Over the more recent period, the VEC model no longer forecasts well, which is not surprising given the breakdown of cointegration mentioned. However, the VAR model in differences does forecast well for that time period relative to an AR benchmark. This leads us to conclude that there is strong and robust evidence in favor of M2 as an inflation indicator.

Second, our findings supporting the usefulness of imposing cointegration are limited to the case where velocity is restricted to be stationary. For the period 1979:4-1992:4, we find that (1) a VEC model that imposes stationary velocity typically forecasts better than a VAR in differences; (2) forecasts from the VEC with stationary velocity also dominate, at all forecast horizons, those made using a VEC for which the cointegrating rank and vectors are estimated; and (3) forecasts from a VAR in differences dominate the forecasts, at all horizons, made using a VEC with estimated cointegrating rank and vectors. These findings are suggestive. In particular, they provide evidence that when the cointegrating restriction(s) are estimated, we do better by simply using a VAR in differences. This corresponds to the finding of Clements and Hendry (1996), Hoffman and Rasche (1996), and Lin and Tsay (1996) that VEC models do not usually predict better than VAR models. What is interesting, though, is that when we impose the parameter (cointegration) restriction directly, based on theory, the VEC model does outperform the VAR model. This in turn suggests that one reason for VEC failure in practical applications may be imprecise estimation of cointegration vector(s) and/or cointegration space ranks, rather than incorrect model specification. Put another way, theory is important and should be incorporated whenever possible. Given this finding, we perform a series of Monte Carlo experiments to investigate the effect of cointegration vector rank and parameter estimation error on VEC model forecasts. Using simulated data calibrated to be consistent with the historical U.S. record, we find that for some configurations, the impact of cointegration vector rank and parameter estimation error on VEC model forecasts is substantial.

A second set of Monte Carlo experiments is also run because we find that a random walk model is the only model (including the Phillips curve model) that the VEC model does not dominate at long horizons. Although such a finding is not important for our analysis, because we are interested in determining which variables have marginal predictive content for inflation, and comparison with the random walk model cannot answer this question, it is common to use a random walk model as a benchmark in out-of-sample forecast comparisons. A common interpretation of the failure of a model based on economic theory to forecast better than a random walk model is that the theory-based model is incorrectly specified. This interpretation is investigated using simulated data, calibrated to be consistent with the historical U.S. data, for two data-generating processes (DGPs), a second-order AR (AR(2)) model and a VAR model. We show that for samples as large as 500, it is difficult to reject the null hypothesis that an AR(1) model forecasts as well as an AR(2) model, even when the data are generated according to an AR(2) process, and a random walk model usually forecasts better than a VAR model, even when the data are generated according to a VAR model. This serves to point out that one needs to be cautious when interpreting the results of out-of-sample forecast comparisons with atheoretical time-series models, because parameter estimation error can cause correctly specified econometric models to forecast poorly. In particular, results from this experiment suggest that failure of an estimated version of a particular theoretical model to outperform a strawman random walk model in forecasting should not be taken as evidence that the theoretical model is not useful.

The remainder of the article is organized as follows. Section II discusses the data used in our empirical investigation, while section III outlines the methodology used. Quantitative findings are presented in section IV, and section V discusses the results of our Monte Carlo experiments. In section VI, concluding remarks and directions for future research are given.

II. THE DATA

All data were downloaded from the Federal Reserve Economic Database on the Federal Reserve Bank of St. Louis Web site. All data are quarterly U.S. figures for the period 1959:1 to 2003:2. For the price level, [P.sub.t], we use the gross domestic product (GDP) implicit price deflator. In accordance with this choice of price index, we also use gross domestic product, [Q.sub.t], in chained 1996 dollars as our measure of real output. The money supply, [M.sub.t], data we use are seasonally adjusted M2 figures. This choice of monetary aggregate is obviously not without its drawbacks. Barnett and Serletis (2000), for example, contains a number of contributions that show the importance of using a monetary services index rather than simple-sum M2. Although the points made in these papers are important and valid, Diewert (2000) points out that Divisia money supply measures require arbitrary choices in their construction, and these arbitrary choices can have a significant impact on empirical analysis. It is therefore natural to expect policy makers, at least in principle, to be interested in findings based on simple-sum M2 measures. (5) Finally, unemployment, [U.sub.t], is the seasonally adjusted civilian unemployment rate.
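For concreteness, the following is a minimal Python sketch of how such a data set might be assembled. The FRED mnemonics GDPDEF, GDPC1, M2SL, and UNRATE and the use of pandas_datareader are our assumptions; the article does not name series codes or software.

```python
import numpy as np
from pandas_datareader import data as pdr

# Quarterly U.S. data, 1959:1-2003:2. GDPDEF and GDPC1 are already quarterly;
# M2SL and UNRATE are monthly and are averaged to quarters below.
raw = pdr.DataReader(["GDPDEF", "GDPC1", "M2SL", "UNRATE"], "fred",
                     "1959-01-01", "2003-06-30")
q = raw.resample("QS").mean().dropna()

p = np.log(q["GDPDEF"])   # log price level, p_t
m = np.log(q["M2SL"])     # log money supply (M2), m_t
qy = np.log(q["GDPC1"])   # log real output, q_t
u = q["UNRATE"]           # civilian unemployment rate, U_t
```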

III. METHODOLOGY

To begin, consider the equation of exchange, namely

(1) [M.sub.t][V.sub.t] = [P.sub.t][Q.sub.t],

where [P.sub.t], [M.sub.t], and [Q.sub.t] are as defined, and [V.sub.t] is the velocity of money with respect to nominal output. Now, assume that the natural logarithms of [P.sub.t], [M.sub.t], and [Q.sub.t] are I(1), using the terminology of Engle and Granger (1987). This assumption is standard in the literature testing for a cointegrating relationship between prices, money, and output, although as discussed by Culver and Papell (1997) there has been some debate as to whether prices are I(1) or I(2). Unit root tests show that, for all three series, a unit root can be rejected for the first difference but not for the level or the logged level. (6) In addition, for the time being, assume that [v.sub.t] = log([V.sub.t]) is I(0); see, for example, Feldstein and Stock (1994) or Estrella and Mishkin (1997) for a discussion of this assumption. Now, rearranging (1),

(2) [v.sub.t] = [p.sub.t] - [m.sub.t] + [q.sub.t],

where lowercase letters signify the use of natural logarithms. Assuming that there exists a VAR representation of [p.sub.t], [m.sub.t], and [q.sub.t], the assumption of stationary velocity implies: (1) that there exists a cointegrating restriction among [p.sub.t], [m.sub.t], and [q.sub.t]; and (2) that the cointegrating vector linking the variables is (1, -1, 1), up to a scalar multiple. The Granger representation theorem of Engle and Granger (1987) then states that the VAR in levels can be written as a VEC model with price equation:

[DELTA][p.sub.t] = [[beta].sub.0] + [l.summation over (i=1)] [[beta].sub.pi][DELTA][p.sub.t-i] + [l.summation over (i=1)] [[beta].sub.mi][DELTA][m.sub.t-i] + [l.summation over (i=1)] [[beta].sub.qi][DELTA][q.sub.t-i] + [phi][z.sub.t-1] + [[epsilon].sub.t],

where [z.sub.t-1] = [alpha]'([p.sub.t-1], [m.sub.t-1], [q.sub.t-1])', [alpha] is a 3x1 vector of constants (i.e., the cointegration vector), [[epsilon].sub.t] is an error term, and l denotes the number of lags included in the VEC model. This is our benchmark model, used to predict inflation, where we assume that [alpha] = (1, -1, 1)', and is referred to as the "quantity theory VEC model." Alternatively, rather than fixing [alpha] (and assuming that [v.sub.t] is stationary), we estimate [alpha] using the methodology of Johansen (1988, 1991), allowing for the possibility that there may be no cointegration, so that [alpha] = 0, and [z.sub.t-1] is not included in the model. (7) This model is referred to as our "estimated VEC model." (8)
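To make the two specifications concrete, here is a minimal Python sketch, using statsmodels, of (i) the Johansen trace test used to choose the cointegrating rank and (ii) the price equation of the quantity theory VEC model with the vector (1, -1, 1) imposed a priori. The helper names and lag defaults are ours, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# df is assumed to be a DataFrame holding the logged series in columns 'p', 'm', 'q'.

def johansen_rank(df, k_ar_diff=1, sig_col=1):
    """Cointegrating rank chosen by the Johansen trace test
    (sig_col: 0 = 10%, 1 = 5%, 2 = 1% critical values)."""
    res = coint_johansen(df[["p", "m", "q"]], det_order=0, k_ar_diff=k_ar_diff)
    rank = 0
    for trace, cv in zip(res.lr1, res.cvt[:, sig_col]):
        if trace > cv:
            rank += 1
        else:
            break
    return rank

def quantity_theory_price_equation(df, lags=1):
    """Price equation of the quantity theory VEC model: Delta p_t regressed on
    lagged growth rates of p, m, q and on z_{t-1} = p_{t-1} - m_{t-1} + q_{t-1},
    i.e. with the cointegrating vector (1, -1, 1) imposed a priori."""
    d = df[["p", "m", "q"]].diff()
    z = df["p"] - df["m"] + df["q"]
    X = pd.concat([d.shift(i).add_suffix(f"_lag{i}") for i in range(1, lags + 1)]
                  + [z.shift(1).rename("z_lag1")], axis=1)
    data = pd.concat([d["p"].rename("dp"), X], axis=1).dropna()
    return sm.OLS(data["dp"], sm.add_constant(data.drop(columns="dp"))).fit()

# The "estimated VEC model" would instead re-run johansen_rank before each
# forecast and, when the rank is nonzero, use the estimated eigenvectors
# (res.evec) in place of the imposed (1, -1, 1) vector.
```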

The two preceding models, as well as all of our other models, can be written as restricted versions of the following VEC model:

(3) [y.sub.t+h] = [[beta].sub.0] + [beta](L)[x.sub.t] + [phi][z.sub.t] + [[epsilon].sub.t+h], h = 1, ..., [h.sub.max],

where [y.sub.t+h] is a scalar equal to either [p.sub.t+h] or [DELTA][p.sub.t+h], [[beta].sub.0] is a constant, [beta](L) is a matrix polynomial in the lag operator L, [x.sub.t] is a vector of explanatory variables, and [h.sub.max] = 20. When computing forecasts for the VEC models, we set [y.sub.t+h] = [DELTA][p.sub.t+h] and [x.sub.t] = ([DELTA][p.sub.t], [DELTA][m.sub.t], [DELTA][q.sub.t])'. Note also that [phi] is restricted to be equal to zero for models specified in levels or models specified in differences that do not allow for cointegration. For a given value of h, (3) is reestimated recursively (i.e., reestimated before each new forecast is constructed) to yield a sequence of 53 real-time rolling inflation forecasts for 1979:4-1992:4 and 42 rolling real-time inflation forecasts for 1993:1-2003:2. This procedure is then carried out again for a new value of h, and is repeated until [h.sub.max] sequences of real-time h-step ahead forecasts are constructed. Forecasts are produced using models estimated with "increasing" windows of data, so that the estimation sample for all forecasts begins with 1959:1. All estimations in this article are based on the principle of maximum likelihood, and all lags (l) are reestimated at each point in time (i.e., before each new prediction is constructed) and for each h, using the Schwarz (1978) information criterion (SIC), which is widely known to dominate other lag selection criteria (such as the Akaike [1973, 1974] information criterion) when the objective is to produce optimal forecasting models. See Swanson and White (1995, 1997) for evidence on the forecast performance of models selected by the SIC. (9) Based on the model, we define our prediction of cumulative inflation at period t + h as [[bar.[pi]].sub.t+h] = [[summation].sup.h.sub.k=1] [DELTA][[bar.p].sub.t+k|t], which implies that [[bar.p].sub.t+h|t] = [p.sub.t] + [[bar.[pi]].sub.t+h], where in all cases the |t symbol denotes conditioning on information available at time t.
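As an illustration of this recursive direct-forecast scheme, the sketch below re-fits, at a given forecast origin, a direct regression of [DELTA][p.sub.t+h] on time-t information with the lag length re-chosen by the SIC (BIC); cumulative inflation [[bar.[pi]].sub.t+h] would then be obtained by summing such forecasts over k = 1, ..., h. The helper name and the maximum lag bound are our own choices, not part of the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def direct_forecast(dp, X_extra, h, max_lags=8):
    """Forecast Delta p_{t+h} from time-t information by a direct regression.
    dp: inflation series; X_extra: DataFrame of extra regressors (may have no
    columns), aligned on the same index as dp."""
    target = dp.shift(-h).rename("y")                      # Delta p_{t+h}
    best_fit, best_bic, best_l = None, np.inf, 1
    for l in range(1, max_lags + 1):
        lags = pd.concat([dp.shift(i).rename(f"dp_lag{i}") for i in range(l)]
                         + [X_extra.shift(i).add_suffix(f"_lag{i}") for i in range(l)],
                         axis=1)
        data = pd.concat([target, lags], axis=1).dropna()
        fit = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y"))).fit()
        if fit.bic < best_bic:                             # SIC/BIC lag selection
            best_fit, best_bic, best_l = fit, fit.bic, l
    # evaluate the chosen regression at the most recent observation
    last = pd.concat([dp.shift(i).rename(f"dp_lag{i}") for i in range(best_l)]
                     + [X_extra.shift(i).add_suffix(f"_lag{i}") for i in range(best_l)],
                     axis=1).iloc[[-1]]
    x = sm.add_constant(last, has_constant="add")
    return float(np.asarray(best_fit.predict(x))[0])
```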

It remains to specify the rest of the models that will serve as "competitors" for the quantity theory VEC model (see Table 1 for alternative models). One clear candidate is a VAR model with [y.sub.t+h] = [DELTA][p.sub.t+h], [x.sub.t] = ([DELTA][p.sub.t], [DELTA][m.sub.t], [DELTA][q.sub.t])', and [phi] = 0, which is the same as the VEC model except for the restriction that [phi] = 0. This model, thus, does not allow for prices, money, and output to be cointegrated. It is henceforth called our "VAR in differences model." An alternative to the VAR in differences model shall be called the "VAR in levels" model, and sets [y.sub.t+h] = [p.sub.t+h], [x.sub.t] = ([p.sub.t], [m.sub.t], [q.sub.t])', and [phi] = 0. Because this model involves regression with I(1) variables, inference based on the estimated coefficients is not standard, based on the work of Sims et al. (1990). However, our objective is prediction and not inference, so this does not pose a problem for us. In addition, note that by estimating the VAR model in levels, we are allowing for cointegration among the variables, although the model is inefficient in the sense that we are not imposing the cointegrating restriction.

In addition to the VEC and VAR models, we estimate a conventional unemployment rate Phillips curve, which is shown in Stock and Watson (1999a) to be quite robust and is seldom beaten in their forecasting experiments except when their new index of aggregate activity based on 168 economic indicators is used. This model is called our "differences Phillips curve model," and sets [y.sub.t+h] = [DELTA][p.sub.t+h], [x.sub.t] = ([DELTA][p.sub.t], [U.sub.t])', and [phi] = 0. A levels version of this model, for which [y.sub.t+h] = [p.sub.t+h], [x.sub.t] = ([p.sub.t], [U.sub.t])', and [phi] = 0, is called the "levels Phillips curve model." We follow Stock and Watson (1999a) in assuming the NAIRU is constant and omitting supply shock variables and therefore use just the unemployment rate when making Phillips curve forecasts. Finally, we also estimate differences and levels versions of a "simple AR model," with [y.sub.t+h] = [DELTA][p.sub.t+h], [x.sub.t] = [DELTA] [p.sub.t], and [phi] = 0 for the differences version and [y.sub.t+h] = [p.sub.t+h], [x.sub.t] = [p.sub.t], and [phi] = 0 for the levels version, and various random walk models including: [DELTA][p.sub.t+h] = [DELTA][p.sub.t] + [[epsilon].sub.t+h], h = 1, ..., [h.sub.max] (differences random walk model) and [p.sub.t+h] = [[beta].sub.0] + [p.sub.t] + [[epsilon].sub.t+h], h = 1, ..., [h.sub.max] (random walk with drift model).
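A sketch of how the differences-based competitors can be organised as special cases of (3), reusing the direct_forecast helper sketched above; the mapping and names are ours. The levels models are analogous with p, m, q, and U entering in levels, and the differences random walk forecast of [DELTA][p.sub.t+h] is simply [DELTA][p.sub.t].

```python
import pandas as pd

def differences_designs(df):
    """df holds the logged series p, m, q and the unemployment rate U; each
    entry gives the extra regressors x_t beyond lagged inflation (phi = 0)."""
    d = df[["p", "m", "q"]].diff()
    return {
        "VAR in differences":         d[["m", "q"]],                 # x_t = (dp, dm, dq)'
        "differences Phillips curve": df[["U"]],                     # x_t = (dp, U)'
        "simple AR (differences)":    pd.DataFrame(index=df.index),  # x_t = dp only
    }

# usage: one 4-quarter-ahead forecast of Delta p from each differences model
# dp = df["p"].diff()
# forecasts = {name: direct_forecast(dp, X, h=4)
#              for name, X in differences_designs(df).items()}
```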

Given forecast and actual inflation values, we compare the predictive accuracy of the models. This is done by first forming real-time prediction errors as [e.sub.t+h|t] = [[pi].sub.t+h] - [[bar.[pi]].sub.t+h], for each model, and for each value of h, so that the out-of-sample forecast period runs from 1979:4 to 1992:4 for the first subsample and from 1993:1 to 2003:2 for the second time period. (10) Then, predictions from the alternative models are compared using the forecast mean square error (MSE) criterion; see Swanson and White (1995, 1997) for a discussion of this and similar criteria. Because these criteria are only point estimates, we additionally construct predictive accuracy tests, along the lines of Diebold and Mariano (1995; hereafter DM), West (1996), McCracken (2000, 2004), Clark and McCracken (2001), Chao et al. (2001), Corradi et al. (2001), and Corradi and Swanson (2002). (11) Our benchmark model is fixed to be the quantity theory VEC model, and we compare the benchmark against each of the other models to find out which one "wins" our prediction contest (see further discussion).
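For reference, a minimal hand-rolled version of the MSE criterion and the DM statistic (with a Newey-West long-run variance) might look as follows; this is a simplified sketch, not the exact test implementation used in the article.

```python
import numpy as np

def mse(e):
    """Mean square forecast error."""
    return float(np.mean(np.asarray(e) ** 2))

def dm_statistic(e1, e2, h=1):
    """Diebold-Mariano statistic for H0: equal MSE. d_t = e1_t^2 - e2_t^2, so a
    significantly positive value favours model 2. The long-run variance uses a
    Newey-West (Bartlett) kernel with truncation lag h - 1."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    n, dbar = d.size, d.mean()
    lrv = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        lrv += 2 * (1 - k / h) * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    return dbar / np.sqrt(lrv / n)
```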

IV. QUANTITATIVE FINDINGS

Cointegration Tests and Parameter Estimates

Figure 1a shows recursive estimates of the cointegration rank among [p.sub.t], [m.sub.t], and [q.sub.t]. The cointegrating rank is almost always estimated to be 1 or 2 until the end of 1992, then falls to 0 and remains there through the end of the sample, consistent with the recent literature including Estrella and Mishkin (1997). (12) Figures 1b and 1c show the estimated coefficients on M2 and real GDP, respectively, for the cointegration vector associated with the largest eigenvalue in the model. In 1975 the estimated coefficient on M2 begins to fluctuate (even though the cointegration vector remains statistically significant), but by 1978 the estimated coefficient is again close to -1. The coefficient on output does not fluctuate as much as the coefficient on M2, though it does show large fluctuations in 1975, 1976, and 1982. Overall, all large deviations of the estimated coefficients from the restriction associated with the quantity theory VEC model appear to have been transitory.

[FIGURES 1-2 OMITTED]

Forecast Evaluation Results for 1979:4-1992:4

Our quantity theory model imposes important restrictions on a VEC model. An unrestricted VEC model for which the cointegration rank and vector are estimated offers two advantages over the quantity theory VEC model. Namely, it allows for cases in which the cointegrating relationship cannot be identified a priori, and it allows for the possibility that the system is evolving over time. Figure 2a shows test statistics for the ENC-REG forecast encompassing test of Clark and McCracken (2001), with significantly positive statistics implying a rejection of the null hypothesis that quantity theory VEC model forecasts contribute nothing in the presence of the estimated VEC model forecasts. Thus, for this comparison, we conclude that there are advantages to imposing the cointegration vector rank and coefficients if the test statistics are positive. Clark and McCracken (2001) show that the ENC-REG test statistic has a distribution that depends on the number of excess parameters, the number of out-of-sample predictions, and the size of the sample used to estimate the forecasting model. One of the important contributions of Clark and McCracken (2001) is to show that using standard normal critical values will result in conservative inference. (13) As the number of excess parameters gets large, or if the number of out-of-sample forecasts is small, the test statistic has an approximately normal distribution according to the tables in Clark and McCracken (2000, 2001). (14) It is clear from Figure 2 that cointegration vector parameter and/or cointegrating rank estimation error is very important for forecasts in samples of the size available for this exercise. (15) Notice also that the quantity theory VEC does increasingly better as the horizon increases, suggesting that the quantity theory is particularly useful for long-run prediction.

Furthermore, note that in Figure 2b, we cannot reject the hypothesis that the VAR in differences dominates the estimated VEC model at all horizons, at least for conventional significance levels (in this panel, a significantly positive statistic implies that the estimated VEC forecasts are not encompassed by the VAR in differences forecasts). In other words, if we had estimated the cointegration vector and rank each time a forecast was made, rather than imposing stationary velocity and a cointegrating rank of unity, we would have concluded that the quantity theory VEC was not useful and that imposing cointegration never improves out-of-sample prediction in our context! These results at least partially explain previous findings that imposing cointegration often does not result in improvement over forecasts constructed using VAR models. In short, we find that there can be large gains from a priori knowledge of the cointegrating vector and rank, and that economic theory plays an important role, at least when our objective is prediction. One reason why this is the case appears to be that parameter and cointegration rank estimation error is large in our framework, as is shown via a series of Monte Carlo experiments in the next section.
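To make the mechanics of such encompassing comparisons concrete, the following is a small sketch of an ENC-T-type statistic, a close relative of the ENC-REG statistic reported in the figures; it is our own simplified implementation, with a Newey-West long-run variance, and one-sided standard normal critical values serve only as a conservative guide, as noted in the text.

```python
import numpy as np

def enc_t(e1, e2, h=1):
    """Forecast-encompassing statistic of the ENC-T type: c_t = e1_t (e1_t - e2_t)
    scaled by a Newey-West standard error. H0: the forecasts with errors e1
    encompass those with errors e2; a significantly positive (one-sided)
    statistic says the second set of forecasts adds information."""
    e1, e2 = np.asarray(e1), np.asarray(e2)
    c = e1 * (e1 - e2)
    n, cbar = c.size, c.mean()
    lrv = np.mean((c - cbar) ** 2)
    for k in range(1, h):
        lrv += 2 * (1 - k / h) * np.mean((c[k:] - cbar) * (c[:-k] - cbar))
    return cbar / np.sqrt(lrv / n)

# e.g. enc_t(estimated_vec_errors, quantity_theory_vec_errors) asks whether the
# quantity theory VEC forecasts add information beyond the estimated VEC model,
# mirroring the comparison in Figure 2a.
```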

Figure 3 presents graphs of ENC-REG statistics for comparison of the quantity theory VEC model with various other alternative models. The dashed lines in each graph are 90% and 95% critical values, so that a statistic above the lower dashed line indicates a rejection of the hypothesis that the quantity theory VEC model forecasts are encompassed by the alternative model forecasts at a significance level of 10%, and a statistic above both dashed lines indicates rejection at a significance level of 5%. When the models are nested, the encompassing test is a test for the relevance of the additional parameters, or equivalently whether the quantity theory VEC model forecast better than the alternative model.

[FIGURE 3 OMITTED]

Comparison with the VAR in levels (Figure 3b) is of some interest here, because this model allows for cointegration of unknown form, a point made by Sims et al. (1990). Note that in this case, the quantity theory VEC model dominates at all horizons, so that failure to impose the correct cointegrating restriction, which leads to estimation inefficiency in the levels VAR model, also leads to poorer predictions. Similar results are obtained for the other levels models. As noted in the introduction, Phillips curve models are generally believed to provide the best inflation forecasts. As might be expected, Figure 3e shows that the differences Phillips curve forecasts encompass the quantity theory VEC forecasts at short horizons, and the quantity theory VEC model becomes relevant for horizons of two years or more, suggesting that the long run (at which time the quantity theory begins to be useful) is perhaps not very long! Similar results are obtained when comparing the quantity theory VEC model with the differences AR model. At short horizons, the simple AR model forecasts about as well as the quantity theory VEC model, a result consistent with the findings of Stock and Watson (1999a). For the longer horizons, though, the quantity theory VEC model performs much better, and we have evidence that the variables in the quantity theory VEC model, including M2, have marginal predictive content for inflation beyond that in the autoregressive model.

Figure 3 also shows the results of comparing the quantity theory VEC model with the differences random walk model, and we observe that for many forecast horizons, the quantity theory VEC model forecasts have little to add to the differences random walk model forecasts. There are several reasons, though, that this is not evidence that the quantity theory VEC model is useless, at least for purposes of monetary policy. First, the differences random walk model is not a reasonable policy model, because it contains no control variables and merely summarizes the historical time-series properties of the inflation series. Furthermore, the relevant question for policy is whether the variables in the quantity theory VEC model contain information about future inflation, and comparison with the differences random walk model cannot answer this question. Finally, failure of the quantity theory VEC model to forecast better than the differences random walk model does not necessarily imply that the quantity theory VEC model is incorrectly specified. In fact, the Monte Carlo experiments discussed in the next section show that a parsimonious but misspecified time-series model may forecast better than a correctly specified model due to parameter estimation error.

Forecast Evaluation Results for 1993:1-2003:2

As discussed, the estimated cointegration rank fell to 0 starting in the fourth quarter of 1992, consistent with the claim of Carlson et al. (2000) that there was a breakdown in the stability of M2 demand in the early 1990s. Carlson et al. (2000), for instance, present evidence that the instability was due to a one-time shift of household wealth from the components of M2 into stock and bond mutual funds. Aggressive marketing efforts by the mutual fund industry in the late 1980s informed households about the existence and higher returns on stock and bond mutual funds. Holdings in these funds are not counted as part of M2, so that the shift caused an increase in M2 velocity and caused the demand for M2 to be unstable.

This motivates considering the period after 1993 separately. For brevity, we only report several of the forecast comparisons and focus our discussion on the marginal predictive content of M2 for inflation. The ENC-REG test gave unusual results for forecasts made after the structural break, possibly because the structural break is the source of most of the variation in the data. For this reason, we rely solely on the comparison of MSE by means of DM forecast comparisons. (16) Figure 4a shows DM statistics for comparison of the quantity theory VEC model and AR model, with positive statistics meaning the quantity theory VEC model did better for that horizon. It is clear that continuing to impose cointegration throughout this time period would have been a mistake, because the quantity theory VEC model had an MSE much larger than the AR model. Consistent with our prior expectations, the quantity theory VEC has not been a good inflation forecasting model in recent years. Of course, if these series were no longer cointegrated after 1992, that would suggest the use of a VAR model in differences, not a VEC model. Figure 4b shows DM statistics for comparison of the VAR model in differences to the AR model, where positive statistics indicate the VAR model forecast better. For all horizons of one year or more, the VAR model does better. Figure 4c tells a similar story for comparison of the estimated VEC model and AR model. Figures 4b and 4c should be similar, because the estimated VEC model only imposes statistically significant cointegration vectors, and for most of the forecasts the cointegration rank was estimated to be zero, so that the estimated VEC model reduced to a VAR model in differences.

[FIGURE 4 OMITTED]

Our empirical findings can be summarized as follows. In the earlier period, money, prices, and output were cointegrated, so that a VEC model performed better than the AR model whenever the cointegrating vector was imposed a priori rather than estimated. Parameter estimation error apparently prevented the estimated VEC model from forecasting well, a possibility that is investigated extensively in the next section. For the recent time period, the quantity theory VEC model forecasts poorly, whereas the VAR model in differences forecasts well. There is nothing surprising about this; other authors have already demonstrated problems with M2 cointegrating relationships over this time period, and our cointegration tests find that cointegration broke down at the end of 1992.

We conclude this section with two final comments. First, it is possible that the quantity theory VEC model will again forecast well in the future, based for instance on the claim of Carlson et al. (2000) that the breakdown of the cointegrating relationship was due to a one-time structural change. Second, structural change is often a problem for macroeconomic forecasting, but that is not the case here. The estimated VEC model allows the data to determine the date of any structural breaks, yet is consistent with our out-of-sample forecasting methodology, so that our findings are not in any way contingent on use of the full sample to identify the break. Overall, there is strong evidence of out-of-sample causality from money growth to inflation, provided one is careful to impose the correct order of integration on the data.

V. MONTE CARLO EXPERIMENTS

In this section, we investigate the importance of parameter estimation error for the forecasts of several of the models considered.

The first set of comparisons is designed to study the importance of cointegration vector rank and parameter estimation error on forecasts from VEC models. In particular, 5,000 samples of data were generated using the following DGP:

(4) [DELTA][Y.sub.t] = [a.sub.3] + [b.sub.3][DELTA][Y.sub.t-1] + [c.sub.3][Z.sub.t-1] + [[epsilon].sub.3t],

where [Y.sub.t] = ([p.sub.t], [m.sub.t], [q.sub.t])', with [p.sub.t], [m.sub.t], and [q.sub.t] defined as above, [DELTA] is the first difference operator, [[epsilon].sub.3t] ~ IN(0, [[summation].sub.3]), with [[summation].sub.3] a 3x3 matrix, and [Z.sub.t-1] = d[Y.sub.t-1], where d is an r x 3 matrix of cointegration vectors, r is the rank of the cointegrating space (which is either 0, 1, or 2), and [a.sub.3], [b.sub.3], [c.sub.3], and [[summation].sub.3] are parameters estimated using historical U.S. data. In all of our comparisons, data are generated with one lag of [Y.sub.t] and cointegrating rank, r, equal to unity, and d either estimated from the historical U.S. data or set equal to (1, -1, 1). We estimate the parameters of (4) using four different sample periods: the entire sample, covering 1959:1-1999:4; the period prior to the well-known monetarist experiment, covering 1959:1-1979:3; the period 1979:4-1989:3; and the period 1989:4-1999:4. (17) Given data generated according to (4), two prediction models are estimated: (1) versions of (4) where r and d are estimated, corresponding to the estimated VEC model; and (2) versions of (4) where r = 0 is imposed, corresponding to the VAR in differences model. Note that we have generated the data according to a VEC model in all cases, so that we should expect the estimated VEC prediction model to perform well, assuming, for example, that coefficients are estimated with sufficiently little parameter estimation error. Results from this experiment are gathered in Table 2. The results vary across the different DGPs, but two patterns emerge. First, for small samples (T = 100), imprecise estimates of the cointegrating vector parameters and rank generally prevent the VEC model forecasts from dominating the VAR in differences forecasts, and in many cases the VAR in differences model even forecasts more accurately. Second, as the sample size grows, the VEC model forecasts begin to dominate more often, and for some DGPs the VEC model almost always forecasts better for T = 500.
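The following is a minimal sketch of this first experiment for a single configuration: data are generated from a VEC DGP of the form (4) with cointegrating vector (1, -1, 1), and squared one-step errors for the price equation are compared across an estimated VECM and a VAR in differences. The parameter values are illustrative placeholders rather than the estimates from historical U.S. data, the cointegrating rank is fixed at 1 in the fitted VECM for brevity (rather than re-estimated), and the number of replications is kept small.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)

# Placeholder parameters (the article calibrates a3, b3, c3, Sigma3 to U.S. data);
# cointegrating vector (1, -1, 1) as in panel I of Table 2.
d_vec = np.array([1.0, -1.0, 1.0])
a3 = np.array([0.002, 0.004, 0.005])
b3 = 0.3 * np.eye(3)
c3 = np.array([-0.02, 0.01, -0.01])
sigma3 = 0.0001 * np.eye(3)

def simulate_vec(T, burn=100):
    """Generate (p, m, q) levels from a VEC DGP of the form (4)."""
    Y = np.zeros((T + burn + 1, 3))
    dY = np.zeros(3)
    for t in range(1, T + burn + 1):
        z = d_vec @ Y[t - 1]
        dY = a3 + b3 @ dY + c3 * z + rng.multivariate_normal(np.zeros(3), sigma3)
        Y[t] = Y[t - 1] + dY
    return Y[burn + 1:]

def squared_one_step_errors(Y):
    """Squared one-step errors for the price series from an estimated VECM and
    from a VAR in first differences, both fit on all but the last observation."""
    train, actual = Y[:-1], Y[-1]
    vecm_fc = VECM(train, k_ar_diff=1, coint_rank=1,
                   deterministic="co").fit().predict(steps=1)[0]
    dtrain = np.diff(train, axis=0)
    var_fc = train[-1] + VAR(dtrain).fit(maxlags=1).forecast(dtrain[-1:], steps=1)[0]
    return (actual[0] - vecm_fc[0]) ** 2, (actual[0] - var_fc[0]) ** 2

errs = np.array([squared_one_step_errors(simulate_vec(100)) for _ in range(200)])
print("share of draws in which the VAR in differences wins:",
      (errs[:, 1] < errs[:, 0]).mean())
```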

Our second Monte Carlo experiment is designed to show that parsimonious time-series models will often forecast better than more heavily parameterized, but correctly specified rival models, likely due to parameter estimation error. Specifically, we generate data according to two DGPs. The first DGP is an AR(2) process:

(5) [DELTA][p.sub.t] = [a.sub.1] + [b.sub.1][DELTA][p.sub.t-1] + [c.sub.1][DELTA][p.sub.t-2] + [[epsilon].sub.1t],

where [p.sub.t] and [DELTA] are defined as before, so that [DELTA][p.sub.t] is the percentage change in the price level from period t - 1 to period t, [[epsilon].sub.1t] ~ IN(0, [[sigma].sup.2.sub.1]), and [a.sub.1], [b.sub.1], [c.sub.1], and [[sigma].sup.2.sub.1] are parameters estimated using historical U.S. data for the period 1959:1-1999:4. The second DGP is a VAR(1) process:

(6) [DELTA][Y.sub.t] = [a.sub.2] + [b.sub.2][DELTA][Y.sub.t-1] + [[epsilon].sub.2t],

where [Y.sub.t] = ([p.sub.t], [m.sub.t], [q.sub.t])', with [p.sub.t], [m.sub.t], [q.sub.t] and [DELTA] defined as before; [[epsilon].sub.2t] ~ IN(0, [[summation].sub.2]), with [[summation].sub.2] a 3x3 matrix; and [a.sub.2], [b.sub.2], and [[summation].sub.2] are parameters estimated using historical U.S. data for the period 1959:1-1999:4. Given these DGPs, 5,000 samples of varying lengths (T = 164, which corresponds to the actual sample size used in the empirical work above, and T = 300, 500) were generated. For each sample generated from the DGP given in equation (5), both AR(1) and AR(2) models were fitted, and one-step-ahead forecasts were compared using the DM test. Although the AR(2) model is correctly specified, it requires the estimation of an additional parameter beyond that of the AR(1) model, so that it is not clear which model will forecast better out of sample. For each sample generated according to DGP (6), one-step-ahead forecasts are compared for the differences random walk and VAR in differences models analyzed in the previous section. Again, even though the VAR in differences model is correctly specified, there is no reason to expect that it will forecast better than the differences random walk model, as the lag length and several other parameters need to be estimated for the VAR in differences model. As a final metric for assessing the importance of parameter estimation error, "true" model forecasts, for which the model parameters are imposed a priori to be equal to their true values, rather than estimated, are also included for all of the comparisons.
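A minimal sketch of the AR(2) part of this experiment: simulate from an AR(2), generate recursive one-step forecasts from fitted AR(1) and AR(2) models over the last half of the sample, and feed the resulting errors to a DM test. The parameter values below are placeholders, not the estimates from historical U.S. data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
a1, b1, c1, s1 = 0.002, 0.5, 0.2, 0.005   # placeholder AR(2) parameters

def simulate_ar2(T, burn=100):
    """Generate inflation from an AR(2) DGP of the form (5)."""
    dp = np.zeros(T + burn)
    for t in range(2, T + burn):
        dp[t] = a1 + b1 * dp[t - 1] + c1 * dp[t - 2] + rng.normal(0.0, s1)
    return dp[burn:]

def one_step_errors(dp, p_frac=0.5):
    """Recursive one-step forecast errors from AR(1) and AR(2) models fit by OLS,
    with the last p_frac of the sample used for out-of-sample evaluation."""
    T = dp.size
    e1, e2 = [], []
    for t in range(int((1 - p_frac) * T), T - 1):
        y = dp[2:t + 1]
        X1 = sm.add_constant(dp[1:t])                                  # one lag
        X2 = sm.add_constant(np.column_stack([dp[1:t], dp[:t - 1]]))   # two lags
        f1 = sm.OLS(y, X1).fit().predict(np.array([[1.0, dp[t]]]))[0]
        f2 = sm.OLS(y, X2).fit().predict(np.array([[1.0, dp[t], dp[t - 1]]]))[0]
        e1.append(dp[t + 1] - f1)
        e2.append(dp[t + 1] - f2)
    return np.array(e1), np.array(e2)

# e1, e2 = one_step_errors(simulate_ar2(164))
# dm_statistic(e1, e2)   # reuse the DM helper sketched in section III above
```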

Table 3 shows the percentage of times the DM test was able to reject the null hypothesis that the AR(1) and AR(2) models forecast equally well, given that the DGP is an AR(2) model. It shows results for two comparisons, where the AR(1) model forecasts are compared to those of an AR(2) model for which the coefficients are estimated (the AR(2) model comparisons), and also where the AR(1) model forecasts are compared to those of an AR(2) model where the true coefficients are imposed rather than estimated (the true model comparisons). We see that for samples of size 164, which is the sample size used in the empirical work, the power is never more than 20%. This means that in practice we would have mistakenly concluded that an AR(1) model is the correct specification at least 80% of the time. As expected, the power of the test increases with the sample size, but is never more than 51% when the AR(2) model parameters are estimated, even for samples of size 500.

Table 4 has related results for two comparisons. In the first (the VAR model comparisons), differences random walk model forecasts are compared to VAR in differences forecasts, with estimated lag lengths and coefficients. In the second (the true model comparisons), differences random walk model forecasts are compared to forecasts from a first-order VAR model where the coefficients are imposed to be equal to their true values rather than estimated. The results depend on the specification, but when the VAR parameters are estimated, the random walk model almost always does better. In fact, for all of the configurations, the DM statistics are never greater than 1.96, but are often less than -1.96, with negative DM statistics implying that the random walk model forecasts better. On the other hand, for the true model comparisons, very few of the DM statistics are negative, and the percentage of DM statistics greater than 1.96 is greater than 80% for all but three cases. In nearly all cases, then, a VAR model where the lag length and coefficients are known a priori will forecast better than a random walk model, but when the lag length and coefficients need to be estimated, the random walk model forecasts better.

VI. CONCLUDING REMARKS

In this article, we show that M2 has marginal predictive content for inflation, but it is important to correctly specify the number of unit roots. For the period 1979-92, there is strong evidence that money, prices, and output were cointegrated, and for this time period imposing a cointegrating restriction among prices, money, and output that is implied by the quantity theory yields predictions that are superior to those from a variety of other models, including an AR benchmark, a VAR in differences, and a version of the Phillips curve. Johansen (1988, 1991) trace tests find a breakdown of this cointegrating relationship in 1992. Consistent with our expectations, we find for the period 1993-2003 that a VAR in differences model forecasts better than an AR model, whereas the VEC model fares poorly when compared to the AR model. Taken together, these results provide robust evidence that M2 continues to have value as an inflation indicator.

Our finding that imposing cointegration is useful for forecasting inflation is, however, limited to the case where the cointegration vector is imposed rather than estimated. When the cointegration vector is estimated, the corresponding VEC model always forecasts much worse than a VAR in differences model. This suggests that previous work that has found that VEC models do not forecast better than VAR models may be due in part to the presence of cointegration vector parameter estimation error. We support this notion by presenting Monte Carlo evidence showing that the effect of parameter estimation error on VEC model forecasts is so substantial that in many cases it results in VAR models forecast-dominating VEC models, even when the true model is a VEC and the cointegrating rank is known. We additionally present evidence that failure to beat a random walk model is not in itself a useful yardstick for measuring the validity of a theoretical model, at least if the objective is forecasting. This is done in part by using data simulated to be consistent with the historical U.S. record over the 1959:1-1999:4 period, and showing that a random walk model usually forecasts better than a VAR model for which the lag length and coefficients are estimated, even when the true DGP is a first-order VAR model. Given this result and other related arguments, we conclude that use of a random walk as a strawman model in analyses such as ours is not warranted.

Some limitations of the current article and directions for future research are the following. First, all of the forecasting models herein are simple linear models. Nonlinear models, including those evaluated by Stock and Watson (1999b), may offer forecasting gains. Second, although we have considered only one long-run relationship, it might be of interest to consider some of the many other cointegrating relationships that have been proposed in the literature, both domestic as in Ahmed and Rogers (2000) and international as in Ahmed et al. (1993). Finally, more work needs to be done on the definition of an appropriate monetary aggregate. Attempts to exploit forecasting relationships between monetary aggregates and policy objectives have been subject to criticism, because in practice it takes too long to detect flaws in the monetary aggregate or parameter instability. Although there has been important work done that deals with the problem of instability, much remains to be done in this area, particularly in the area of ex ante analysis of instability.

ABBREVIATIONS

AR: Autoregressive

DGP: Data Generating Process

DM: Diebold and Mariano (1995)

GDP: Gross Domestic Product

MSE: Mean Square Error

SIC: Schwarz Information Criterion

VAR: Vector Autoregressive

VEC: Vector Error Correction
TABLE 1
Summary of Forecasting Models

Benchmark model

1. Quantity theory VEC model

[DELTA][p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][DELTA][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.mi][DELTA][m.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.qi][DELTA][q.sub.t-i]
+ [phi]([p.sub.t] - [m.sub.t] + [q.sub.t]) + [[epsilon].sub.t+h]

Alternative models

2. VAR in differences model

[DELTA][p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][DELTA][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.mi][DELTA][m.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.qi][DELTA][q.sub.t-i] + [[epsilon].sub.t+h]

3. VAR in levels model

[p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.mi][m.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.qi][q.sub.t-i] + [[epsilon].sub.t+h]

4. Simple AR model (differences)

[DELTA][p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][DELTA][p.sub.t-i] + [[epsilon].sub.t+h]

5. Simple AR model (levels)

[p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][p.sub.t-i] + [[epsilon].sub.t+h]

6. Differences Phillips curve model

[DELTA][p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][DELTA][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.Ui][U.sub.t-i] + [[epsilon].sub.t+h]

7. Levels Phillips curve model

[p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.Ui][U.sub.t-i] + [[epsilon].sub.t+h]

8. Differences random walk model

[DELTA][p.sub.t+h] = [DELTA][p.sub.t] + [[epsilon].sub.t+h]

9. Random walk with drift model

[p.sub.t+h] = [[beta].sub.0] + [p.sub.t] + [[epsilon].sub.t+h]

10. Estimated VEC model

[DELTA][p.sub.t+h] = [[beta].sub.0] + [l-1.summation over (i=0)]
[[beta].sub.pi][DELTA][p.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.mi][DELTA][m.sub.t-i] + [l-1.summation over (i=0)]
[[beta].sub.qi][DELTA][q.sub.t-i] + [phi][z.sub.t] + [[epsilon].sub.t+h],
where [z.sub.t] = [alpha]'([p.sub.t], [m.sub.t], [q.sub.t])' with [alpha] and the
cointegrating rank estimated (the term is dropped when the estimated rank is zero)

TABLE 2
Monte Carlo Results: DM Statistics for Comparison of Estimated VEC
and Differences VAR

 P = (1/3)T P = (1/2)T

Sample A B C A B C

I. Cointegration vector (1, -1, 1) used in true DGP

1959:1-1999:4 T = 100 0.11 0.34 0.68 0.10 0.33 0.68
 T = 250 0.01 0.08 0.36 0.01 0.06 0.29
 T = 500 0.00 0.01 0.11 0.00 0.00 0.05
1959:1-1979:3 T = 100 0.15 0.47 0.80 0.15 0.46 0.80
 T = 250 0.12 0.46 0.83 0.10 0.46 0.83
 T = 500 0.11 0.49 0.84 0.10 0.49 0.84
1979:4-1989:3 T = 100 0.16 0.44 0.77 0.14 0.43 0.78
 T = 250 0.12 0.46 0.82 0.11 0.44 0.81
 T = 500 0.10 0.46 0.83 0.09 0.45 0.82
1989:4-1999:4 T = 100 0.21 0.54 0.83 0.21 0.55 0.85
 T = 250 0.16 0.44 0.77 0.14 0.43 0.77
 T = 500 0.13 0.37 0.67 0.11 0.35 0.67

II. Estimated cointegration vector used in true DGP

1959:1-1999:4 T = 100 0.07 0.26 0.60 0.07 0.25 0.59
 T = 250 0.00 0.04 0.26 0.00 0.03 0.18
 T = 500 0.00 0.01 0.09 0.00 0.00 0.04
1959:1-1979:3 T = 100 0.04 0.16 0.45 0.03 0.13 0.41
 T = 250 0.01 0.06 0.21 0.01 0.03 0.12
 T = 500 0.06 0.16 0.31 0.03 0.09 0.23
1979:4-1989:3 T = 100 0.20 0.52 0.81 0.18 0.51 0.81
 T = 250 0.17 0.54 0.84 0.18 0.54 0.85
 T = 500 0.15 0.48 0.82 0.14 0.51 0.83
1989:4-1999:4 T = 100 0.26 0.68 0.94 0.29 0.73 0.95
 T = 250 0.19 0.61 0.91 0.21 0.65 0.93
 T = 500 0.18 0.53 0.84 0.17 0.55 0.86

 P = (2/3)T

Sample A B C

I. Cointegration vector (1, -1, 1) used in true DGP

1959:1-1999:4 T = 100 0.09 0.32 0.68
 T = 250 0.01 0.06 0.27
 T = 500 0.00 0.00 0.03
1959:1-1979:3 T = 100 0.13 0.44 0.79
 T = 250 0.09 0.44 0.83
 T = 500 0.10 0.49 0.85
1979:4-1989:3 T = 100 0.12 0.41 0.77
 T = 250 0.10 0.43 0.81
 T = 500 0.09 0.44 0.82
1989:4-1999:4 T = 100 0.21 0.57 0.87
 T = 250 0.12 0.44 0.79
 T = 500 0.09 0.34 0.66

II. Estimated cointegration vector used in true DGP

1959:1-1999:4 T = 100 0.07 0.24 0.59
 T = 250 0.00 0.02 0.14
 T = 500 0.00 0.00 0.02
1959:1-1979:3 T = 100 0.03 0.12 0.41
 T = 250 0.00 0.01 0.07
 T = 500 0.01 0.04 0.13
1979:4-1989:3 T = 100 0.16 0.49 0.81
 T = 250 0.17 0.56 0.86
 T = 500 0.13 0.51 0.84
1989:4-1999:4 T = 100 0.34 0.77 0.97
 T = 250 0.23 0.71 0.96
 T = 500 0.18 0.60 0.89

Notes: A refers to percentage of cases in 5,000 replications where
the DM statistic was less than or equal to -1, assuming an MSE loss
function. B refers to percentage of cases where the DM statistic was
less than or equal to 0. C refers to the percentage of cases where
the DM statistic was less than or equal to 1. A negative DM statistic
implies the VAR in differences model performed better.

TABLE 3
Monte Carlo Results: Power of Test of [H.sub.0]:AR(1)
Model Forecasts as Well as AR(2) Model

Comparison OOS Period Sample Size Power

AR(2) model P = (2/3)T T = 164 0.20
 T = 300 0.34
 T = 500 0.51
True model T = 164 0.38
 T = 300 0.47
 T = 500 0.60
AR(2) model P = (1/2)T T = 164 0.18
 T = 300 0.29
 T = 500 0.42
True model T = 164 0.31
 T = 300 0.38
 T = 500 0.50
AR(2) model P = (1/3)T T = 164 0.20
 T = 300 0.28
 T = 500 0.38
True model T = 164 0.29
 T = 300 0.34
 T = 500 0.43

Notes: The last column of numerical entries shows the
power of the DM predictive ability test to determine
whether an AR(2) model forecasts significantly better,
one step ahead, than an AR(1) model, under MSE loss.
The DGP is an AR(2) model, with parameters estimated
using historical U.S. data for the period 1959:1-1999:4.
Power of the test indicates the percentage of times in
5,000 replications that the predictive ability test rejected
equal forecast accuracy of AR(1) and AR(2) models at the
95% confidence level, where critical values are taken from
McCracken (2004). AR(2) model refers to comparison of
the AR(1) model forecasts with AR(2) model forecasts,
where the parameters of both models are estimated. True
model refers to comparison of the AR(1) model forecasts
with AR(2) model forecasts, where the parameters of the
AR(2) model are imposed to be equal to their true values.

TABLE 4
Monte Carlo Results: Forecast Comparison of VAR and Random Walk Models

Comparison    OOS Period    Sample Size    DM [less than or equal to] -1.96    DM [less than or equal to] 0    DM [less than or equal to] 1.96

VAR model P = (2/3)T T = 164 0.60 0.98 1.00
 T = 300 0.86 1.00 1.00
 T = 500 0.97 1.00 1.00
True model T = 164 0.00 0.00 0.17
 T = 300 0.00 0.00 0.03
 T = 500 0.00 0.00 0.00
VAR model P = (1/2)T T = 164 0.47 0.97 1.00
 T = 300 0.76 1.00 1.00
 T = 500 0.93 1.00 1.00
True model T = 164 0.00 0.00 0.32
 T = 300 0.00 0.00 0.08
 T = 500 0.00 0.00 0.01
VAR model P = (1/3)T T = 164 0.36 0.95 1.00
 T = 300 0.57 0.99 1.00
 T = 500 0.82 1.00 1.00
True model T = 164 0.00 0.01 0.47
 T = 300 0.00 0.00 0.24
 T = 500 0.00 0.00 0.06

Notes: See notes to Table 3.


(1.) Similar evidence can be found in Leeper and Roush (2002, 2003). Inflation forecasts using many other variables, such as commodity prices, interest rates, exchange rates, and wages have also been studied by Stock and Watson (1999a, 2003) and many other authors. This article does not consider forecasts made using these variables. We also do not consider the quality of forecasts made by the private sector, as is done by Croushore (1998).

(2.) The argument that short-run inflation stabilization is not a feasible objective, and therefore that monetary policy should primarily be concerned with inflation at long horizons goes back at least to Friedman (1959). See Amato and Laubach (2000) for one approach to determining the forecast horizon(s) of interest to a central bank.

(3.) For a comprehensive and interesting discussion of the cointegration properties of our data in the 1990s, see Carlson et al. (2000).

(4.) For more on this topic, see Christoffersen and Diebold (1998).

(5.) It should in general be of interest to carry out empirical investigations with both varieties of monetary aggregates. Swanson (1998), for example, does this and finds little difference between empirical findings based on the two different types of aggregates when using monetary services index data available on the St. Louis Federal Reserve Bank Web site.

(6.) Augmented Dickey-Fuller unit root tests with covariates, following the procedure outlined in Elliott and Jansson (2003), were run on the natural logarithms of all variables, with lags selected according to the approach outlined in Ng and Perron (1995), and all were found to be I(1).

(7.) The approach of estimating the cointegrating restriction is standard in the literatures on stable money demand and on money income causality, for example.

(8.) It is also standard in this literature to include a nominal interest rate among the variables in the cointegrating relationship. We do not include an interest rate variable because as in Watson (1994), a strong theoretical argument can be made that real interest rates should be stationary. Given our assumption that inflation is stationary, this implies that nominal interest rates are stationary.

(9.) An alternative method for forming multistep forecasts would be to iterate on a one-step forecasting model. A recent paper by Marcellino et al. (2004) compares the performance of different multistep forecasting procedures.

(10.) Given that we have 10 different models and [h.sub.max] = 20, a total of 19,000 different predictions and prediction errors are calculated.

(11.) See Corradi and Swanson (forthcoming) for a review of the literature on predictive accuracy testing.

(12.) The failure to reject the null of no cointegration for a short while in the 1980s is due to our choice of a 5% significance level; trace test statistics are very close to the 5% critical value over this period. There would be an estimated cointegration rank of 0 after 1993 for any reasonable choice of significance level.

(13.) Note that the tests studied by Clark and McCracken (2001) are one-sided.

(14.) Other papers considering inference for nested models include McCracken (2000), Chao et al. (2001), and Corradi and Swanson (2002).

(15.) The construction of our forecasts involves departure from the assumptions made by Clark and McCracken (2001) in two ways. Clark and McCracken study one-step-ahead forecasts where the number of excess parameters in the larger model is the same at each point in time. Our methodology allows the number of lags included to change each time a forecast is made, which means that the number of excess parameters changes through time. Additionally, we evaluate longer forecast horizons. We therefore report standard normal critical values as a guide, and note that the results in this section would be strengthened in some cases if other critical values were used.

(16.) For Figure 4a, this makes no difference, as the quantity theory VEC model has a higher MSE than the AR model at nearly every horizon, as indicated by negative DM statistics, so that formal testing of the hypothesis that the quantity theory VEC model forecasts better than the AR model is really not necessary. For Figures 4b and 4c, the DM statistics are quite large and agree with the ENC-REG test results.

(17.) See section II for a description of the historical U.S. data used to estimate the parameters of the DGPs in this section.

REFERENCES

Ahmed, S., and J. H. Rogers. "Inflation and the Great Ratios: Long Term Evidence from the U.S." Journal of Monetary Economics, 45(1), 2000, 3-35.

Ahmed, S., B. W. Ickes, P. Wang, and B. S. Yoo. "International Business Cycles." American Economic Review, 83(3), 1993, 335-59.

Akaike, H. "Information Theory and an Extension of the Maximum Likelihood Principle," in 2nd International Symposium on Information Theory, edited by B. N. Petrov and F. Csaki. Budapest: Akademiai Kiado, 1973, 267-81.

--. "A New Look at the Statistical Model Identification." IEEE Transactions on Automatic Control, AC-19, 1974, 716-23.

Amato, J. D., and T. Laubach. "Forecast-Based Monetary Policy." Bank for International Settlements Working Paper No. 89, 2000.

Barnett, W. A., and A. Serletis, eds. The Theory of Monetary Aggregation. New York: Elsevier, 2000.

Blinder, A. S. "Is There a Core of Practical Macroeconomics That We Should All Believe?" American Economic Review, 87(2), 1997, 240-43.

Carlson, J. B., D. L. Hoffman, B. D. Keen, and R. H. Rasche. "Results of a Study of the Stability of Cointegrating Relations Comprised of Broad Monetary Aggregates." Journal of Monetary Economics, 46(2), 2000, 345-83.

Chao, J., V. Corradi, and N. R. Swanson. "An Out of Sample Test for Granger Causality." Macroeconomic Dynamics, 5(4), 2001, 598-620.

Christoffersen, P., and F. X. Diebold. "Cointegration and Long-Horizon Forecasting." Journal of Business and Economic Statistics, 16(4), 1998, 450-58.

Clark, T. E., and M. W. McCracken. "Not-for-Publication Appendix to Tests of Forecast Accuracy and Encompassing for Nested Models." Working Paper, Federal Reserve Bank of Kansas City, 2000.

--. "Tests of Forecast Accuracy and Encompassing for Nested Models." Journal of Econometrics, 105(1), 2001, 85-110.

--. "The Predictive Content of the Output Gap for Inflation: Resolving In-Sample and Out-of-Sample Evidence." Federal Reserve Bank of Kansas City Working Paper 03-06, 2003.

Clements, M. P., and D. F. Hendry. "Intercept Corrections and Structural Change." Journal of Applied Econometrics, 11(5), 1996, 475-94.

Corradi, V., and N. R. Swanson. "A Consistent Test for Nonlinear Out of Sample Predictive Accuracy." Journal of Econometrics, 110(2), 2002, 353-81.

--. "Predictive Density Evaluation," in Handbook of Economic Forecasting, edited by C. W. J. Granger, G. Elliot, and A. Timmerman. Amsterdam: Elsevier, forthcoming.

Corradi, V., N. R. Swanson, and C. Olivetti. "Predictive Ability with Cointegrated Variables." Journal of Econometrics, 104(2), 2001, 315-58.

Croushore, D. "Evaluating Inflation Forecasts." Federal Reserve Bank of Philadelphia Working Paper 98-14, 1998.

Culver, S. E., and D. H. Papell. "Is There a Unit Root in the Inflation Rate? Evidence from Sequential Break and Panel Data Models." Journal of Applied Econometrics, 12(4), 1997, 435-44.

Diebold, F. X., and R. S. Mariano. "Comparing Predictive Accuracy." Journal of Business and Economic Statistics, 13(3), 1995, 253-63.

Diewert, W. E. "Preface to the Theory of Monetary Aggregation," in The Theory of Monetary Aggregation, edited by W. A. Barnett and A. Serletis. New York: Elsevier, 2000.

Elliott, G., and M. Jansson. "Testing for Unit Roots with Stationary Covariates." Journal of Econometrics, 115(1), 2003, 75-89.

Engle, R. F., and C. W. J. Granger. "Co-Integration and Error Correction: Representation, Estimation, and Testing." Econometrica, 55(2), 1987, 251-76.

Estrella, A., and F. S. Mishkin. "Is There a Role for Monetary Aggregates in the Conduct of Monetary Policy?" Journal of Monetary Economics, 40(2), 1997, 279-304.

Feldstein, M., and J. H. Stock. "The Use of a Monetary Aggregate to Target Nominal GDP," in Monetary Policy, edited by N. G. Mankiw. Chicago: University of Chicago Press, 1994, 7-62.

Friedman, M. A Program for Monetary Stability. New York: Fordham University Press, 1959.

Gali, J., and M. Gertler. "Inflation Dynamics: A Structural Econometric Analysis." Journal of Monetary Economics, 44(2), 1999, 195-222.

Gerlach, S., and L. E. O. Svensson. "Money and Inflation in the Euro Area: A Case for Monetary Indicators?" Journal of Monetary Economics, 50(8), 2003, 1649-72.

Hoffman, D. L., and R. H. Rasche. "Assessing Forecast Performance in a Cointegrated System." Journal of Applied Econometrics, 11(5), 1996, 495-517.

Johansen, S. "Statistical Analysis of Cointegration Vectors." Journal of Economic Dynamics and Control, 12(2), 1988, 231-54.

--. "Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models." Econometrica, 59(6), 1991, 1551-80.

Leeper, E. M., and J. E. Roush. "Putting 'M' Back into Monetary Policy." NBER Working Paper 9552, 2002.

--. "Putting 'M' Back into Monetary Policy." Journal of Money, Credit, and Banking, 35(6), 2003, 1217-56.

Lin, J. L., and R. S. Tsay. "Co-Integration Constraint and Forecasting: An Empirical Examination." Journal of Applied Econometrics, 11(5), 1996, 519-38.

Mankiw, N. G. "The Inexorable and Mysterious Tradeoff between Inflation and Unemployment." Economic Journal, 111(471), 2001, 45-61.

Marcellino, M., J. H. Stock, and M. W. Watson. "A Comparison of Direct and Iterated Multistep AR Methods for Forecasting Macroeconomic Time Series." Working Paper, Princeton University, 2004.

McCracken, M. W. "Robust Out-of-Sample Inference." Journal of Econometrics, 99(2), 2000, 195-223.

--. "Asymptotics for Out of Sample Tests of Granger Causality." Working Paper, University of Missouri, 2004.

Ng, S., and P. Perron. "Unit Root Tests in ARMA Models with Data Dependent Methods for the Truncation Lag." Journal of the American Statistical Association, 90(429), 1995, 268-81.

Sargent, T. J. The Conquest of American Inflation. Princeton, NJ: Princeton University Press, 1999.

Schwarz, G. "Estimating the Dimension of a Model." Annals of Statistics, 6(2), 1978, 461-64.

Sims, C. A., J. H. Stock, and M. W. Watson. "Inference in Linear Time Series Models with Some Unit Roots." Econometrica, 58(1), 1990, 113-44.

Stock, J. H., and M. W. Watson. "Forecasting Inflation." Journal of Monetary Economics, 44(2), 1999a, 293-335.

--. "A Comparison of Linear and Nonlinear Univariate Models for Forecasting Macroeconomic Time Series," in Cointegration, Causality and Forecasting." A Festschrift in Honour of Clive W. J. Granger, edited by R. Engle and H. White. Oxford: Oxford University Press, 1999b, 1-14.

--. "Forecasting Output and Inflation: The Role of Asset Prices." Journal of Economic Literature, 41(3), 2003, 788-829.

Swanson, N. R. "Money and Output Viewed through a Rolling Window." Journal of Monetary Economics, 41(3), 1998, 455-74.

Swanson, N. R., and H. White. "A Model Selection Approach to Assessing the Information in the Term Structure Using Linear Models and Artificial Neural Networks." Journal of Business and Economic Statistics, 13(3), 1995, 265-79.

--. "A Model Selection Approach to Real-Time Macroeconomic Forecasting Using Linear Models and Artificial Neural Networks." Review of Economics and Statistics, 79(4), 1997, 540-50.

Watson, M. W. "Vector Autoregressions and Cointegration," in Handbook of Econometrics, volume 4, edited by R. F. Engle and D. L. McFadden. New York: North-Holland, 1994, 2843-915.

West, K. D. "Asymptotic Inference about Predictive Ability." Econometrica, 64(5), 1996, 1067-84.

LANCE J. BACHMEIER and NORMAN R. SWANSON *

* We are grateful to the editor, Dennis Jansen, and a referee for providing many useful comments and suggestions on an earlier draft of this article. In addition, we wish to thank Graham Elliott, Clive W. J. Granger, Allan Timmermann, and participants at the Texas Camp Econometrics Conference, the Southern Economic Association Meetings, and departmental seminars at Kansas State University, Texas A&M University, and East Carolina University for useful comments and suggestions. Bachmeier thanks the Private Enterprise Research Center at Texas A&M University for support through a Bradley Dissertation Fellowship.

Bachmeier: Assistant Professor, Department of Economics, 327 Waters Hall, Kansas State University, Manhattan, KS 66502-4001. Phone 1-785-532-4578, Fax 1-785-537-6919, E-mail lanceb@ksu.edu

Swanson: Professor, Department of Economics, Rutgers University, 75 Hamilton Street, New Brunswick, NJ 08901. Phone 1-732-932-7432, Fax 1-732-932-7416, E-mail nswanson@econ.rutgers.edu