
Article information

  • Title: Robust monetary policy rules with unknown natural rates.
  • Authors: Williams, John C. ; Orphanides, Athanasios
  • Journal: Brookings Papers on Economic Activity
  • Print ISSN: 0007-2303
  • Year: 2002
  • Issue: September
  • Language: English
  • Publisher: Brookings Institution
  • Keywords: Monetary policy

Robust monetary policy rules with unknown natural rates.


Williams, John C. ; Orphanides, Athanasios


   The natural rate is an abstraction; like faith, it is seen by its
   works. One can only say that if the bank policy succeeds in
   stabilizing prices, the bank rate must have been brought in line
   with the natural rate, but if it does not, it must not have
   been. (1)


THE CONVENTIONAL PARADIGM for the conduct of monetary policy calls for the monetary authority to attain its objectives of a low and stable rate of inflation and full employment by adjusting its short-term interest rate instrument--in the United States, the federal funds rate--in response to economic developments. In principle, when aggregate demand and employment fall short of the economy's natural levels of output and employment, or when other deflationary concerns appear on the horizon, the central bank should ease monetary policy by bringing real interest rates below the economy's natural rate of interest for some time. Conversely, the central bank should respond to inflationary concerns by adjusting interest rates upward so as to bring real interest rates above the natural rate. In this setting, the natural rate of unemployment is the unemployment rate consistent with stable inflation; the natural rate of interest is the real interest rate consistent with unemployment being at its natural rate, and therefore with stable inflation. (2) In carrying out this strategy in practice, the policymaker would ideally have accurate, quantitative, contemporaneous readings of the natural rate of interest and the natural rate of unemployment. Under those circumstances, economic stabilization policy would be relatively straightforward.

However, an important difficulty that complicates policymaking in practice and may limit the scope for stabilization policy is that policymakers do not know the values of these natural rates in real time, that is, when they make policy decisions. Indeed, even in hindsight there is considerable uncertainty regarding the natural rates of unemployment and interest, and ambiguity about how best to model and estimate natural rates. Milton Friedman, arguing against natural rate-based policies in his presidential address to the American Economic Association, posited that "One problem is that [the policymaker] cannot know what the `natural' rate is. Unfortunately, we have as yet devised no method to estimate accurately and readily the natural rate of either interest or unemployment. And the `natural' rate will itself change from time to time." (3) Friedman's comments echo those made decades earlier by John H. Williams and by Gustav Cassel, who wrote of the natural rate of interest: "The bank cannot know at a certain moment what is the equilibrium rate of interest of the capital market." (4) Even earlier, Knut Wicksell stressed that "the natural rate is not fixed or unalterable in magnitude." (5) Recent research using modern statistical techniques to estimate the natural rates of unemployment, output, and interest indicates that this problem is no less relevant today than it was 35, 75, or 105 years ago.

These measurement problems appear particularly acute in the presence of structural change, when natural rates may vary unpredictably, subjecting estimates to increased uncertainty. Douglas Staiger, James Stock, and Mark Watson document that estimates of a time-varying natural rate of unemployment are very imprecise. (6) Orphanides and Simon van Norden show that estimates of the related concept of the natural rate of output (that is, potential output) are likewise plagued by imprecision. (7) Similarly, Thomas Laubach and John C. Williams document the great degree of uncertainty regarding estimates of the natural rate of interest. (8) These difficulties have led some observers to discount the usefulness of natural rate estimates for policymaking. William Brainard and George Perry conclude "that conventional estimates from a NAIRU [nonaccelerating-inflation rate of unemployment] model do not identify the full employment range with a degree of accuracy that is useful to policymaking." (9) Staiger, Stock, and Watson suggest a reorientation of monetary policy away from reliance on the natural rate of unemployment, noting that
   a rule in which monetary policy responds not to the level of the
   unemployment rate but to recent changes in unemployment without
   reference to the NAIRU (and perhaps to a measure of the deviation
   of inflation from a target rate of inflation) is immune to the
   imprecision of measurement that is highlighted in this paper. An
   interesting question is the construction of formal policy rules
   that account for the imprecision of estimation of the NAIRU. (10)


This question, coupled with the related issue of mismeasurement of the natural rate of interest, is the focus of this paper.

We employ a forward-looking quarterly model of the U.S. economy to examine the performance and robustness properties of simple interest rate policy rules in the presence of real-time mismeasurement of the natural rates of interest and unemployment. Our work builds on an active literature that has explored the implications of mismeasurement for monetary policy. (11) A key aspect of our investigation is the recognition that policymakers may be uncertain as to the true data-generating processes describing the natural rates of unemployment and interest and the extent of the mismeasurement problem that they face. As a result, standard applications of certainty equivalence based on the classic linear-quadratic-Gaussian control problem do not apply. (12) To get a handle on this difficulty, we compare the properties of policies optimized to provide good stabilization performance across a large range of alternative estimates of natural rate mismeasurement. We then examine the costs of basing policy decisions on rules that are optimized with incorrect baseline estimates of mismeasurement, that is, rules that attempt to properly account for the presence of uncertainty regarding the natural rates but inadvertently overestimate or underestimate the magnitude of the problem.

These robustness exercises point to a potentially important asymmetry with regard to possible errors in the design of policy rules attempting to account for natural rate uncertainty. We find that the costs of underestimating the extent of natural rate mismeasurement significantly exceed the costs of overestimating it. Adoption of policy rules optimized under the false presumption that misperceptions regarding the natural rates are likely to be small proves particularly costly in terms of stabilizing inflation and unemployment. By comparison, the inefficiency associated with policies incorrectly based on the presumption that misperceptions regarding the natural rates are likely to be large tends to be relatively modest. As a result, when policymakers do not possess a precise estimate of the magnitude of misperceptions regarding the natural rates, a robust strategy is to act as if the uncertainty they face is greater than their baseline estimates suggest. We show that overlooking these considerations can easily result in policies with considerably worse stabilization performance than anticipated.

Our results point toward an effective, simple strategy that is a robust solution to the difficulties associated with natural rate misperceptions. This strategy is to adopt, as guidelines for monetary policy, difference rules in which the short-term nominal interest rate is raised or lowered in response to inflation and changes in economic activity. These rules, which do not require knowledge of the natural rates of interest and unemployment and are consequently immune to likely misperceptions of these rates, emerge as the solution to a robust control exercise from a wider family of policy rule specifications. Although these rules are not "optimal" in the sense of delivering first-best stabilization performance under the assumption that policymakers have precise knowledge of the form and magnitude of the uncertainty they face, they are robust in that they effectively insure against major mistakes when such knowledge is not held with great confidence.

Finally, our results suggest that some important historical differences in monetary policy and macroeconomic outcomes over the past forty or so years can be traced to differences in the formulation of monetary policy that closely relate to the treatment of the natural rates. As we illustrate, misperceptions regarding the natural rates, importantly due to a steady increase in the natural rate of unemployment, could have contributed to the stagflationary outcomes of the 1970s. Paradoxically, a policy that would be optimal at stabilizing inflation and unemployment if the natural rates of unemployment and interest were known can yield dismal outcomes when the natural rates are rising and policymakers do not know it. In contrast, our analysis suggests that had policy followed a robust rule that ignores information about the levels of natural rates during the 1970s, outcomes could have been considerably better. Conversely, outcomes during the disinflationary boom of the 1990s appear consistent with the monetary authorities following a policy closer to our robust policy rules. The natural rate of unemployment apparently drifted downward significantly during the decade, which might have resulted in deflation had policymakers pursued the policy that real-time assessments of the natural rates would have dictated. In the event, policymakers during the mid- and late 1990s avoided this pitfall.

Policy in the Presence of Uncertain Natural Rates

As a starting point, we look at the nature of the problem in the context of a generalization of the simple policy rule proposed by John Taylor ten years ago. (13) Let f_t be the nominal interest (federal funds) rate, π_t the rate of inflation, and u_t the rate of unemployment, all measured in quarter t. The Taylor rule can then be expressed by

(1) f_t = r*_t + π_t + θ_π(π_t - π*) + θ_u(u_t - u*_t),

where π* is the policymaker's inflation target and r*_t and u*_t are the policymaker's estimates of the natural rates of interest and unemployment, respectively. Note that here we consider a variant of the Taylor rule that responds to the unemployment gap (the difference between the actual unemployment rate and its natural rate) instead of the output gap, recognizing that the two are related by Okun's Law. (14) As is well known, rules of this type have been found to perform quite well in terms of stabilizing economic fluctuations in model-based evaluations, at least when the natural rates of interest and unemployment are accurately measured. In his 1993 exposition, Taylor examined response parameters equal to 1/2 for the inflation gap and the output gap, which, using an Okun's coefficient of 2, corresponds to setting θ_π = 0.5 and θ_u = -1.0. We also consider a revised version of this rule with double the responsiveness of policy to the output gap (θ_u = -2.0 in our case), which Taylor found to yield improved stabilization performance relative to his original rule. (15)
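To make the rule concrete, equation 1 with Taylor's original coefficients can be sketched in a few lines of Python (the function name and the illustrative numbers are ours, not the paper's):

```python
def taylor_rule(pi, u, r_star, u_star, pi_star=2.0,
                theta_pi=0.5, theta_u=-1.0):
    """Equation 1: f_t = r*_t + pi_t + theta_pi*(pi_t - pi*)
    + theta_u*(u_t - u*_t). theta_u is negative: unemployment above
    its natural rate calls for a lower funds rate."""
    return r_star + pi + theta_pi * (pi - pi_star) + theta_u * (u - u_star)

# With inflation at target and unemployment at its natural rate,
# the rule prescribes the neutral nominal rate r* + pi.
print(taylor_rule(pi=2.0, u=5.0, r_star=2.0, u_star=5.0))  # 4.0
```

Note that both r_star and u_star here are the policymaker's real-time estimates, which is exactly where the mismeasurement problem enters.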

The promising properties of rules of this type were first reported in the Brookings volume edited by Ralph Bryant, Peter Hooper, and Catherine Mann, (16) which offered detailed comparisons of the stabilization performance of various interest rate-based policy rules in several macroeconometric models. (17) However, historical experience suggests that policy guidance from this family of rules may be rather sensitive to misperceptions regarding the natural rates of interest and unemployment. The experience of the 1970s offers a particularly stark illustration of the policy errors that may result. (18)

We explore two dimensions along which the Taylor rule has been generalized, which in combination offer the potential to mitigate the problem of natural rate mismeasurement. The first aims to mitigate the effects of mismeasuring the natural rate of unemployment by partly (or even fully) replacing the response to the unemployment gap with a response to the change in the unemployment rate. (19) Although in general it is not a perfect substitute for responding to the unemployment gap directly, responding to the change in the unemployment rate is likely to be reasonably effective because it calls for easing monetary policy when unemployment is rising and tightening it when unemployment is falling. (20) The second dimension we explore is incorporation of policy inertia, represented by the presence of the lagged short-term interest rate in the policy rule. As various authors have shown, (21) rules that exhibit a substantial degree of inertia can significantly improve the stabilization performance of the Taylor rule in forward-looking models. The presence of inertia in the policy rule also reduces the influence of the estimate of the natural rate of interest on the current setting of monetary policy and, therefore, the extent to which misperceptions regarding the natural rate of interest affect policy decisions. To see this, consider a generalized Taylor rule of the form

(2) f_t = θ_f f_{t-1} + (1 - θ_f)(r*_t + π_t) + θ_π(π_t - π*) + θ_u(u_t - u*_t) + θ_Δu(u_t - u_{t-1}).

The degree of policy inertia is measured by θ_f ≥ 0; cases where 0 < θ_f < 1 are frequently referred to as "partial adjustment"; the case of θ_f = 1 is termed a "difference rule" or "derivative control," (22) whereas θ_f > 1 represents superinertial behavior. (23) These rules nest the Taylor rule as the special case when θ_f = θ_Δu = 0. (24)
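The nesting of these special cases can be sketched as follows (a minimal Python sketch with illustrative parameter values of our own; as in the text, θ_u enters with a negative sign):

```python
def generalized_rule(f_lag, pi, u, u_lag, r_star, u_star, pi_star=2.0,
                     theta_f=0.0, theta_pi=0.5, theta_u=-1.0, theta_du=0.0):
    """Equation 2: f_t = theta_f*f_{t-1} + (1-theta_f)*(r*_t + pi_t)
    + theta_pi*(pi_t - pi*) + theta_u*(u_t - u*_t) + theta_du*(u_t - u_{t-1})."""
    return (theta_f * f_lag
            + (1 - theta_f) * (r_star + pi)
            + theta_pi * (pi - pi_star)
            + theta_u * (u - u_star)
            + theta_du * (u - u_lag))

# theta_f = theta_du = 0 recovers the Taylor rule. theta_f = 1 and
# theta_u = 0 gives a difference rule, in which the natural-rate
# estimates drop out entirely:
a = generalized_rule(7.0, 3.0, 6.0, 5.5, r_star=2.0, u_star=5.0,
                     theta_f=1.0, theta_u=0.0, theta_du=-0.5)
b = generalized_rule(7.0, 3.0, 6.0, 5.5, r_star=99.0, u_star=-3.0,
                     theta_f=1.0, theta_u=0.0, theta_du=-0.5)
assert a == b  # wildly wrong natural-rate estimates change nothing
```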

To illustrate more precisely the difficulty associated with the presence of misperceptions regarding the natural rates of unemployment and interest, it is useful to distinguish the real-time estimates of the natural rates available to policymakers when policy decisions are made, denoted û*_t and r̂*_t, from their "true" values u*_t and r*_t. If policy follows the generalized rule given by equation 2, with the real-time estimates in place of the true natural rates, then the "policy error" introduced in period t by misperceptions in period t is given by

(1 - θ_f)(r̂*_t - r*_t) + θ_u(u*_t - û*_t).

Although unintentional, these errors could subsequently induce undesirable fluctuations in the economy, worsening stabilization performance. The extent to which misperceptions regarding the natural rates translate into policy-induced fluctuations depends on the parameters of the policy rule. As is evident from the expression above, policies that are relatively unresponsive to real-time assessments of the unemployment gap, that is, those with θ_u close to zero, minimize the impact of misperceptions regarding the natural rate of unemployment. Similarly, inertial policies with θ_f near unity reduce the direct effect of misperceptions regarding the natural rate of interest. That said, inertial policies also carry forward the effects of past misperceptions of the natural rates of interest and unemployment on policy, and one must take account of this interaction in designing policies that will be robust to natural rate mismeasurement.
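The policy-error expression can be evaluated numerically to see how the rule's parameters govern the pass-through of misperceptions (the numbers below are hypothetical; hats mark real-time estimates):

```python
def policy_error(theta_f, theta_u, r_hat, r_true, u_true, u_hat):
    """Period-t policy error from natural-rate misperceptions:
    (1 - theta_f)*(r_hat - r_true) + theta_u*(u_true - u_hat)."""
    return (1 - theta_f) * (r_hat - r_true) + theta_u * (u_true - u_hat)

# Underestimating the natural rate of unemployment by 1 point under a
# Taylor rule (theta_f = 0, theta_u = -1) sets the funds rate 1 point
# too low--an inflationary bias, as in the 1970s narrative below.
print(policy_error(0.0, -1.0, r_hat=2.0, r_true=2.0,
                   u_true=6.0, u_hat=5.0))  # -1.0
# A difference rule (theta_f = 1, theta_u = 0) makes the same
# misperception harmless.
print(policy_error(1.0, 0.0, r_hat=2.0, r_true=2.0,
                   u_true=6.0, u_hat=5.0))  # 0.0
```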

One policy rule that is immune to natural rate mismeasurement of the kind considered here is a "difference" rule, in which θ_f = 1 and θ_u = 0. (25)

(3) f_t = f_{t-1} + θ_π(π_t - π*) + θ_Δu(u_t - u_{t-1}).

We note that this policy rule is as simple, in terms of the number of parameters, as the original formulation of the Taylor rule. In addition, this rule is certainly simpler to implement than the Taylor rule, because it does not require knowledge of either the natural rate of interest or the natural rate of unemployment. However, because this type of rule ignores potentially useful information about the level of the unemployment rate and the natural rates of interest and unemployment, its performance relative to the Taylor rule and the generalized rule will depend on the degree of mismeasurement and the structure of the model of the economy, as we explore below. It is also useful to note that this rule is closely related to price-level and nominal income targeting rules, stated in first-difference form.

Historical Estimates of Natural Rates

Considerable evidence suggests that the natural rates of unemployment and interest vary significantly over time. In the case of the unemployment rate, a number of factors have been put forward as underlying this variation, including changing demographics, changes in the efficiency of job matching, changes in productivity, effects of greater openness to trade, and changing rates of disability and incarceration. (26) However, a great deal of uncertainty surrounds the magnitude and timing of these effects on the natural rate of unemployment. Similarly, the natural rate of interest is likely to be influenced by variables that appear to change over time, including the rate of trend income growth, fiscal policy, and household preferences. (27) But the factors determining the natural rate of interest are not directly observed, and the quantitative relationship between them and the natural rate remains poorly understood.

Even with the benefit of hindsight and "best practice" techniques, our knowledge about the natural rates remains cloudy, and this situation is unlikely to improve in the foreseeable future. Staiger, Stock, and Watson highlight three types of uncertainty regarding natural rate estimates. (28) For estimated models with deterministic natural rates, sampling uncertainty related to the imprecision of estimates of model parameters is one source of uncertainty. Sampling uncertainty alone yields 95 percent confidence intervals of between 2 and 4 percentage points for the natural rate of unemployment, (29) and between 3 and 4 percentage points for the natural rate of interest. (30) Allowing the natural rate to change unpredictably over time adds another source of uncertainty; for example, the 95 percent confidence interval for a stochastically time-varying natural rate of interest is over 7 percentage points, twice that associated with a constant natural rate. Finally, there is considerable uncertainty and disagreement about the most appropriate approach to modeling and estimating natural rates, and this model uncertainty implies that the confidence intervals based on any particular model may understate the true degree of uncertainty that policymakers face. Importantly for the analysis in this paper, policymakers cannot be confident that their natural rate estimates are efficient or consistent, but realistically must make do with imperfect modeling and estimating methods.

Of course, in practice, policymakers are at an even greater disadvantage than the econometrician who attempts to estimate natural rates retrospectively, because policymakers must act on "one-sided," or real-time natural rate estimates, which are based only on the data available at the time the decision is made. As documented below, such estimates typically are much noisier than the smooth retrospective, or "two-sided," estimates generally reported in the literature. For a given model, the difference between the one-sided and the two-sided estimates provides an estimate of natural rate misperceptions resulting from the real-time nature of the policymaker's problem.

To illustrate the extent of these measurement difficulties, we provide comparisons of retrospective and real-time estimates of the natural rates of unemployment and interest. The various measures correspond to alternative implementations of two basic statistical methodologies that have been employed in the literature: univariate filters and multivariate unobserved-components models. The univariate filters separate the cyclical component of a series from its secular trend and use the latter as a proxy for the natural level of the detrended series. Univariate filters possess the advantages that they impose very little structure on the problem and are relatively simple to implement. Because multivariate methods bring additional information to bear on the decomposition of trend and cycle, they can provide more accurate estimates of natural rates if the underlying model is correctly specified. However, there is a great degree of uncertainty about model misspecification, especially regarding the proper modeling of low-frequency behavior, and as a result the theoretical benefits from multivariate methods may be illusory in practice.

We examine two versions each of two popular univariate filters, the Hodrick-Prescott (HP) filter and the band-pass (BP) filter described by Marianne Baxter and Robert King. (31) For the HP filter we consider two alternative implementations, one with the smoothness parameter λ = 1,600, the value most commonly used in analyzing quarterly data, and one with λ = 25,600, which smooths the data more and is closer to the approach advocated by Julio Rotemberg. (32) Application of the BP filter requires a choice of the range of frequencies identified as associated with the business cycle, which are to be filtered from the underlying series. We examine two popular alternatives: an eight-year window, favored by Baxter and King and by Lawrence Christiano and Terry Fitzgerald, (33) and a fifteen-year window employed by Staiger, Stock, and Watson to estimate a "trend" for the unemployment rate. (34) We apply these four univariate filters to obtain both one-sided (real time) and two-sided (retrospective) estimates of the natural rates of unemployment and interest.
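To illustrate the one-sided versus two-sided distinction, here is a minimal pure-Python sketch of the HP filter; practical work would use a statistics library with a banded solver rather than the dense elimination below:

```python
def hp_filter(y, lam=1600.0):
    """Two-sided Hodrick-Prescott trend: minimize sum (y - tau)^2
    + lam * sum (second differences of tau)^2, which amounts to
    solving (I + lam * D'D) tau = y, D the second-difference operator."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):              # row k of D hits tau[k..k+2]
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    b = list(map(float, y))
    for col in range(n):                # Gaussian elimination (A is SPD)
        for row in range(col + 1, n):
            m = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= m * A[col][j]
            b[row] -= m * b[col]
    tau = [0.0] * n
    for i in range(n - 1, -1, -1):      # back-substitution
        s = sum(A[i][j] * tau[j] for j in range(i + 1, n))
        tau[i] = (b[i] - s) / A[i][i]
    return tau

def one_sided_trend(y, lam=1600.0):
    """Real-time trend: for each t, filter only the data through t and
    keep the endpoint--the estimate a policymaker would have had."""
    return [hp_filter(y[: t + 1], lam)[-1] for t in range(len(y))]
```

Running both on the same series and comparing the endpoints against the full-sample trend reproduces, in miniature, the excess sensitivity of real-time univariate estimates discussed below.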

We also obtain estimates of the natural rates based on two multivariate unobserved-components models, and we offer comparisons with models similar to those proposed by other authors. These models suppose that the "true" processes for the natural rates of interest and unemployment can be reasonably modeled as random walks:

(4) u*_t = u*_{t-1} + η_{u,t},   η_u ~ N(0, σ²_{ηu})

(5) r*_t = r*_{t-1} + η_{r,t},   η_r ~ N(0, σ²_{ηr}).

For the natural rate of unemployment we implement a Kalman filter model, similar to those used by Staiger, Stock, and Watson and Robert Gordon, (35) to estimate a time-varying NAIRU from an estimated Phillips curve. (36) (In what follows we treat the NAIRU and the natural rate of unemployment as synonymous.) We also examine estimates following the procedure detailed by Laurence Ball and Gregory Mankiw. (37) These authors posit a simple accelerationist Phillips curve relating the annual change in inflation to the annual unemployment rate. They estimate the natural rate of unemployment by applying the HP filter to the residuals from this relationship.

For the natural rate of interest we apply the Kalman filter to an equation relating the unemployment gap and the real interest rate gap (the difference between the real federal funds rate and the natural rate of interest). The basic specification and methodology are close to those used by Laubach and Williams, (38) but we assume that the natural rate of interest follows a random walk, whereas they allow for an explicit relationship between the natural rate and the estimated trend growth rate of GDP. The basic identifying assumption is that the unemployment gap converges to zero if the real rate gap is zero. Thus, stable inflation in this model is consistent with both the real interest rate and the unemployment rate equaling their respective natural rates. (39)
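A stripped-down univariate analogue of these methods is the local-level Kalman filter, in which a random-walk state is tracked through noisy observations. The sketch below illustrates only the filtering idea, not the paper's multivariate Phillips-curve specification:

```python
def local_level_filter(y, q, r, x0=0.0, p0=1e6):
    """One-sided (filtered) estimates for the local-level model
    x_t = x_{t-1} + eta_t, eta ~ N(0, q);  y_t = x_t + eps_t, eps ~ N(0, r).
    A diffuse prior (large p0) lets the data dominate early on."""
    x, p, out = x0, p0, []
    for obs in y:
        p = p + q                 # predict: the state is a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (obs - x)     # update with the new observation
        p = (1 - k) * p
        out.append(x)
    return out

# The output is the one-sided estimate; running a smoother backward
# over the filtered output would give the two-sided (retrospective)
# estimate that the tables and figures below compare it against.
```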

As noted above, these multivariate approaches to estimating natural rates are subject to specification error, and therefore the resulting estimates may be inefficient or inconsistent. For example, the models used to estimate the natural rate of unemployment impose the accelerationist restriction that the sum of the coefficients on lagged inflation in the inflation equation equal unity. But as Thomas Sargent demonstrated, (40) reduced-form characterizations of the Phillips curve consistent with the natural rate hypothesis do not necessarily imply this restriction, and imposing it is invalid. A very different view, which likewise comes to the conclusion that these models are misspecified, is that of Franco Modigliani and Lucas Papademos, who interpret the Phillips curve as a structural relationship but, instead of imposing the natural rate hypothesis, propose the concept of a "noninflationary rate of unemployment, or NIRU." (41) Following this approach, Brainard and Perry report estimates of the natural rate of unemployment when the assumption of constant parameters and the accelerationist restriction are relaxed. (42)

Retrospective estimates of the natural rate of unemployment exhibit variation over time and across methods at given points in time. Table 1 reports estimates of the natural rate using the methods described above, as well as the most recent NAIRU estimates by the Congressional Budget Office, (43) the Kalman filter-based NAIRU estimates of Staiger, Stock, and Watson and of Gordon, (44) and Robert Shimer's estimates based on demographic factors. (45) All of these estimates are two-sided in the sense that they use data over the whole sample period to arrive at an estimate for the natural rate at any given past quarter. Figure 1 plots a representative set of these estimates over 1969-2002; for comparison, the average rate of unemployment over that period was nearly 6 percent.

[FIGURE 1 OMITTED]

The retrospective estimates share a common pattern: generally they are relatively low at the end of the 1960s, rise during the 1970s, and trend downward thereafter, reaching levels in the late 1990s similar to those in the late 1960s. However, these estimates also exhibit substantial dispersion at most points in time, indicating that, even in hindsight, precisely identifying the natural rate of unemployment is quite difficult. For example, the estimates for both 1970 and 1980 cover a 2-percentage-point range.

As stressed above, the estimates of the natural rate of unemployment that are relevant for setting policy are not those shown in table 1 and figure 1, but rather the one-sided estimates that incorporate only information available at the time. Figure 2 shows such estimates for a range of the methods described above. In the case of the univariate filters, the reported series are constructed from estimates of the trend at the last available observation at each point in time. In the case of the multivariate filters, the rate estimates are likewise based only on observed data, but the estimates of the model parameters are from data for the full sample. Given the relative imprecision of many of the latter estimates, the true real-time estimates in which all model parameters are estimated using only data available at the time are likely to be considerably worse than the one-sided estimates reported here.

[FIGURE 2 OMITTED]

A striking feature of the real-time estimates obtained using the univariate filters is how much more closely they track the actual data than do the smooth, retrospective estimates reported in figure 1. This excess sensitivity of univariate filters to final observations is a well-known problem. (46) Evidently, these filters have difficulty distinguishing between cyclical and secular fluctuations in the underlying series until the subsequent evolution of the data becomes known. This problem is less evident in the multivariate filters, where the natural rate estimate is updated based on inflation surprises as opposed to movements in the unemployment rate itself.

Figures 3 and 4 plot a set of two-sided and one-sided estimates, respectively, of the natural rate of interest. Throughout this paper the real interest rate is constructed as the difference between the federal funds rate and the ex post rate of inflation (based on the GDP price index). Each figure shows two multivariate estimates (our Kalman filter estimate described above as well as that from Laubach and Williams) (47) and estimates from the same univariate filters used to estimate the natural rate of unemployment. As in the case of the natural rate of unemployment, the various techniques yield a broad range of possible retrospective and real-time estimates of the natural rate of interest over time.

[FIGURE 3-4 OMITTED]

Given the wide dispersion in these natural rate estimates, especially the more policy-relevant one-sided estimates, a natural question is whether one can discriminate between the methods according to their empirical usefulness in predicting inflation and unemployment. To test the forecasting performance of methods using the natural rate of unemployment, we compare inflation forecast errors using a simple Phillips curve model in which inflation depends on four lags of inflation, the lagged change in the unemployment rate, and two lags of the unemployment gap based on the various one-sided estimates of the natural rate of unemployment. We also consider the performance of a simple fourth-order autoregressive, or AR(4), inflation forecasting equation without any unemployment rate terms. For this exercise we use the revised data current as of this writing. As seen in the upper panel of table 2, the equations that include the unemployment gap outperform (that is, have a lower forecast standard error than) the AR(4) specification, but inflation forecasting accuracy is virtually identical across the specifications that include the unemployment gap. (48) To test the forecasting performance of methods using the natural rate of interest, we apply the same basic procedure to a simple unemployment equation, where the unemployment rate depends on two lags of itself and the lagged real interest rate gap. This yields the parallel result, shown in the lower panel of the table. Evidently, one cannot easily discriminate across specifications of the natural rates based on forecasting performance.

We now use the different natural rate estimates presented above to gauge the likely magnitude and persistence of natural rate misperceptions. We start by computing natural rate misperceptions due solely to the limitation that only observed data can be used in real time, assuming that the correct model for the natural rate is known. Given the problems of sampling and model uncertainty, we view these estimates as lower bounds on the true uncertainty of natural rate estimates. The first column of the upper panel of table 3 reports the sample standard deviations of the difference between the two-sided and the one-sided estimates of the natural rate of unemployment (u*_t - û*_t) for the various estimation methods. This standard deviation ranges from about 0.5 to 0.8, with the Kalman filter estimate lying in the center at 0.66. The lower panel of the table reports the corresponding results for estimates of the natural rate of interest. The standard deviations in this case range from 0.9 to 1.7, with the Kalman filter estimate at 1.44. In our subsequent analysis we use the estimates from our multivariate Kalman filter method as a baseline measure of the uncertainty regarding real-time perceptions of the natural rates of interest and unemployment in the historical data.

Natural rate misperceptions are highly persistent. This persistence can be characterized by the following first-order autoregressive processes:

(6) $\hat{u}^*_t - u^*_t = \rho_u\left(\hat{u}^*_{t-1} - u^*_{t-1}\right) + v_{u,t}$

(7) $\hat{r}^*_t - r^*_t = \rho_r\left(\hat{r}^*_{t-1} - r^*_{t-1}\right) + v_{r,t},$

where the errors $v_{u,t}$ and $v_{r,t}$ are assumed to be independent over time but may be correlated with each other and with other shocks realized during period t, including, importantly, the unobserved errors of the underlying processes for the natural rates, $\eta_{u,t}$ and $\eta_{r,t}$. Table 3 also presents least squares estimates of $\rho$ and $\sigma_v$ for each of the various misperceptions measures. In all cases, misperceptions are highly persistent, with the Kalman filter estimate lying in the middle of the range on this dimension also. Note that this persistence does not necessarily imply any sort of inefficiency in the real-time estimates, but merely reflects the nature of filtering problems in general.
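The persistence estimates in table 3 come from first-order autoregressions of this form. A self-contained sketch of the least squares estimates of $\rho$ and $\sigma_v$, applied to an invented misperception series rather than the paper's data:

```python
import numpy as np

def fit_ar1(x):
    """OLS estimates of rho and the innovation standard deviation
    for x_t = rho * x_{t-1} + v_t (no intercept)."""
    y, x_lag = x[1:], x[:-1]
    rho = float(x_lag @ y / (x_lag @ x_lag))
    v = y - rho * x_lag
    sigma_v = float(np.sqrt(v @ v / (len(v) - 1)))
    return rho, sigma_v

# Simulate a highly persistent misperception series (parameters invented)
rng = np.random.default_rng(1)
T, rho_true, sigma_true = 400, 0.95, 0.3
mis = np.zeros(T)
for t in range(1, T):
    mis[t] = rho_true * mis[t - 1] + sigma_true * rng.standard_normal()

rho_hat, sigma_hat = fit_ar1(mis)  # recovers values near (0.95, 0.3)
```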

We now extend our analysis of the mismeasurement problem to include model uncertainty. For this purpose we compare the one-sided estimate using each method with each of the two-sided estimates. For our set of six methods this yields thirty-six measures of misperceptions for the natural rates of unemployment and interest. Table 4 summarizes the frequency distribution of the standard deviation and of the persistence measure from these alternative estimates of misperceptions. Both the standard deviations and the persistence measure of our baseline (Kalman) estimates for both natural rates, from table 3, are close to the 25th percentile as shown in table 4. Table 4 indicates generally larger and much more persistent misperceptions than those based on comparing the one- and two-sided estimates from a single model; indeed, the magnitude of misperceptions can be as much as twice that implied by the Kalman filter model. Moreover, these calculations do not reflect sampling uncertainty. In summary, combining the three forms of natural rate uncertainty suggests that conventional estimates of misperceptions based on comparing one-sided and two-sided estimates using a single estimation method are overly optimistic about the magnitude and persistence of the problem faced by policymakers.

A Simple Estimated Model of the U.S. Economy

We evaluate monetary policy rules using a simple rational expectations model, the core of which consists of the following two equations:

(8) $\pi_t = \phi_\pi\,\pi^e_{t+1} + (1 - \phi_\pi)\,\pi_{t-1} + \alpha_\pi\,u^e_t + e_{\pi,t}$

(9) $u_t = \phi_u\,u^e_{t+1} + \chi_1\,u_{t-1} + \chi_2\,u_{t-2} + \alpha_r\,r^a_{t-1} + e_{u,t}$

Here we use u to denote the unemployment gap and $r^a$ to denote the real interest rate gap based on a one-year bill. The superscript e indicates the expected value of the variable. This model combines forward-looking elements of the new synthesis model with intrinsic inflation and unemployment inertia. (49) Given the uncertainty regarding the proper specification of inflation and unemployment dynamics, later in the paper we also consider alternative specifications, including one with no intrinsic inflation and one with adaptive expectations.

The Phillips curve in this model (equation 8) relates inflation (measured as the annualized percentage change in the GDP price index) during quarter t to lagged inflation, expected future inflation, and expectations of the unemployment gap during the quarter, using the retrospective estimates of the natural rate discussed below. The estimated parameter $\phi_\pi$ measures the importance of expected inflation in determining inflation. The unemployment equation (equation 9) relates the unemployment gap during quarter t to the expected future unemployment gap, two lags of the unemployment gap, and the lagged real interest rate gap. Here two elements importantly reflect forward-looking behavior. The first is the estimated parameter $\phi_u$, which measures the importance of expected unemployment, and the second is the duration of the real interest rate, which serves as a summary of the influence of interest rates of various maturities on economic activity. Because data on long-run inflation expectations are lacking, we limit the duration of the real rate to one year.

In estimating this model we are confronted with the difficulty that expected inflation and unemployment are not directly observed. Instrumental variables and full-information maximum likelihood methods impose the restriction that the behavior of monetary policy and the formation of expectations must be constant over time, although neither proposition appears tenable over the sample period that we consider (1969-2002). Instead we follow the approach of John Roberts and Glenn Rudebusch and use the median forecasts for inflation and unemployment in the Survey of Professional Forecasters as proxies for expectations. (50) We use the forecast from the previous quarter; that is, we assume expectations are based on information available at time t - 1. To match the inflation and unemployment data as well as possible with the forecasts, we use first announced estimates of these series. (51) Our primary sources for these data are the Real-Time Dataset for Macroeconomists and the Survey of Professional Forecasters, both currently maintained by the Federal Reserve Bank of Philadelphia. (52) Using the least squares method, we obtain the following estimates over the sample 1969:1 to 2002:2 (this choice of sample period reflects the availability of the Survey of Professional Forecasters data):

(10) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Standard error of the regression (SER) = 1.38, Durbin-Watson statistic (DW) = 2.09.

(11) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In these results the numbers in parentheses are the standard errors of the corresponding regression coefficients. The estimated unemployment equation also includes a constant term (not reported) that captures the average premium of the one-year Treasury bill rate we use for estimation over the average of the federal funds rate, which corresponds to the natural rate of interest estimates we employ in the model. In the model simulations we impose the expectations theory of the term structure whereby the one-year rate equals the expected average of the federal funds rate over four quarters.

In addition to the equations for inflation and the unemployment rate, we need to model the processes that generate both the true values of the natural rates of unemployment and interest and policymakers' real-time estimates of these rates. For this purpose we use our Kalman filter estimates as a baseline for the specification of the natural rate processes. Throughout the remainder of the paper, we assume that the true values for the natural rates are given by the two-sided retrospective Kalman filter estimates. Specifically, we append to the basic macroeconomic model our equations 4 and 5 for $u^*$ and $r^*$, respectively, and compute the equation residuals--the "shocks" to the true natural rates--using the two-sided Kalman filter estimates.

For the policymakers' estimates of natural rates, we assume that the difference between the true and the estimated values follows the AR(1) processes described by equations 6 and 7, with the AR(1) coefficients set equal to those estimated from the regressions using the difference between the one- and the two-sided Kalman filter estimates reported in table 3. As seen in that table, this specification approximates several common filtering methods. The residuals from these equations represent the shocks to mismeasurement under the assumption that the policymaker possesses the correctly specified Kalman filter models.

Because we are interested in the possibility that the policymakers' natural rate estimates result from a misspecified model, we allow for a range of estimates of the magnitude of natural rate mismeasurement, indexed by s, in our policy experiments. The case of s = 0 corresponds to the "best case" benchmark (a standard assumption in the policy rule literature), in which the policymaker is assumed to observe the true value of both natural rates in real time. For this case we set the residuals of the two mismeasurement equations to zero. The case of s = 1 corresponds to the assumption that the policymaker possesses the correctly specified Kalman filter models (including knowledge of all model parameters). In this case the residuals from the mismeasurement equation are set to their historical values. As discussed above, owing to the possibility of model misspecification, this calculation most likely yields a conservative figure for the magnitude of real-world natural rate misperceptions. To approximate the policymakers' use of a misspecified model of natural rates, we examine simulations where we amplify the magnitude of misperceptions by multiplying the residuals of the mismeasurement equations by s. As indicated by the results in table 4, incorporating model misspecification can yield differences between the one- and the two-sided estimates that are on average twice as large as those implied by comparing the one- and the two-sided Kalman filter estimates, implying a value of s of up to 2. (53) In addition, these calculations ignore sampling uncertainty associated with estimated models; in consideration of this source of uncertainty we also examine the case of s = 3.

For a given value of s, we estimate the variance-covariance matrix of the six model equation innovations (corresponding to equations 4-7, 10, and 11) using the historical equation residuals, where the misperception residuals are multiplied by s, as described above. Note that, by estimating the variance-covariance matrix in this way, we preserve the correlations among shocks to inflation, the unemployment rate, changes in the natural rates, and natural rate misperceptions present in the data. For example, shocks to misperceptions of $r^*$ are positively correlated with shocks to the unemployment rate and with misperceptions of $u^*$, and shocks to misperceptions of $u^*$ are negatively correlated with shocks to inflation.
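Amplifying the misperception residuals by s while preserving the cross-correlations in the data amounts to rescaling two columns of the residual matrix before estimating the variance-covariance matrix. A sketch with invented residuals (the column ordering, with the misperception residuals placed in columns 2 and 3, is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
resids = rng.standard_normal((120, 6))  # stand-in residual matrix, one column per equation

s = 2.0
scaled = resids.copy()
scaled[:, 2:4] *= s  # amplify the two misperception residuals

V0 = np.cov(resids, rowvar=False)   # baseline variance-covariance matrix
Vs = np.cov(scaled, rowvar=False)   # matrix with amplified misperceptions
```

Variances of the scaled columns grow by $s^2$ and their covariances with the other shocks by s, while every correlation is unchanged, which is the sense in which the historical comovements are preserved.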

For a given monetary policy rule of the form of equation 1, we solve for the unique stable rational expectations solution, if one exists, using Gary Anderson and George Moore's implementation of the method developed by Olivier Blanchard and Charles Kahn. (54) Given the model solution and the variance-covariance matrix of equation innovations, we then numerically compute the unconditional moments of the model. This method of computing unconditional moments is equivalent to, but computationally more efficient than, computing them from stochastic simulations of extremely long length. (55)
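Once the stable solution is in hand, the model has a reduced-form law of motion $x_{t+1} = A x_t + e_{t+1}$, and the unconditional covariance matrix solves a discrete Lyapunov equation, which is the computation referred to above. A minimal sketch with an invented stable two-variable system (this is not the paper's solved model; SciPy is assumed available):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Invented stable law of motion x_{t+1} = A x_t + e_{t+1}, e ~ (0, Omega)
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
Omega = np.array([[1.0, 0.2],
                  [0.2, 0.5]])

# The unconditional covariance V satisfies V = A V A' + Omega
V = solve_discrete_lyapunov(A, Omega)
```

This is equivalent to the limit of stochastic simulations of unbounded length, but it is computed in a single step, which is the efficiency gain the text describes.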

Policy Rule Evaluation

We now examine how uncertainty regarding the natural rates of interest and unemployment influences the design and performance of policy rules. We assume that the policymaker is interested in minimizing the loss, L, equal to the weighted sum of the unconditional squared deviations of inflation from its target, those of the unemployment rate from its true natural rate, and the change in the short-run interest rate:

(12) $L = \omega\,\mathrm{Var}(\pi - \pi^*) + (1 - \omega)\,\mathrm{Var}(u - u^*) + \psi\,\mathrm{Var}(\Delta f).$

As a benchmark for our analysis and for comparability with earlier policy evaluation work, we consider preferences equivalent to placing equal weights on the variability of inflation and the output gap. Assuming an Okun's Law coefficient of 2, this weighting implies setting $\omega$ = 0.2. We include a relatively modest concern for interest rate stability, setting $\psi$ = 0.05. Later we show that the main qualitative results are not sensitive to changes in $\omega$ and $\psi$. In all our experiments we assume that the policymaker has a fixed and known inflation target, $\pi^*$. (56)
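Equation 12 is straightforward to evaluate once the unconditional variances are in hand. A sketch with the weights used in the text (the variance inputs in the example are invented):

```python
def loss(var_pi, var_gap, var_df, omega=0.2, psi=0.05):
    """L = omega*Var(pi - pi*) + (1 - omega)*Var(u - u*) + psi*Var(delta f).
    With an Okun's Law coefficient of 2, Var(output gap) = 4 * Var(u - u*),
    so equal weights on inflation and output variability imply omega = 1/5 = 0.2."""
    return omega * var_pi + (1 - omega) * var_gap + psi * var_df

example = loss(4.0, 1.0, 2.0)  # 0.2*4 + 0.8*1 + 0.05*2
```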

We start our analysis of the effects of natural rate mismeasurement by examining macroeconomic performance under the classic and revised forms of the original Taylor rules:

(13) $f_t = \hat{r}^*_t + \pi_t + 0.5\,(\pi_t - \pi^*) - (u_t - \hat{u}^*_t)$ (the classic rule)

(14) $f_t = \hat{r}^*_t + \pi_t + 0.5\,(\pi_t - \pi^*) - 2\,(u_t - \hat{u}^*_t)$ (the revised rule).

The direct effects of natural rate mismeasurement on the setting of policy are transparent under these rules: a 1-percentage-point error in $r^*$ translates into a 1-percentage-point error in the interest rate, and a 1-percentage-point error in $u^*$ translates into a -1-percentage-point error in the interest rate for the classic Taylor rule and a -2-percentage-point error for the revised rule. The first panel of table 5 reports the standard deviations of the unemployment gap, the inflation rate, and the change in the federal funds rate, as well as the associated loss under the classic Taylor rule in our model, for values of s between 0 and 3. The next panel does the same for the revised Taylor rule. Figure 5 illustrates some of these results graphically, tracing out the unconditional standard deviations of inflation (top panel) and the unemployment gap (bottom panel) for our model economy when policy is based on the classic Taylor rule or the revised Taylor rule for different values of s.
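This pass-through can be verified directly from the rule formulas. A sketch (the functional form follows the classic Taylor rule as described in the text, with an unemployment response of 1 for the classic rule and 2 for the revised rule; the numerical inputs are invented):

```python
def taylor_rate(pi, u, r_star_hat, u_star_hat, pi_star=2.0, theta_u=1.0):
    """f = r*_hat + pi + 0.5*(pi - pi*) - theta_u*(u - u*_hat);
    theta_u = 1 for the classic rule, 2 for the revised rule."""
    return r_star_hat + pi + 0.5 * (pi - pi_star) - theta_u * (u - u_star_hat)

base = taylor_rate(2.0, 5.0, 2.0, 5.0)               # no misperceptions
err_r = taylor_rate(2.0, 5.0, 3.0, 5.0) - base       # 1-point r* error moves the rate 1-for-1
err_u_classic = taylor_rate(2.0, 5.0, 2.0, 6.0) - base               # 1-point u* error, classic
err_u_revised = taylor_rate(2.0, 5.0, 2.0, 6.0, theta_u=2.0) - base  # doubled under the revised rule
```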

[FIGURE 5 OMITTED]

Starting with the case of no misperceptions, s = 0, we see that both the classic and the revised Taylor rules are effective at stabilizing inflation and the unemployment gap. The revised variant of the rule is more responsive to the perceived degree of slack in labor markets and thereby achieves lower variability of both inflation and the unemployment gap, at the cost of modestly higher variability of the change in the interest rate. (57) However, policy outcomes for both rules deteriorate markedly, and increasingly so as the degree of misperception regarding the natural rates increases. For example, under the classic Taylor rule the standard deviation of inflation is 2.14 when s is assumed to be 0, but it increases to 3.67 under the assumption that s = 1, and to 8.72 for s = 3. In addition, and of greater interest from a policy design perspective, figure 5 illustrates that the performance deterioration owing to natural rate uncertainty is worse for the revised Taylor rule, because it places greater emphasis on the unemployment gap. Indeed, for even modest levels of natural rate misperceptions, the classic Taylor rule performs better than the revised version, a result consistent with previous findings based on output gap mismeasurement. (58)

We now examine the efficient choices for the two parameters, $\theta_\pi$ and $\theta_u$, that measure the responses to the inflation and unemployment gaps, respectively, under a policy rule of the same functional form as the Taylor rule with natural rate uncertainty. In this exercise we assume that the policymaker is interested in identifying a simple, fixed policy rule that can provide guidance for minimizing the weighted variances in the loss function (equation 12) with the weights described above. Figure 6 presents the optimal choices of the two parameters for various values of s. As the left-hand panel shows, the optimal responsiveness to inflation increases with uncertainty in this case. From the right-hand panel it is also evident that the optimal response to the unemployment gap drops (in absolute value) and approaches zero as the degree of mismeasurement increases to values of s beyond 2. This finding confirms the parallel result, reported by various authors, of attenuated responses to the output gap as an efficient response to uncertainty regarding the measurement of the output gap in level rules. (59)
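The attenuation result has a transparent static analogue in the spirit of Brainard: when policy responds with coefficient $\theta$ to a noisy gap estimate, the loss-minimizing $\theta$ shrinks toward zero as the measurement noise grows. A stylized one-equation sketch (this is not the paper's model; the grid and variances are invented):

```python
import numpy as np

# Outcome x = (1 - theta)*u - theta*eps, where u is the true gap (unit variance)
# and eps is measurement error in the gap estimate with standard deviation s.
thetas = np.linspace(0.0, 1.5, 151)

def expected_loss(theta, s):
    """E[x^2] for independent u and eps."""
    return (1 - theta) ** 2 + theta ** 2 * s ** 2

best_thetas = []
for s in (0.0, 1.0, 2.0):
    best_thetas.append(thetas[np.argmin(expected_loss(thetas, s))])
# The optimal response falls from 1.0 toward zero as mismeasurement grows.
```

The grid search recovers the analytic optimum $\theta^* = 1/(1 + s^2)$, so the response is 1.0 with no noise, 0.5 at s = 1, and 0.2 at s = 2.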

[FIGURE 6 OMITTED]

This attenuation result contrasts with standard applications of the principle of certainty equivalence whereby, under certain conditions, the policymaker could compute the optimal policy abstracting from uncertainty and apply the resulting optimal rule by substituting into it, for the unobserved values, estimates of the natural rates based on an optimal filter. (60) Rather, our result is similar to Brainard's conservatism principle, (61) where attenuation is shown to be optimal when policy effectiveness is uncertain.

Two key conditions that are necessary for the standard application of certainty equivalence are violated in our analysis. First, we focus on "simple" policy rules that respond to only a subset of the relevant state variables of the system, whereas certainty equivalence applies only to fully optimal rules. The distinction is especially important in the presence of concern about model misspecification. As discussed by Andrew Levin, Volker Wieland, and Williams, (62) simple rules appear to be more robust to general forms of model uncertainty than are rules optimized to a specific model, indicating that, in the broader context of the types of uncertainty that policymakers face, an exclusive focus on fully optimal rules may be misguided. Second, and especially relevant for our analysis, the traditional applications of certainty equivalence rely on the existence of a model of natural rates that is presumed to be true and known with certainty, which policymakers can apply to obtain "optimally" filtered estimates of the natural rates. In light of the uncertainty about how to best model and estimate the natural rate processes discussed earlier, we find this assumption untenable. (63)

We now assess the implications of ignorance regarding the precise degree of uncertainty about the natural rates that policymakers may face. We start by examining the costs of basing policy decisions on rules that are optimized with incorrect baseline estimates of this uncertainty. We examine the performance of rules optimized for natural rate mismeasurement of degree s = 0 and s = 1 when the true extent of mismeasurement may be different. The economic outcomes associated with this experiment are shown in figure 7 and the third panel of table 5, for true values of s ranging from 0 to 3. As seen in the figure, the rule optimized on the assumption of no misperceptions performs poorly even at the baseline value of s = 1, whereas the rule optimized assuming s = 1 is much more robust to natural rate mismeasurement.

[FIGURE 7 OMITTED]

These experiments point to an asymmetry in the costs associated with natural rate mismeasurement: the cost of underestimating the extent of misperceptions significantly exceeds the cost of overestimating it. Policy rules optimized under the false presumption that misperceptions regarding the natural rates are likely to be small are characterized by large responses to the unemployment gap. This can prove extremely costly. By comparison, policies incorrectly based on the presumption that misperceptions regarding the natural rates are likely to be large are more timid in their response to the unemployment gap, but this is associated with little inefficiency. In the case where there are in fact no misperceptions, the policy optimized under the assumption of s = 1 delivers modestly worse results than the policy optimized under the assumption of no misperceptions; however, in the presence of even a modest degree of misperception, the performance of the policy designed on the assumption of no misperceptions deteriorates dramatically as the degree of mismeasurement increases.

Given the potential difficulties associated with the optimized Taylor rules in the presence of natural rate mismeasurement, it is of interest to compare the performance of these rules with our alternative family of "robust" difference rules of the form given by equation 3. In the present context, this class of rules is robust to natural rate mismeasurement because natural rate estimates do not enter into the implied policy setting decision. The final row of table 5 presents the efficient choice of the parameters $\theta_\pi$ and $\theta_{\Delta u}$ corresponding to this robust rule chosen to minimize the same loss as the optimized Taylor rules. The stabilization performance of this rule is also shown in figure 7. In this model this rule performs about as well as the Taylor rules (equation 1) when the natural rates are assumed to be known, and, consequently, it dominates these rules in the presence of uncertainty, since with greater uncertainty about misperceptions regarding the natural rates, the performance of the Taylor rules deteriorates, whereas the performance of the robust rule remains unchanged. The key reason that the robust difference rule performs so well relative to the Taylor rules, even in the absence of natural rate uncertainty, is that it incorporates a great deal of policy inertia. As noted above, this is an important ingredient of successful policies in forward-looking macroeconomic models when policymakers are concerned about interest rate variability.

Given these results, we now consider a more flexible form of policy rule that combines level and first-difference features. Figure 8 presents the optimized parameters corresponding to the generalized policy rules given in equation 2 for different values of s, which is assumed for this experiment to be known by the policymaker. If the natural rates of interest and unemployment are assumed to be known, then the efficient policy rule exhibits partial adjustment and a strong response to the unemployment gap, along with a response to inflation and the change in the unemployment rate. We now examine how the optimal policy responses are altered when the degree of mismeasurement is increased and this is known by the policymaker. First, the response to the unemployment gap diminishes sharply and approaches zero as the degree of uncertainty increases. Second, compensating for the reduced response to the unemployment gap, in the face of increased uncertainty the efficient rules call for larger responses to changes in the rate of unemployment. Third, the degree of inertia in the efficient rules increases as the degree of uncertainty rises, approaching the limiting value $\theta_f = 1$. In the limit, as the degree of uncertainty increases, the generalized rule collapses to the robust difference rule.
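In the limiting case $\theta_f = 1$ and $\theta_u = 0$, the natural rate estimates drop out of the rule entirely, which is why the generalized rule collapses to the robust difference rule. A sketch illustrating this (the parameterization below is a plausible stand-in for equation 2, which is not reproduced in the text, and the numerical inputs are invented):

```python
def generalized_rule(f_lag, pi, u_gap_hat, du, r_star_hat,
                     theta_f, theta_pi, theta_u, theta_du, pi_star=2.0):
    """Hypothetical parameterization of a generalized level/difference rule:
    f_t = theta_f*f_{t-1} + (1-theta_f)*(r*_hat + pi)
          + theta_pi*(pi - pi*) - theta_u*u_gap_hat - theta_du*du."""
    return (theta_f * f_lag + (1 - theta_f) * (r_star_hat + pi)
            + theta_pi * (pi - pi_star) - theta_u * u_gap_hat - theta_du * du)

# With theta_f = 1 and theta_u = 0, changing the natural rate estimates
# (r_star_hat, u_gap_hat) leaves the prescribed rate unchanged:
f1 = generalized_rule(4.0, 2.0, 0.5, -0.2, 2.0, 1.0, 0.5, 0.0, 1.0)
f2 = generalized_rule(4.0, 2.0, 1.5, -0.2, 3.5, 1.0, 0.5, 0.0, 1.0)
```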

[FIGURE 8 OMITTED]

The performance of optimized generalized rules is reported in figure 9, which repeats the experiments reported in figure 7 but uses optimized generalized policy rules. As in the case of the Taylor rules, the performance of the generalized rule optimized assuming no natural rate misperceptions deteriorates dramatically if natural rates are in fact mismeasured. In contrast, the rule optimized assuming s = 1 is quite robust to natural rate mismeasurement. As noted, this rule features a great deal of inertia and modest responses to estimates of $u^*$. The performance of the robust difference rule, as shown in figure 9, is invariant to the degree of mismeasurement and exceeds that of the generalized rule optimized assuming s = 1 for all values of s > 1.5.

[FIGURE 9 OMITTED]

The asymmetry in outcomes due to incorrect assessments, shown in figure 9, suggests that, when policymakers do not possess a precise estimate of the magnitude of misperceptions regarding the natural rates, it may be advisable to act as if the uncertainty they face is greater than their baseline estimates. We examine this issue in greater detail with an example shown in figure 10. To facilitate comparisons, the figure plots pairs of the policy responses, $\theta_u$ and $\theta_f$, corresponding to different values of a known degree of uncertainty (from figure 8). Note in particular the location of the efficient policies corresponding to s = 0, 1, and 2 and the limiting case of difference rules ("Robust policy" in the figure).

[FIGURE 10 OMITTED]

Consider the following problem of Bayesian uncertainty regarding s. Suppose that the policymaker has a diffuse prior with support [0,2] regarding the likely value of s. By construction, the baseline estimate of uncertainty is thus s = 1. As the figure shows, however, the efficient choice based on the optimization with the diffuse prior over s corresponds to a choice of $\theta_u$ and $\theta_f$ that is closer to the efficient choice under certainty with s = 2, the worst outcome in this distribution. In this sense a policymaker with a Bayesian prior over the likely degree of uncertainty he or she may face about the natural rates should act as if confident that the degree of uncertainty is greater than the baseline estimates. Of course, complete ignorance regarding the distribution of s leads to the robust control solution, which here corresponds to the limiting case of the robust difference rule (equation 3).
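The "act as if uncertainty exceeds the baseline" logic can be illustrated in the same static Brainard-style setting: because the quadratic loss depends on $s^2$, the Bayes-optimal response under a uniform prior on [0, 2] depends on $E[s^2]$, and it coincides with the certainty-optimal response for an effective s of $\sqrt{E[s^2]} > E[s] = 1$. A sketch (an invented one-equation loss, not the paper's dynamic model):

```python
import numpy as np

# loss(theta, s) = (1 - theta)^2 + theta^2 * s^2 (static gap-response example)
s_grid = np.linspace(0.0, 2.0, 2001)   # uniform prior on [0, 2], so E[s] = 1
thetas = np.linspace(0.0, 1.0, 1001)

exp_s2 = float(np.mean(s_grid ** 2))   # E[s^2] under the prior, approximately 4/3
bayes_loss = (1 - thetas) ** 2 + thetas ** 2 * exp_s2
theta_bayes = float(thetas[np.argmin(bayes_loss)])

theta_baseline = 0.5                   # certainty optimum at the baseline s = 1
s_effective = float(np.sqrt(exp_s2))   # the s whose certainty optimum matches theta_bayes
```

Here theta_bayes falls below the baseline optimum, and s_effective exceeds 1: the Bayesian policymaker behaves as if the degree of mismeasurement were larger than the prior mean.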

The precise parameterization of the robust difference rule for our model depends on the loss function parameters $\omega$ and $\psi$. As noted earlier, in our analysis thus far we have set $\omega$ = 0.2 and $\psi$ = 0.05, which can be interpreted as a "balanced" preference for output and inflation stability but one that exhibits relatively low concern for interest rate variability. For comparison, in table 6 we present alternative robust rules corresponding to different values of the loss function parameters: 0.1, 0.2, and 0.5 for $\omega$ and 0.05, 0.5, and 5.0 for $\psi$. Given $\psi$, higher values of $\omega$ correspond to a larger inflation response coefficient, $\theta_\pi$, with a relatively small effect on $\theta_{\Delta u}$. Given $\omega$, a greater concern for interest rate smoothing (a higher $\psi$) reduces both response coefficients. This leads to a noticeable reduction in the standard deviation of interest rate changes, but at the cost of greater variability in both inflation and the unemployment gap.

Robustness in Alternative Models

Thus far our analysis has been conditioned on the assumption that the baseline model of the economy that we estimated above offers a reasonable characterization of the workings of the economy in our sample, including, importantly, the role of expectations. This assumption may be critical for interpreting our policy evaluation analysis and for our finding that the simple difference policy rule we identify offers a useful and robust benchmark for policy analysis. Given that researchers and policymakers may hold different views about the most appropriate model for characterizing the role of expectations, and given the uncertainty associated with any estimated model, it is of interest to examine whether the basic insight regarding the robustness of difference rules in the face of unknown natural rates holds in alternative models. To that end we also examined two alternative models based on the same historical data as our baseline model but reflecting quite different views regarding the role for expectations: a new synthesis model in which economic outcomes depend much more critically on expectations than in our baseline model, and an accelerationist model in which the role of rational expectations is largely assumed away.

A New Synthesis Model

In the new synthesis model we examine, the lagged inflation and unemployment terms of equations 8 and 9 are absent, the short-term interest rate gap enters the unemployment equation, and there is no lag in the information structure regarding expectations (that is, expectations are formed at time t):

(15) $\pi_t = \phi_\pi\,\pi^e_{t+1} + \alpha_\pi\,u_t + e_{\pi,t}$

(16) $u_t = \phi_u\,u^e_{t+1} + \alpha_u\,r_t + e_{u,t}$

We calibrated this model to the 1969-2002 sample so that the characteristics of the underlying data are the same as in our baseline model. As is well known, this specification does not capture the dynamic behavior of the inflation and unemployment (or output gap) data very well when the shocks to the inflation and unemployment equations, $e_\pi$ and $e_u$, are serially uncorrelated. (64) Following Julio Rotemberg and Michael Woodford, Bennett McCallum, (65) and others, we therefore allowed the errors $e_\pi$ and $e_u$ to be serially correlated and estimated the model with this modification using the same data as in our baseline model, with the changes noted above. Because our unrestricted least squares estimate of $\alpha_u$ was essentially 0, and therefore inconsistent with the theoretical foundations of this model, we imposed a value for that parameter. We set $\alpha_u$ = 0.05, following the theoretically motivated calibration presented by McCallum based on a model of the output gap. (66) The resulting estimated form of this model is

(17) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(18) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Using these estimates and the associated covariance structure of the errors in this model, we computed efficient policy responses for the generalized rule (equation 2) without and with uncertainty regarding the natural rates, as with our baseline model. An interesting feature of the new synthesis model that differs from our baseline model is that, in the absence of uncertainty about the natural rates, the efficient policies are superinertial, that is, $\theta_f > 1$. (67) In the presence of uncertainty, of course, such policies also introduce policy errors from misperceptions about the natural rate of interest, similar to policies with $\theta_f < 1$. The only difference is that the sign of the error is reversed. Figure 11, which repeats for this model the experiments shown in figure 8 for our baseline model, confirms that, in the presence of greater uncertainty regarding the real-time estimates of the natural rates, the efficient policy again converges toward $\theta_f \to 1$ and $\theta_u \to 0$. Evidently, the difference rule in equation 3 represents the robust policy for dealing with natural rate uncertainty in this model as well as in the baseline model. This can also be confirmed in table 7, which compares the values of the loss function corresponding to the robust rule (equation 3) and the generalized rule (equation 2) optimized for s = 0. From the second row of the table it is evident that the cost of adopting the robust rule relative to the optimized one is modest when s = 0, and the benefits are considerable if the true level of uncertainty is s = 1 or higher. This is similar to the result indicated earlier for our baseline model, as shown in the first row of the table.

[FIGURE 11 OMITTED]

An Accelerationist Model

A key feature of the baseline and new synthesis models is the assumption of rational expectations. As noted above, difference rules perform reasonably well in those models even in the absence of natural rate misperceptions. In "backward-looking" models with adaptive expectations, however, difference rules generally perform poorly and may be destabilizing because of the instrument instability problem. Moreover, in such models the costs associated with responding to the change in the output gap or the unemployment gap, as opposed to their levels, tend to be much greater than in forward-looking models with rational expectations. To explore this sensitivity of policy to a different specification of expectations, we estimate a backward-looking model that imposes an accelerationist Phillips curve and assumes that expectations are unimportant for determining aggregate demand, with the exception of the real interest rate, where we retain the ex ante real rate of interest from our baseline model:

(19) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(20) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Figure 12, which parallels figures 8 and 11 for our baseline and new synthesis models, respectively, presents the simulated efficient response coefficients of the generalized rule in equation 2 for this model. Two findings are apparent. As in the baseline and new synthesis models, uncertainty regarding the natural rates raises the efficient degree of inertia in the policy rule and leads to a significant attenuation of the policy response to the unemployment gap. However, as uncertainty regarding the natural rates increases, the efficient policy for this model does not converge to the robust difference rule (equation 3) as quickly as in the other two models. Evidently, in a backward-looking world, there are costs from completely ignoring the estimated levels of the unemployment gap and the natural rate of interest, even when the uncertainty regarding natural rates is significant. The last row of table 7 confirms this. (68) However, even in this model our experiments suggest that policies should exhibit significant smoothing and an attenuated response to the unemployment gap.

[FIGURE 12 OMITTED]

As the last row of table 7 also indicates, even in this case the robust rule for this model performs better than the rule optimized under the assumption of no misperceptions when the true degree of misperceptions is as high as s = 3. However, this is a much higher threshold than that for our baseline and new synthesis models.

Robustness to Both Model and Natural Rate Uncertainty

McCallum and Taylor argue that monetary policy should be designed to perform across a wide range of reasonable models. (69) In this section we follow Levin, Wieland, and Williams and compute the optimized policy rule given priors over the three models discussed above. (70) For this experiment we assign equal weights to the three models and compute the optimal choice of parameters for the robust policy rule. The results of this exercise are reported in table 8, which follows a format similar to that of table 6, which was based on the baseline model alone. The third and fourth columns show the optimal rule parameters for the objective of minimizing the sum of the losses in the three models. The last three columns show the corresponding losses. Comparison of the two tables reveals that the optimal rule allowing for model uncertainty features slightly larger responses to the change in the unemployment rate, but the response to the inflation rate is from three to five times larger than in the baseline model. Although not shown in the table, the parameters of the generalized rule that accounts for model uncertainty lie between those of the baseline and accelerationist models.
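
The procedure just described, assigning equal weights to the models and choosing rule parameters to minimize the summed losses, can be sketched with a simple grid search. The quadratic loss surfaces below are hypothetical stand-ins for the three estimated models, not the paper's actual loss functions:

```python
import itertools
import numpy as np

# Hypothetical quadratic loss surfaces standing in for the three models;
# the minimizing parameters of each are assumptions for this sketch.
def loss_baseline(tp, tdu):        return (tp - 0.35)**2 + 0.1 * (tdu + 5.96)**2
def loss_new_synthesis(tp, tdu):   return (tp - 0.50)**2 + 0.1 * (tdu + 5.00)**2
def loss_accelerationist(tp, tdu): return (tp - 1.50)**2 + 0.1 * (tdu + 7.00)**2

MODELS = [loss_baseline, loss_new_synthesis, loss_accelerationist]

def robust_rule_parameters(grid_tp, grid_tdu, weights=None):
    """Choose (theta_pi, theta_du) minimizing the weighted average loss."""
    w = weights or [1.0 / len(MODELS)] * len(MODELS)  # equal weights, as in the text
    best_loss, best = float("inf"), None
    for tp, tdu in itertools.product(grid_tp, grid_tdu):
        avg = sum(wi * m(tp, tdu) for wi, m in zip(w, MODELS))
        if avg < best_loss:
            best_loss, best = avg, (tp, tdu)
    return best

tp, tdu = robust_rule_parameters(np.arange(0.0, 2.01, 0.05),
                                 np.arange(-8.0, -3.0, 0.05))
```

With equal weights and equal curvatures, the averaged-loss minimizer lands between the individual models' optima, illustrating how an accelerationist model with a high inflation-response optimum pulls the robust inflation coefficient well above the baseline value.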

Misperceptions and Historical Policy Outcomes

Our policy evaluation experiments highlight that overconfidence regarding the policymaker's ability to detect changes in the natural rates--that is, the pursuit of policies that are believed optimal under the false assumption that misperceptions regarding real-time assessments of the natural rates are smaller than they actually are--can have disastrous consequences for economic stability. The sensitivity of economic outcomes to policy design is potentially informative for understanding the historical performance of monetary policy, especially during episodes when natural rates changed significantly and real-time assessments of those rates were likely subject to substantial misperceptions. As an illustration, we perform two experiments comparing outcomes from the Taylor, optimized, and robust rules, designed to highlight some elements we find important for understanding the stagflationary experience of the 1970s and the disinflationary boom of the 1990s.

The 1970s

The stagflationary experience of the 1970s has proved a rich laboratory for understanding potential pitfalls in policy design. A number of plausible explanations that boil down to inherently "bad" policy have already been put forward for the dismal outcomes of that period: possible confusion of real and nominal interest rates, unstable responsiveness of policy to inflation, attempted exploitation of a Phillips curve that was misspecified to include a stable long-run trade-off between inflation and unemployment, and so forth. In our illustration we instead highlight the more subtle complication arising from comparing policies that, as already pointed out, would appear to be "good" under certain circumstances but have different degrees of sensitivity to the presence of misperceptions regarding the natural rates.

To set the stage, consider the evolution of perceptions regarding the natural rates of interest and unemployment that appears to have been an integral part of the 1970s experience. (We review some direct evidence from the historical record on the evolution of beliefs below.) To illustrate the misperceptions that we wish to consider for this experiment, figure 13 traces an example in which both natural rates increase by 1.5 percentage points over a period of 2 1/2 years. We assume that, at the beginning of the simulation, before the unexpected increases, policymakers know the correct levels of the natural rates. Because policymakers learn only gradually about the evolution of the natural rates once they unexpectedly rise, however, even these initially correct estimates give way to temporary but nonetheless persistent misperceptions. Given the average speed of learning implied by our baseline estimates of historical misperceptions in our sample, the 1.5-percentage-point increase shown by the solid lines in figure 13 results in the real-time estimates shown by the dotted lines. For both natural rates, the errors in real-time estimates--the difference between the true natural rate and the real-time estimate--gradually increase at first, to about 1 percentage point, and then dissipate slowly over a period of many years.
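
These learning dynamics can be sketched with a constant-gain updating rule; the gain of 0.07 per quarter is an illustrative assumption standing in for the estimated speed of learning, and the 1.5-point rise over ten quarters matches the experiment described above:

```python
import numpy as np

# Sketch of the misperception dynamics: the true natural rate rises by
# 1.5 points over 2 1/2 years (10 quarters) while the real-time estimate
# adjusts by a constant gain. The gain value is an assumption.
GAIN = 0.07  # fraction of the observed gap closed each quarter

def learning_path(T=60, rise=1.5, rise_quarters=10, start=5.0):
    true_path = np.full(T, start)
    # natural rate rises linearly by `rise` points over `rise_quarters`
    true_path[1:1 + rise_quarters] += np.linspace(0.0, rise, rise_quarters)
    true_path[1 + rise_quarters:] = start + rise
    est_path = np.empty(T)
    est_path[0] = start  # policymaker starts with the correct level
    for t in range(1, T):
        # real-time estimate moves partway toward the current true rate
        est_path[t] = est_path[t-1] + GAIN * (true_path[t] - est_path[t-1])
    return true_path, est_path

true_path, est_path = learning_path()
# positive misperception: true rate above the real-time estimate
misperception = true_path - est_path
```

Under these assumptions the misperception peaks at roughly 1 percentage point when the ramp ends and then decays geometrically at the learning rate, mirroring the "increase at first, then dissipate slowly" pattern in the text.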

[FIGURE 13 OMITTED]

The effects of these misperceptions on economic outcomes under the classic and the revised Taylor rules are compared in figure 14. The upper panel shows that, when policy follows the classic Taylor rule, natural rate misperceptions lead to a persistent rise in inflation, which peaks at 3 percentage points above the policymaker's objective. The bulk of this unfavorable outcome is due to the rule's strong response to an incorrectly estimated unemployment gap, which can be seen in the lower panel. As policymakers' perceptions of the natural rate lag behind reality, the policymaker forcefully, but on the basis of faulty estimates, attempts to stabilize the rate of unemployment at a level that is persistently too low. Throughout the simulation, the policymaker believes that the actual unemployment rate is above the natural rate, and policy actions impede the movement of the economy toward the true natural rate. The outcome is the modestly stagflationary experience shown in the figure. The increase in inflation is greater under the revised Taylor rule because that rule responds more strongly to the perceived unemployment gap.

[FIGURE 14 OMITTED]

The magnitude of the peak inflationary effect depends on the parameters of the policy rule, but as long as policy responds to natural rates, the effects are quite persistent. The top two panels of figure 15 show the responses from the generalized rule optimized under the assumption of no misperceptions. The rise in the inflation rate is nearly 7 percentage points at its peak, and even after seven years inflation is nearly 3 percentage points above target. The robust policy likewise cannot avoid the initial increase in inflation, as seen in the bottom two panels of the figure. However, because the robust policy is not guided by perceptions of the unemployment gap, but only by the evolution of inflation and changes in unemployment, policy does not impede the movement of the economy toward the true natural rates in the way the optimized policy does. Consequently, the increase in the natural rates leads to a much less persistent deviation of inflation from its target in this case (bottom left-hand panel).
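
The mechanism behind figures 14 and 15 can be illustrated with a deliberately simplified two-equation economy fed through a level (Taylor-type) rule that uses misperceived natural rates. The coefficients, the partial anchoring of inflation, and the learning gain are all illustrative assumptions, not the paper's estimated baseline model:

```python
import numpy as np

# Stylized sketch of the level-rule experiment: both natural rates step
# up, the policymaker's estimates lag via constant-gain learning, and the
# rule responds to the *perceived* gaps. All coefficients are assumptions.
ALPHA, BETA, GAMMA = 0.3, 0.6, 0.2   # Phillips slope, gap persistence, rate effect
MU = 0.8                             # partial anchoring of inflation
GAIN = 0.07                          # speed of learning about natural rates

def simulate_level_rule(T=80, step_at=5, step=1.0):
    p = np.zeros(T)      # inflation gap (inflation minus target)
    g = np.zeros(T)      # true unemployment gap (u minus true natural rate)
    m_u = np.zeros(T)    # misperception of the natural rate of unemployment
    m_r = np.zeros(T)    # misperception of the natural rate of interest
    x = np.zeros(T)      # real-rate gap (real rate minus true natural rate)
    for t in range(1, T):
        # both natural rates step up by `step`; estimates adjust gradually
        d = step if t == step_at else 0.0
        m_u[t] = (1 - GAIN) * (m_u[t-1] + d)
        m_r[t] = (1 - GAIN) * (m_r[t-1] + d)
        p[t] = MU * p[t-1] - ALPHA * g[t-1]
        g[t] = BETA * g[t-1] + GAMMA * x[t-1]
        # Level rule: the perceived gap overstates slack by m_u, and the
        # perceived natural rate of interest falls short of the truth by m_r.
        x[t] = 0.5 * p[t] - 1.0 * (g[t] + m_u[t]) - m_r[t]
    return p, g

p, g = simulate_level_rule()
```

In this sketch the rule eases against phantom slack, pushing true unemployment below its (higher) natural rate and generating an inflation overshoot that persists as long as the misperceptions do, which is the qualitative pattern in the upper panels of figure 15.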

[FIGURE 15 OMITTED]

The relevance of this comparison for explaining the events of the 1970s rests on two elements. The first is that misperceptions regarding the natural rate of unemployment, and to a lesser degree the natural rate of interest, significantly influenced policy. The second and perhaps more controversial element is that policymakers at the time actually operated in a way resembling the Taylor rule or our "optimal" policy approach, instead of a more robust policy.

Bearing on this are the fascinating intellectual debates regarding "activist" countercyclical stabilization policies and the observation that proponents of such policies appeared to have won the day at the turn of the 1970s. (71) The perceived triumph of activist policy is reflected in many writings, including those of Robert Heller and Arthur Okun, (72) and appeared to capture the hopes of both academic economists and policymakers across a wide spectrum of ideologies and backgrounds. One succinct accounting of the policy errors committed through this lens was offered by Herbert Stein, who reflected on policymakers' attempts to guide the economy to its "optimum feasible path" (73) at the turn of the 1970s by targeting "'the natural rate of unemployment,' which we thought to be 4 percent." (74) In contrast, our baseline estimates, as well as those of the Congressional Budget Office, suggest that the natural rate of unemployment at the beginning of the 1970s was nearly 6 percent. Stein's account is corroborated by a recent retrospective on Paul McCracken's service on the Council of Economic Advisers. (75) The view from the Federal Reserve suggests a similar picture. Shortly after he left the Federal Reserve, Arthur Burns, who had served as its chairman from 1970 to 1978, expressed his anguish over the deleterious effects of underestimating the natural rate of unemployment; like Stein, he noted that the initial estimate of 4 percent proved, retrospectively, to have been too low. (76) As Orphanides documents, (77) the related estimates of potential output and the output gap during the early 1970s proved, retrospectively, to have been exceedingly high.

Many issues complicated the measurement of the natural rate of unemployment in the early 1970s, including disagreements regarding the modeling of inflation dynamics and the Phillips curve, the meaning of "full employment," the proper accounting of demographics, the modeling of expectations, and so forth. Starting with its first volume in 1970, the first few years of the Brookings Papers are a valuable source documenting the debate on and evolution of views regarding the natural rate of unemployment. Indeed, at the very first meeting of the Brookings Panel, Okun and Nancy Teeters presented an analysis of the "full employment" surplus assuming that the appropriate definition of full employment was the 4 percent rate of unemployment widely accepted during the previous decade. (78) Robert Hall identified the "equilibrium level of unemployment" or "full employment unemployment" as the level that, "... if maintained permanently, would produce a steady rate of inflation of 3 or 4 percent per year" and noted that "[m]ost economists agree that this is somewhere between 4 and 5 percent unemployment." (79) Perry presented estimates of the shifting inflation-unemployment trade-off, adjusting for changes in the demographic composition of the labor force (what later became known as "Perry weighting") and for the dispersion of unemployment among age-sex groups in the labor force. (80) According to his estimates, (81) whereas an unemployment rate of about 4 percent had been consistent with a 3 percent annual increase in the consumer price index during the mid-1950s, by 1970 the unemployment rate would have had to be around 5 percent to be consistent with the same 3 percent rate of inflation.
Finally, in one of the earliest exercises of policy design based on an estimated econometric model at the Federal Reserve (and, as far as we are aware, the earliest such exercise using a model consistent with the natural rate hypothesis), William Poole presented experiments using the Federal Reserve's econometric model with two versions of a Phillips curve: a "standard model" (with a sloping "long-run" Phillips curve) and an "accelerationist model." (82) Poole's simulations using the standard model showed that inflation could be stabilized below 3 percent with a 4 percent rate of unemployment. In simulations using the accelerationist model the implicit "natural" rate of unemployment was 4.5 percent. Already in this work from 1970 and 1971 it is clear that estimates of the natural rate were beginning to rise from the 4 percent level that had prevailed during the 1960s. Nonetheless, the evidence is compelling that misperceptions regarding the natural rate of unemployment remained sizable at the turn of the 1970s.

Whereas such real-time estimates of the natural rate of unemployment are well documented, real-time estimates of the natural rate of interest are hard to come by. One source is the report prepared each year by the trustees of the Social Security system; for several decades this report has included projections of long-term interest rates. The forecast long-run real interest rate reported by the trustees rose from 2 1/2 percent in 1972 to 3 1/4 percent in 1975. Before 1972 only nominal rates were projected, and estimates of this rate rose by a full percentage point between 1969 and 1972. Given the relatively modest rise in inflation during that period, this rise in nominal rates can be interpreted as a significant increase in long-run real rates. Overall, this evidence provides some support for a significant increase in the perceived natural rate of interest over this period.

The 1990s

What Alan Blinder and Janet Yellen have called the "fabulous decade" arguably constitutes, in some respects, the exact opposite of the dismal experience of the 1970s. (83) During the 1990s the natural rate of unemployment apparently drifted downward, and significantly so. This lower level of the natural rate of unemployment went hand in hand with somewhat lower inflation; however, inflation remained more or less in line with policymakers' descriptions of their price stability objectives.

One possible difference from the experience of the 1970s is that natural rate misperceptions may have been smaller and less persistent in the more recent episode. Ball and Robert Tchaidze, for example, argue that the Federal Reserve's implicit NAIRU estimates may have fallen rapidly in the second half of the 1990s. (84) Even so, the record indicates the possibility of significant misperceptions. The transcripts of Federal Open Market Committee meetings for 1994 and 1995, for example, show that some members of the committee as well as Federal Reserve Board staff held the view that the natural rate of unemployment was around 6 percent at the time. By 2000 then-Governor Laurence Meyer was indicating that a range of 5 to 5 1/4 percent was a better estimate. (85) This points toward a nontrivial misperception, perhaps as great as 1 percentage point, at the middle of the decade. (86) Table 9 suggests similar revisions in responses from the Survey of Professional Forecasters as well as in estimates published by the Congressional Budget Office and the Council of Economic Advisers.

An alternative possibility is that, despite significant misperceptions regarding the natural rate of unemployment, economic outcomes were better because monetary policy was more robust to such errors than the policy framework in place during the 1970s. To highlight this possibility, figure 16 presents two alternative illustrations for this period, tracing the evolution of the economy following a reduction in the natural rate of unemployment under our optimized and under our robust policies. Here we assume that the natural rate of interest remains unchanged and that the change in the natural rate of unemployment has the same size and timing as that shown in the right-hand panels of figure 13, but opposite sign. Assuming the 1.5-percentage-point reduction in the natural rate of unemployment underlying the simulation, policy under the optimized rule would have led to deflation over this period, with inflation falling by almost 6 percentage points during the simulation and staying well below its initial value for many years. By contrast, our robust policy appears more successful in replicating the "Goldilocks"-like economic outcomes of this period.

[FIGURE 16 OMITTED]

Concluding Remarks

This paper has critically reexamined the usefulness of the natural rates of interest and unemployment in the setting of monetary policy. Our results suggest that underestimating the unreliability of real-time estimates of the natural rates may lead to policies that are very costly in terms of the stabilization performance of the economy. It is important to note that our critique does not necessarily imply any disagreement with the validity or usefulness of these concepts for understanding and describing historical macroeconomic relationships. Indeed, our analysis and conclusions are based entirely on models in which deviations from natural rates are the primary drivers of inflation and unemployment. Instead we argue that uncertainty about natural rates in real time recommends against relying excessively on these intrinsically noisy indicators when making monetary policy decisions. In that respect our critique echoes similar concerns voiced decades ago about the operational usefulness of policy based on natural rates--concerns also reflected, at least in part, in more recent discussions of monetary policy. (87)

A key aspect of natural rate measurement is the profound uncertainty regarding the degree of mismeasurement. Because the losses from underestimating measurement error exceed those from exaggerating it, Bayesian and robust control strategies indicate that the policy rule should be deliberately biased toward protecting against measurement error, responding only modestly to estimates of the natural rates of interest and unemployment. Indeed, in forward-looking models a "difference" policy rule, in which the change in the interest rate responds to the inflation rate and the change in the unemployment rate but not to the levels of the natural rates, performs nearly as well as more complicated rules that incorporate both level and difference features. Only in a backward-looking model do we find a strong argument for maintaining a nontrivial response to natural rates, but even in this model the basic conclusion of our analysis holds: natural rate uncertainty calls for very muted responses to both the natural rate of interest and the natural rate of unemployment relative to policy rules designed in the context of no measurement error. (88)
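
In rule form, using the coefficients reported in table 6 for the loss parameters [omega] = 0.2, [psi] = 0.05, the difference rule sets the change in the funds rate from the inflation gap and the change in unemployment alone; no natural rate estimate enters:

```python
# Difference ("robust") rule, equation 3 in the text: the *change* in the
# federal funds rate responds to the inflation gap and the change in
# unemployment, so no natural rate estimate is required.
# Coefficients are those reported in table 6 for omega = 0.2, psi = 0.05.
THETA_PI = 0.35
THETA_DU = -5.96

def robust_rule(i_prev, inflation, target, u, u_prev):
    """Return the new funds rate implied by the difference rule."""
    return i_prev + THETA_PI * (inflation - target) + THETA_DU * (u - u_prev)

# Example: inflation one point above target while unemployment is rising
# by 0.1 point; the rise in unemployment outweighs the inflation gap, so
# the rule cuts the funds rate on net.
new_rate = robust_rule(i_prev=4.0, inflation=3.0, target=2.0, u=5.1, u_prev=5.0)
```

Because the rule is written in first differences, mismeasured levels of the natural rates of interest and unemployment simply never appear in the policy setting, which is the source of its robustness.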

The historical experiences of the 1970s and the late 1990s provide insights into the design of monetary policy in light of natural rate uncertainty. In the earlier episode, arguably, policymakers mistakenly held to the belief that the natural rate of unemployment was lower than we now (with hindsight) believe it was, and they actively sought to stabilize unemployment at that level. The result was rising inflation and eventually stagflation. In the 1990s the reverse shock took place, but inflation remained relatively stable.
Table 1. Retrospective Estimates of the Natural Rate of Unemployment,
Selected Years, 1960-2000

Percent

Source or method        1960      1970      1980      1990     2000

Congressional Budget
  Office (2002) (a)      5.5       5.9       6.2       5.9      5.2
Gordon (2002) (a)        5.6       6.3       6.3       6.2      5.0
Ball and Mankiw
  method (b)             5.0       6.0       6.9       6.2      4.5
Staiger, Stock, and
  Watson (2002) (a)      5.8       4.7       7.7       6.3      4.5
Kalman filter (b)         --       5.7       6.4       5.8      5.0
Brainard and Perry
  (2000) (a)             3.8       4.7       9.8       5.8      3.8 (c)
Shimer (1998) (a)        5.3       6.5       7.1       5.9      5.9
Band-pass filter,
  8-year window (d)      6.0       4.2       7.3       5.9      4.9
Band-pass filter,
  15-year window (e)     5.6       4.4       7.9       6.3      5.0
Hodrick-Prescott
  filter, [lambda] =
  1,600 (b)              5.9       4.6       7.5       6.1      4.5
Hodrick-Prescott
  filter, [lambda] =
  25,600 (b)             5.3       5.0       7.4       6.4      4.6

Memoranda:

Median of estimates      5.6       5.0       7.3       6.1      4.9
Range of estimates     3.8-5.9   4.2-6.5   6.2-9.8   5.8-6.4   3.8-5.9
Actual unemployment
  rate                   5.5       5.0       7.2       5.6      4.0

Sources: Literature cited and authors' calculations.

(a.) Estimates are taken from the indicated source; Shimer estimates
are from updates provided by Robert Shimer.

(b.) Estimates are authors' calculations; Ball and Mankiw results are
based on a method described in Ball and Mankiw (2002).

(c.) Estimate is for 1998.

(d.) Following Baxter and King (1999) and Christiano and Fitzgerald
(forthcoming).

(e.) Following Staiger, Stock, and Watson (2002).

Table 2. Forecast Errors of Alternative Natural Rate-Based and
Autoregressive Methods

                                          Standard error of the
                                              regression (a)

                                    1-quarter   4-quarter   8-quarter
Method                               horizon     horizon     horizon

Forecasting inflation (b)

Constant natural rate of
  unemployment (c)                    1.11        1.12        1.74
Kalman filter (d)                     1.10        1.14        1.80
Ball and Mankiw method (e)            1.14        1.11        1.73
Band-pass filter, 8-year window       1.10        1.13        1.78
Band-pass filter, 15-year window      1.11        1.16        1.74
Hodrick-Prescott filter,
  [lambda] = 1,600                    1.13        1.13        1.79
Hodrick-Prescott filter,
  [lambda] = 25,600                   1.14        1.16        1.80
AR(4)                                 1.18        1.24        1.92

Forecasting unemployment rate (f)

Constant natural rate of
  interest (c)                        0.26        0.55        1.10
Kalman filter (d)                     0.25        0.52        1.07
Laubach and Williams methods (g)      0.26        0.54        1.11
Band-pass, 8-year window              0.26        0.53        1.09
Band-pass, 15-year window             0.25        0.52        1.06
Hodrick-Prescott filter,
  [lambda] = 1,600                    0.26        0.54        1.07
Hodrick-Prescott filter,
  [lambda] = 25,600                   0.25        0.51        1.03
AR(2)                                 0.26        0.55        1.12

Source: Authors' regressions as described below.

(a.) The sample period is 1970:1-2002:2. For the one-quarter horizon
the forecast rate is that in the next quarter; for the four-quarter
horizon it is the average of the next four quarters; for the
eight-quarter horizon it is the average of the subsequent four
quarters.

(b.) All except the AR(4) equation include four lags of inflation, one
lag of the change in the unemployment rate, and two lags of the
unemployment gap.

(c.) For the constant natural rate case, no natural rate estimate is
included.

(d.) Estimates are based on the bivariate systems described in the
text.

(e.) Estimates are based on a method described in Ball and Mankiw
(2002).

(f.) All except the AR(2) equation include two lags of the unemployment
rate and one lag of the four-quarter moving average of the real
interest rate gap.

(g.) Estimates are based on a method described in Laubach and Williams
(forthcoming).

Table 3. Misperceptions of the Natural Rates and Their Persistence
Assuming the Economic Model Is Known (a)

                          Standard
                        deviation of      Persistence measures
                         difference
                          between                        Standard
                       real-time and   Persistence       error of
                       retrospective   coefficient      regression
Method or source         estimates       ([rho])     ([[sigma].sub.v])

Natural rate of
  unemployment

Kalman filter               0.66           0.95            0.21
Ball and Mankiw
  method                    0.58           0.97            0.14
Band-pass filter,
  8-year window             0.52           0.89            0.23
Band-pass filter,
  15-year window            0.61           0.92            0.23
Hodrick-Prescott
  filter, [lambda]
  = 1,600                   0.75           0.97            0.18
Hodrick-Prescott
  filter, [lambda]
  = 25,600                  0.78           0.98            0.12

Natural rate of
  interest

Kalman filter               1.44           0.93            0.55
Laubach and Williams
  method                    0.90           0.91            0.38
Band-pass filter,
  8-year window             1.04           0.92            0.42
Band-pass filter,
  15-year window            1.34           0.96            0.41
Hodrick-Prescott
  filter, [lambda]
  = 1,600                   1.26           0.96            0.37
Hodrick-Prescott
  filter, [lambda]
  = 25,600                  1.70           0.99            0.25

Source: Authors' calculations.

(a.) For each method the real-time misperception is defined as the
difference between the real-time and the retrospective estimate of the
natural rate. Estimates are those of the authors for this paper except
where indicated otherwise. The sample period for these statistics is
1969:1-1998:2.

Table 4. Misperceptions of the Natural Rates Allowing for Model
Uncertainty

                                    Frequency distribution based on
                                  alternative measures of natural rate
                                           misperceptions (a)

                                                  25th
Statistic                            Minimum   percentile   Median

Natural rate of unemployment
Standard deviation                     0.48       0.63       0.75
Persistence coefficient ([rho])        0.89       0.95       0.96

Natural rate of interest
Standard deviation                     0.90       1.44       1.96
Persistence coefficient ([rho])        0.91       0.96       0.98

                                        75th
Statistic                            percentile   Maximum

Natural rate of unemployment
Standard deviation                     1.04         1.34
Persistence coefficient ([rho])        0.97         0.99

Natural rate of interest
Standard deviation                     2.84         3.24
Persistence coefficient ([rho])        0.98         0.99

Source: Authors' calculations.

(a.) The sample is the thirty-six alternative measures of natural rate
misperceptions corresponding to all possible pairwise combinations of
the six methods listed in each panel of table 3. Each of the two
statistics is computed separately.

Table 5. Macroeconomic Performance under Alternative Policy Rules and
Degrees of Natural Rate Misperception

                                 Rule parameter (b)

Rule and
misperception      [[theta]   [[theta].   [[theta]   [[theta].sub
index (a)          .sub.f]    sub.[pi]]   .sub.u]    .[DELTA]u]

Classic
Taylor rule
  s = 0              0.00       0.50        -1.00        0.00
  s = 1              0.00       0.50        -1.00        0.00
  s = 2              0.00       0.50        -1.00        0.00
  s = 3              0.00       0.50        -1.00        0.00

Revised
Taylor rule
  s = 0              0.00       0.50        -2.00        0.00
  s = 1              0.00       0.50        -2.00        0.00
  s = 2              0.00       0.50        -2.00        0.00
  s = 3              0.00       0.50        -2.00        0.00

Taylor rule
optimized
for s = 0
  s = 0              0.00       0.31        -3.81        0.00
  s = 1              0.00       0.31        -3.81        0.00
  s = 2              0.00       0.31        -3.81        0.00
  s = 3              0.00       0.31        -3.81        0.00

Taylor rule
optimized
for s = 1
  s = 0              0.00       1.37        -1.23        0.00
  s = 1              0.00       1.37        -1.23        0.00
  s = 2              0.00       1.37        -1.23        0.00
  s = 3              0.00       1.37        -1.23        0.00

Generalized
rule optimized
for s = 0
  s = 0              0.72       0.26        -1.83       -2.39
  s = 1              0.72       0.26        -1.83       -2.39
  s = 2              0.72       0.26        -1.83       -2.39
  s = 3              0.72       0.26        -1.83       -2.39

Generalized
rule optimized
for s = 1
  s = 0              0.97       0.39        -0.23       -5.39
  s = 1              0.97       0.39        -0.23       -5.39
  s = 2              0.97       0.39        -0.23       -5.39
  s = 3              0.97       0.39        -0.23       -5.39

Robust rule
  s = [infinity]     1.00       0.35         0.00       -5.96

                              Standard deviation (c)

Rule and                                           Loss (d)
misperception                                   ([omega] = 0.2,
index (a)           u-u *     [pi]   [DELTA]f    [psi] = 0.05)

Classic
Taylor rule
  s = 0             0.81      2.14     2.83           1.84
  s = 1             0.88      3.67     2.88           3.73
  s = 2             1.01      6.11     3.38           8.85
  s = 3             1.18      8.72     4.15          17.18

Revised
Taylor rule
  s = 0             0.71      2.03     2.89           1.64
  s = 1             0.77      4.13     2.91           4.32
  s = 2             0.91      7.28     3.56          11.89
  s = 3             1.09     10.57     4.59          24.36

Taylor rule
optimized
for s = 0
  s = 0             0.61      2.05     2.83           1.54
  s = 1             0.71      7.15     3.09          11.11
  s = 2             0.94     13.64     4.54          38.94
  s = 3             1.22     20.22     6.41          85.05

Taylor rule
optimized
for s = 1
  s = 0             0.73      1.86     4.25           2.02
  s = 1             0.79      2.07     4.90           2.56
  s = 2             0.82      2.50     4.94           3.01
  s = 3             0.86      3.05     5.11           3.76

Generalized
rule optimized
for s = 0
  s = 0             0.62      1.82     2.23           1.23
  s = 1             0.70      4.49     2.32           4.71
  s = 2             0.95      8.36     3.01          15.16
  s = 3             1.27     12.35     4.00          32.58

Generalized
rule optimized
for s = 1
  s = 0             0.66      1.94     2.45           1.40
  s = 1             0.66      1.95     2.42           1.40
  s = 2             0.66      2.08     2.40           1.50
  s = 3             0.66      2.32     2.40           1.71

Robust rule
  s = [infinity]    0.66      2.01     2.49           1.46

Source: Authors' regressions described in the text.

(a.) s indexes the magnitude of policymakers' misperception of the true
natural rates.

(b.) Parameters measure policymakers' response to the lagged federal
funds rate, the inflation gap, the unemployment gap, and the change in
unemployment, respectively.

(c.) Standard deviation of the unemployment gap, the inflation rate,
and the change in the federal funds rate, respectively.

(d.) Loss due to variation in inflation from its target and in
unemployment from its natural rate, as calculated from equation 12 in
the text, where [omega] and [psi] measure, respectively, policymakers'
preferences for each type of variation.

Table 6. Robust Policy Rule Parameters and Associated Performance under
Alternative Policymaker Preferences (a)

                     Rule parameter (b)         Standard deviation

Loss parameters   [[theta].   [[theta].sub.
[omega], [psi]    sub.[pi]]     [DELTA]u]     u-u *   [pi]   [DELTA]f

0.5, 0.05            0.57         -6.29        0.67   1.94     2.78
0.5, 0.50            0.25         -3.56        0.82   2.22     1.77
0.5, 5.00            0.13         -2.43        1.05   2.67     1.48
0.2, 0.05            0.35         -5.96        0.66   2.01     2.49
0.2, 0.50            0.17         -3.34        0.85   2.32     1.66
0.2, 5.00            0.12         -2.34        1.09   2.76     1.46
0.1, 0.05            0.24         -5.79        0.65   2.08     2.36
0.1, 0.50            0.14         -3.25        0.87   2.38     1.62
0.1, 5.00            0.11         -2.30        1.11   2.80     1.46

Source: Authors' calculations.

(a.) See table 5 for definitions of parameters and performance
measures.

(b.) Parameters of the robust rule in equation 3 in the text.

Table 7. Performance under Optimized and under Robust Rules for
Alternative Economic Models

                       Loss when policy follows: (a)

                                Generalized Taylor rule
                                  optimized for s = 0

                   Robust     True    True    True    True
Model             rule (b)   s = 0   s = 1   s = 2   s = 3

Baseline            1.46      1.23    4.71   15.16   32.58
New synthesis       0.63      0.56    0.69    1.02    1.56
Accelerationist     5.13      2.19    2.53    3.54    5.24

Source: Authors' calculations.

(a.) Loss as calculated by equation 12 in the text.

(b.) Equation 3 in the text.

Table 8. Robust Policy Rules across Alternative Economic Models (a)

                              Rule parameter (b)
Loss parameters
[omega], [psi]    [[theta].sub.[pi]]   [[theta].sub.[DELTA]u]

0.5, 0.05                1.56                  -7.13
0.5, 0.50                0.84                  -4.23
0.5, 5.00                0.56                  -3.21
0.2, 0.05                1.28                  -7.85
0.2, 0.50                0.76                  -4.41
0.2, 5.00                0.54                  -3.26
0.1, 0.05                1.15                  -8.19
0.1, 0.50                0.72                  -4.49
0.1, 5.00                0.53                  -3.28

                         Loss when true model is: (c)

Loss parameters   Baseline   New synthesis   Accelerationist
[omega], [psi]      model        model            model

0.5, 0.05           2.89         1.12             5.45
0.5, 0.50           5.84         2.20            10.19
0.5, 5.00          24.21         9.61            32.06
0.2, 0.05           1.88         0.74             5.27
0.2, 0.50           4.60         1.84             9.73
0.2, 5.00          22.55         9.32            30.72
0.1, 0.05           1.53         0.60             5.14
0.1, 0.50           4.17         1.72             9.51
0.1, 5.00          21.98         9.23            30.22

Source: Authors' calculations.

(a.) See table 5 for definitions of parameters and performance
measures.

(b.) Parameters of the robust rule (equation 3 in the text) chosen to
minimize the expected loss for the indicated values of the loss
parameters, when the true model is unknown and each of the three models
is assigned equal likelihood of being the true model.

(c.) As calculated by equation 12 in the text.

Table 9. Estimates of the Natural Rate of Unemployment, 1995-2002

Percent

             Survey of
            Professional             Congressional
          Forecasters (a)            Budget Office          Council of
                                                             Economic
Year    Low   Median   High   Real-time (b)   Current (c)  Advisers (d)

1995    --      --      --         6.0            5.3        5.5-5.8
1996   5.00    5.65    6.00        5.8            5.2          5.7
1997   4.50    5.25    5.88        5.8            5.2          5.5
1998   4.50    5.30    5.80        5.8            5.2          5.4
1999   4.13    5.00    5.60        5.6            5.2          5.3
2000   4.00    4.50    5.00        5.2            5.2          5.2
2001   3.50    4.88    5.50        5.2            5.2          5.1
2002   3.80    5.10    5.50        5.2            5.2          4.9

Sources: Federal Reserve Bank of Philadelphia; Congressional Budget
Office, The Budget and Economic Outlook, various years; Congressional
Budget Office (2002); Economic Report of the President, various years.

(a.) Responses are those from the third-quarter survey in the indicated
year.

(b.) Estimates are from The Budget and Economic Outlook published in
the indicated year (usually in January).

(c.) Estimates are from Congressional Budget Office (2002).

(d.) Estimates are from the Economic Report of the President published
in the indicated year (usually in February) and reflect either explicit
references to a NAIRU estimate or, when no explicit reference appears,
the unemployment rate at the end of the long-term economic forecast
presented in the report.


We have benefited from presentations of earlier drafts at the European Central Bank, the Deutsche Bundesbank, The Johns Hopkins University, and the University of California, Santa Cruz. This research project has benefited from discussions with Flint Brayton, Richard Dennis, Thomas Laubach, Andrew Levin, David Lindsey, Jonathan Parker, Michael Prell, David Reifschneider, John Roberts, Glenn Rudebusch, Robert Tetlow, Bharat Trehan, Simon van Norden, Volker Wieland, and Janet Yellen. We thank Mark Watson, Robert Gordon, and Robert Shimer for kindly providing us with updated estimates. Kirk Moore provided excellent research assistance. Any remaining errors are the sole responsibility of the authors. The opinions expressed are those of the authors and do not necessarily reflect the views of the Board of Governors of the Federal Reserve System or of the management of the Federal Reserve Bank of San Francisco.

(1.) Williams (1931, p. 578).

(2.) This definition leaves open the question of the length of the horizon over which one defines inflation stability. Rotemberg and Woodford (1999), Woodford (forthcoming), and Neiss and Nelson (2001), among others, consider definitions of the natural rates in which inflation is constant in every period, whereas many other authors (cited later in this paper) examine estimates of a lower frequency, or "trend" natural rates.

(3.) Friedman (1968, p. 10).

(4.) Cassel (1928, p. 518).

(5.) Wicksell (1898/1936, p. 106).

(6.) Staiger, Stock, and Watson (1997a); see also Staiger, Stock, and Watson (1997b) and Laubach (2001).

(7.) Orphanides and van Norden (2002); see also Lansing (2002).

(8.) Laubach and Williams (forthcoming).

(9.) Brainard and Perry (2000, p. 69).

(10.) Staiger, Stock, and Watson (1997a, p. 239).

(11.) This literature includes Orphanides (1998, 2001, 2002a), Smets (2002), Wieland (1998), Orphanides and others (2000), McCallum (2001), Rudebusch (2001, 2002), Ehrmann and Smets (2002), and Nelson and Nikolov (2002).

(12.) See Swanson (2000) and Svensson and Woodford (forthcoming) for recent expositions of certainty equivalence in the absence of any model uncertainty. Hansen and Sargent (2002) offer a modern treatment of robust control in the presence of possible model misspecification.

(13.) Taylor (1993).

(14.) In what follows, we assume that an Okun's law coefficient of 2 is appropriate for mapping the output gap onto the unemployment gap. This is significantly lower than Okun's original suggestion of about 3.3. Recent views, as reflected in the work by various authors, place this coefficient in the 2 to 3 range.

(15.) Taylor (1999b).

(16.) Bryant, Hooper, and Mann (1993).

(17.) The contributions in Taylor (1999a), as reviewed in Taylor (1999b), provided additional support for this finding.

(18.) This experience is discussed in Orphanides (2000a, 2000b, 2002a).

(19.) This modification parallels that made by McCallum (2001), Orphanides (2000b), Orphanides and others (2000), Leitemo and Lonning (2002), and others, who have argued in favor of policy rules that respond to the growth rate of output rather than the output gap when real-time estimates of the natural rate of output are prone to measurement error.

(20.) Interestingly, as Woodford (1999) has shown, the optimal policy from a "timeless perspective" in the purely forward-looking "new synthesis" model responds to the change in the output gap, but not to its level.

(21.) Including Williams (1999), Levin, Wieland, and Williams (1999, forthcoming), and Rotemberg and Woodford (1999).

(22.) Phillips (1954).

(23.) Rotemberg and Woodford (1999).

(24.) Policy rules similar to equation 2 have been found in earlier studies to offer a simple characterization of historical monetary policy in the United States over the past few decades (Orphanides, 2002b; Orphanides and Wieland, 1998; McCallum and Nelson, 1999; Levin, Wieland, and Williams, 1999, forthcoming).

(25.) This specification is similar to those examined by Judd and Motley (1992) and Fuhrer and Moore (1995b), in which the change in the short-term rate responds to growth in nominal income or to inflation, respectively.

(26.) Shimer (1998); Katz and Krueger (1999); Ball and Mankiw (2002).

(27.) These are discussed in Laubach and Williams (forthcoming).

(28.) Staiger, Stock, and Watson (1997a).

(29.) Staiger, Stock, and Watson (1997a).

(30.) Laubach and Williams (forthcoming); Rudebusch (2001).

(31.) Hodrick and Prescott (1997); Baxter and King (1999).

(32.) Rotemberg (1999).

(33.) Christiano and Fitzgerald (forthcoming).

(34.) Staiger, Stock, and Watson (2002).

(35.) Staiger, Stock, and Watson (1997a, 2002); Gordon (1998).

(36.) In the measurement equation, the inflation rate depends on lags of inflation (with the coefficients restricted to sum to 1), relative oil and nonoil import price inflation, and the unemployment gap. We apply Stock and Watson's (1998) median unbiased estimator for the signal-to-noise ratio and estimate the remaining parameters by maximum likelihood over the sample period 1969:1-2002:2.
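The full unobserved-components estimation in this footnote is elaborate, but its core device, a Kalman filter extracting a random-walk natural rate with a fixed signal-to-noise ratio, can be sketched in a few lines. The sketch below is an illustration only: it infers the natural rate directly from a synthetic unemployment series rather than through the Phillips-curve measurement equation, and the 0.05 ratio and variance values are arbitrary stand-ins, not the authors' estimates.

```python
import numpy as np

def natural_rate_kalman(u, signal_to_noise=0.05, var_obs=1.0):
    """Real-time (one-sided) estimate of a random-walk natural rate:
    u_t = u*_t + cycle noise,  u*_t = u*_{t-1} + shock.
    `signal_to_noise` = var(shock)/var(noise), held fixed here as a
    stand-in for the Stock-Watson median-unbiased estimate."""
    var_state = signal_to_noise * var_obs
    est, p = u[0], var_obs           # initialize at the first observation
    path = []
    for obs in u:
        p = p + var_state            # predict: the random-walk state drifts
        k = p / (p + var_obs)        # Kalman gain
        est = est + k * (obs - est)  # update with the current observation
        p = (1.0 - k) * p
        path.append(est)
    return np.array(path)

# synthetic data: natural rate drifts from 6 to 5 percent, cycle noise on top
rng = np.random.default_rng(0)
true_star = np.linspace(6.0, 5.0, 120)
u = true_star + rng.normal(0.0, 0.4, 120)
u_star_hat = natural_rate_kalman(u)
```

Because the filter is one-sided, `u_star_hat` is a real-time estimate of the kind the paper argues policymakers must rely on; a two-sided (smoothed) pass over the same data would revise it after the fact.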

(37.) Ball and Mankiw (2002).

(38.) Laubach and Williams (forthcoming).

(39.) In two papers Bomfim uses other approaches to estimate the natural rate of interest. Bomfim (2001) uses yields on inflation-indexed bonds to estimate investors' view of the natural rate of interest; unfortunately, because these securities have only existed in the United States for a relatively short time, we have scant time-series evidence using this approach. In earlier work Bomfim (1997) estimated a time-varying natural rate of interest using the Federal Reserve Board's MPS model.

(40.) Sargent (1971).

(41.) Modigliani and Papademos (1975, p. 145).

(42.) Brainard and Perry (2000).

(43.) Congressional Budget Office (2001, 2002).

(44.) Staiger, Stock, and Watson (2002); Gordon (2002).

(45.) Shimer (1998).

(46.) See, for example, St. Amant and van Norden (1997), Christiano and Fitzgerald (forthcoming), Orphanides and van Norden (2002), and van Norden (2002).

(47.) Laubach and Williams (forthcoming). They construct the real interest rate using the inflation rate of personal consumption expenditure prices; we have adjusted their natural rate estimates to place them on the basis of GDP price inflation.

(48.) However, the suggested forecast improvement from including the unemployment gap is based on within-sample performance. The usefulness of unemployment or output gap estimates for out-of-sample forecasts of inflation is much less clear (Stock and Watson, 1999; Orphanides and van Norden, 2001).

(49.) On the new synthesis model see Goodfriend and King (1997), Rotemberg and Woodford (1999), Clarida, Gali, and Gertler (1999), and McCallum and Nelson (1999); models with intrinsic inflation and unemployment inertia include Fuhrer and Moore (1995a), Batini and Haldane (1999), and Smets (2000).

(50.) Roberts (1997, 2001); Rudebusch (2002).

(51.) Romer and Romer (2000) follow a similar procedure when comparing Federal Reserve Board Green Book forecasts with the data.

(52.) Zarnowitz and Braun (1993); Croushore (1993); Croushore and Stark (2001).

(53.) For example, s = 2 approximately corresponds to the case of a policymaker who may incorrectly rely on the HP filter (with [lambda] = 1,600) for real-time estimates of the natural rates when the true process continues to be described by our two-sided Kalman filter. In terms of the policy evaluations we report later on, we confirmed that results using s = 2 with the Kalman filter errors are very similar to those based on these misspecified HP-filter errors. This suggests that our approach of summarizing the magnitude of misperceptions by a single parameter, s, captures the key implications of policymakers' misspecification of the natural rate process.

(54.) Anderson and Moore (1985); Blanchard and Kahn (1980). We abstract from the complications arising from imperfections in the formation of expectations (Orphanides and Williams, 2002). For simplicity, we also abstract from errors in within-quarter observations of the rates of inflation and unemployment.

(55.) See Levin, Wieland, and Williams (1999) for a detailed discussion.

(56.) We assume that the inflation target is sufficiently above zero to minimize issues related to the zero bound on interest rates and other nonlinearities associated with very low inflation or deflation (Akerlof, Dickens, and Perry, 1996; Orphanides and Wieland, 1998; Reifschneider and Williams, 2000).

(57.) This result is consistent with the findings reported in the studies collected in Taylor (1999a) and elsewhere.

(58.) Orphanides (2000b).

(59.) As reported by Orphanides (1998), Smets (2000), Rudebusch (2001, 2002), McCallum (2001), Ehrmann and Smets (forthcoming), and others.

(60.) Swanson (2000) and Svensson and Woodford (forthcoming) offer recent expositions.

(61.) Brainard (1967).

(62.) Levin, Wieland, and Williams (1999); see also Levin and Williams (2002).

(63.) To gain some insight into the breakdown of the traditional certainty equivalence results in the presence of filter uncertainty, consider the simple static problem of minimizing the expected squared value of the variable y = x - c, where x is a random variable and c is the policy control. If x is observed, the solution is trivial: set c = x. Suppose instead, however, that x is not directly observable but must be inferred from the variable z = [xi]x + [eta]. Let x and [eta] be independently and normally distributed random variables with zero mean and constant and known variances [[sigma].sup.2.sub.x] and [[sigma].sup.2.sub.[eta]] = [[bar.[sigma]].sup.2.sub.[eta]], respectively, and without loss of generality let [xi] = 1. Then, if all these parameters are known, certainty equivalence applies and the optimal control is c = E(x|z) = [kappa]z, where [kappa] = [[sigma].sup.2.sub.x]/([[sigma].sup.2.sub.x] + [[bar.[sigma]].sup.2.sub.[eta]]) is the optimal filter applied to z. Next, to illustrate filter uncertainty, suppose that, instead of being fixed and known, [[sigma].sub.[eta]] and [xi] are independently drawn with equal probabilities from {[[bar.[sigma]].sub.[eta]] - [S.sub.[eta]], [[bar.[sigma]].sub.[eta]] + [S.sub.[eta]]} and {1 - [S.sub.[xi]], 1 + [S.sub.[xi]]}, respectively. In this case, if we consider the optimal linear policy c = [theta]z, the optimal choice of [theta] is given by [theta] = [[sigma].sup.2.sub.x]/[(1 + [S.sup.2.sub.[xi]])[[sigma].sup.2.sub.x] + [[bar.[sigma]].sup.2.sub.[eta]] + [S.sup.2.sub.[eta]]]. Note that [theta] = [kappa] for [S.sub.[xi]] = [S.sub.[eta]] = 0 but that [theta] is strictly decreasing in both [S.sub.[xi]] and [S.sub.[eta]]. Thus the optimal linear policy attenuates the response relative to that implied by assuming certain and known [[sigma].sub.[eta]] and [xi].
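The closed-form expressions in this footnote can be checked numerically. The sketch below (an illustration with arbitrary parameter values, not from the paper) draws the signal and noise exactly as the footnote describes and recovers the optimal linear coefficient as the least-squares slope of x on z, which equals E[xz]/E[z-squared]:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_x, sigma_eta_bar = 1.0, 1.0   # arbitrary illustrative values
S_xi, S_eta = 0.5, 0.5               # parameter-uncertainty half-widths

# closed forms from the footnote
kappa = sigma2_x / (sigma2_x + sigma_eta_bar**2)
theta = sigma2_x / ((1 + S_xi**2) * sigma2_x + sigma_eta_bar**2 + S_eta**2)

# Monte Carlo: draw x, xi, sigma_eta, eta, then form z = xi*x + eta;
# the least-squares slope of x on z is the optimal linear coefficient
n = 200_000
x = rng.normal(0.0, np.sqrt(sigma2_x), n)
xi = rng.choice([1 - S_xi, 1 + S_xi], n)
sd_eta = rng.choice([sigma_eta_bar - S_eta, sigma_eta_bar + S_eta], n)
z = xi * x + rng.normal(0.0, 1.0, n) * sd_eta
theta_mc = (x @ z) / (z @ z)
```

With these values [kappa] = 0.5 and [theta] = 0.4, so the simulated slope confirms the attenuation: uncertainty about the filter parameters shrinks the optimal response below the certainty-equivalent one.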

(64.) Estrella and Fuhrer (forthcoming).

(65.) Rotemberg and Woodford (1999); McCallum (2001).

(66.) McCallum (2001); see Nelson and Nikolov (2002) for further discussion.

(67.) Rotemberg and Woodford (1999) explore this in detail.

(68.) In backward-looking models this is a result that generally applies to price-level and nominal income targeting rules, which, as noted earlier, are related to the robust rule we examine here. For example, using a similar model (with some forward-looking behavior), Rudebusch (2002) finds that optimized Taylor rules dominate some versions of nominal income targeting rules even in the presence of mismeasurement of the natural rate of output, whereas Orphanides and others (2000), using a more forward-looking model, find that rules targeting output growth are more robust in that case.

(69.) McCallum (1988); Taylor (1999b).

(70.) Levin, Wieland, and Williams (forthcoming).

(71.) See Orphanides (2000a, 2000b) for a historical review.

(72.) Heller (1966); Okun (1970).

(73.) Stein (1984, p. 171).

(74.) Stein (1984, p. 19).

(75.) Jones (2000).

(76.) Burns (1979).

(77.) Orphanides (2000a, 2000b).

(78.) Okun and Teeters (1970).

(79.) Hall (1970, p. 370).

(80.) Perry (1970).

(81.) Perry (1970, figure 2, p. 432).

(82.) Poole (1971).

(83.) Blinder and Yellen (2001).

(84.) Ball and Tchaidze (2002).

(85.) Meyer (2000).

(86.) Transcripts and other documents relating to Federal Open Market Committee meetings are released with a five-year lag and are therefore not yet available for years after 1996.

(87.) For example, Federal Reserve Chairman Greenspan (2000) recently pointed out that "However one views the operational relevance of a Phillips curve or the associated NAIRU (the nonaccelerating inflation rate of unemployment)--and I am personally decidedly doubtful about it--there has to be a limit to how far the pool of available labor can be drawn down without pressing wage levels beyond productivity. The existence or nonexistence of an empirically identifiable NAIRU has no bearing on the existence of the venerable law of supply and demand."

(88.) Interestingly, Walsh (forthcoming) reaches similar conclusions in a recent paper that assumes no measurement problem but in which policymakers cannot commit to a policy rule. He shows that in a forward-looking model it is optimal to assign an objective of stabilizing inflation and the change in the output gap to a policymaker who acts with discretion, when the true social welfare objective is to stabilize inflation and the level of the output gap.

Comments and Discussion

Jonathan A. Parker: Athanasios Orphanides and John Williams have written an ambitious paper that tackles a difficult and important question: how should a central bank conduct monetary policy in practice, that is, not in a simple model of the economy, but in the complex and shifting U.S. economy? The authors focus on two related problems that the Federal Reserve confronts continuously in its attempts to stabilize economic growth.

First, there is a great deal of uncertainty at any point in time about the true state of the economy, and actual policy can be based only on information available at the time. A week before this conference, at the annual symposium on monetary policy in Jackson Hole, Wyoming, Federal Reserve Chairman Alan Greenspan spoke about the difficulty the Federal Reserve had encountered in guiding the economy through the boom of the late 1990s:
   The struggle to understand developments in the economy and
   financial markets since the mid-1990s has been particularly
   challenging for monetary policymakers. We were confronted with
   forces that none of us had personally experienced.... As events
   evolved, we recognized that, despite our suspicions, it was very
   difficult to definitively identify a bubble until after the
   fact. (1)


There is even now considerable uncertainty as to whether the increase in asset prices of the late 1990s was a bubble, which tighter monetary policy should have reined in, or an optimal response to changed economic conditions, such as the possibility that the United States was in a technological revolution that would increase the rate of growth of trend productivity. If one bases policy on poor estimates of the current state of the economy, estimation error becomes policy error. Stabilization policy becomes destabilizing.

The second main problem that confronts policymakers is uncertainty about the response of the economy to the policies that they consider. In his speech at Jackson Hole, Greenspan went on to argue that "it was far from obvious that bubbles, even if identified early, could be preempted short of the central bank inducing a substantial contraction in economic activity--the very outcome we would be seeking to avoid." Thus the Federal Open Market Committee did not act to reduce equity prices in part because committee members were unsure whether those prices were or were not justified by the fundamentals, and in part because they were unsure whether they could reduce equity prices, or at least slow the increase in prices, without slowing the economy so much as to cause a recession. Of course, we now know that a recession was not avoided. Following the turnaround in the stock market, a recession began in March 2001 and probably ended late that year. This second source of uncertainty poses the following question for policy: even supposing that the data during the 1990s had been clear, would and should a more contractionary policy have smoothed out some of the observed boom and recession? If one sets interest rates according to policies that are optimal in a model that turns out to be a poor approximation of the real world, model error becomes policy error. And again, stabilization policy becomes destabilizing.

Given these problems, Orphanides and Williams recommend using a policy rule that sets the federal funds rate, [f.sub.t], as follows:

[f.sub.t] = [f.sub.t-1] + [[theta].sub.[pi]] ([[pi].sub.t] - [[pi].sup.*]) + [[theta].sub.[DELTA]u] ([u.sub.t] - [u.sub.t-1]),

in which the parameters (the [theta]'s) on the inflation gap and the change in the unemployment rate are chosen so as to allow for substantial movement in the natural rate of interest and the natural rate of unemployment. The authors base this recommendation on their finding that this rule performs well in the sense of achieving close to the minimal attainable value of the following loss function:

[omega]Var([[pi].sub.t] - [[pi].sup.*]) + (1 - [omega])Var([u.sub.t] - [u.sup.*.sub.t]) + [psi]Var([DELTA]f)

for a set of three simple models of the U.S. economy.

This rule deals with the first problem--that the Federal Reserve does not know the true state of the economy--because it does not depend on real-time estimates of the natural rate of interest or the full-employment level of unemployment. Rather, this rule depends only on economic variables that are observed easily and (almost) contemporaneously with their occurrence. As an example of a widely used rule that performs well in some small models, consider the following Taylor rule:

[f.sub.t] = [r.sup.*.sub.t] + [[pi].sub.t] + [[theta].sub.[pi]] ([[pi].sub.t] - [[pi].sup.*]) + [[theta].sub.[DELTA]u] ([u.sub.t] - [u.sup.*.sub.t]).

The rule proposed by Orphanides and Williams excludes the natural rate of interest, [r.sup.*.sub.t], to which the Taylor rule responds, and replaces the natural rate of unemployment, [u.sup.*.sub.t], with actual unemployment lagged one period, to which the Taylor rule does not respond.
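The contrast between the two rules can be made concrete in code. The sketch below is my own illustration, not the authors' implementation: the difference-rule coefficients are taken from the [omega] = 0.2, [psi] = 0.50 row of table 6, while the Taylor-rule coefficients are conventional placeholder values. The point is that the difference rule's prescription is invariant to natural-rate estimates, whereas the Taylor rule's prescription moves one-for-one with an error in [r.sup.*] and with the gap coefficient times an error in [u.sup.*].

```python
def robust_rule(f_prev, pi, u, u_prev, pi_star=2.0,
                theta_pi=0.17, theta_du=-3.34):
    # Difference rule: no natural-rate estimate appears anywhere.
    # Coefficients are the omega = 0.2, psi = 0.50 row of table 6.
    return f_prev + theta_pi * (pi - pi_star) + theta_du * (u - u_prev)

def taylor_rule(r_star, pi, u, u_star, pi_star=2.0,
                theta_pi=0.5, theta_u=-2.0):
    # Generalized Taylor rule: requires estimates of r* and u*.
    # Coefficient values here are placeholders, not from the paper.
    return r_star + pi + theta_pi * (pi - pi_star) + theta_u * (u - u_star)

# same observed data: inflation 3, target 2, unemployment falling 5.7 -> 5.5
base = taylor_rule(2.0, 3.0, 5.5, 5.0)
skewed = taylor_rule(3.0, 3.0, 5.5, 6.0)   # r* misjudged by +1, u* by +1
robust = robust_rule(4.0, 3.0, 5.5, 5.7)   # unchanged by either error
```

Here `skewed - base` equals exactly the combined natural-rate misperception passed through the rule's coefficients, while `robust` would be identical under any beliefs about the natural rates.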

That the authors' rule also deals with the second problem is less obvious. According to their simulations, much of the robustness of the rule comes from the Federal Reserve acting more ignorant than it thinks it is, in case it is wrong. In terms of its form, the rule has two important features typical of optimal rules: a response to deviations of inflation from its target, and inertia, that is, a response to conditions in the recent past. The authors show that their rule does perform well in several somewhat different small structural models of the U.S. economy. But all three models are quite limited and quite similar, and I am unsure whether this robustness would hold in a wider class of models. (2)

The balance of my comments addresses three points. First, I discuss the reasons why the authors' rule works well given uncertainty both about the state of the economy and about the correct model of the economy. Second, I ask why the authors (and others) focus on simple rules. Any rule based on natural rates is not simple, and therefore the proposed rule is a significant step toward monetary policy simplicity. But simplicity is not always a virtue, and the optimal rules, calculated given noise in real-time estimates, might well perform better without much loss of robustness. Finally, I argue (or rather, plead) that we should be able to do better at estimating natural rates, at least well enough that the estimates are useful for policy.

Why does the authors' rule work well, and why does it work well in several models? The rule works well in the basic model specified in the authors' equations 8 and 9 because this structural model contains both lags of unemployment and leads of expectations. The lags of unemployment imply that the current state of the economy is not simply described by the current unemployment rate. Moreover, given that the objective function penalizes volatility in the federal funds rate, and given the presence of variables representing the expected future state of the economy in equations 8 and 9, the central bank would like to stabilize the economy by having small movements in interest rates lead to significant movements in expectations. The central bank can achieve this by tying its future actions to its past actions through lagged variables in its policy rule; this approach, called policy inertia, has been studied by Michael Woodford. (3) Thus the authors' rule allows interest rate movements today to commit the central bank to future behavior that cumulates to stabilize the economy without short-term rates becoming highly volatile.

The proposed rule works well in the set of models examined because the economic situation remains quite similar across these models. The natural rates are potentially poorly known, and therefore any rules that lean heavily on real-time estimates of the natural rates will do relatively poorly. The loss function remains the same, so that rules that do not contain inertia also do poorly. Finally, as I have mentioned, the models are not that different from each other--all confer an advantage on the rule that can influence expectations, and two of the three include substantial lags in the propagation of economic activity.

Is the rule robust more generally? If these are robust features of the real world, this rule ought to work well in many realistic models. On the one hand, Andrew Levin, Volker Wieland, and Williams have studied a similar simple rule that depends on the lagged federal funds rate and not on the natural rate of interest; they find little welfare loss and some robustness gain to such a rule over rules based on real-time estimates of natural rates in a wider range of models of the U.S. economy. (4) On the other hand, any model economy has a natural loss function in terms of the welfare of agents in the model, and it makes little sense to me to judge robustness across models using a loss function that does not reflect the differences in welfare costs across models. I am also uncomfortable with the importance of the interest rate smoothing objective in the loss function. Are there really any substantial costs to highly volatile short-term rates above and beyond the costs of deviations of inflation from the desired rate? On balance, given that one cannot test a rule under all of the infinite set of possible models we economists might come up with, it is interesting and good news that simple rules that do not require knowledge of natural rates perform quite well in a range of models.

My second main point involves the focus on simple policy rules. Given that we are moving from a model economy, where simple rules follow from simple models, to the real world, where actual policy confronts potentially nonstationary environments, why use rules at all? A typical response of economists is that commitment to a simple rule allows a central bank to maintain a reputation and avoid the problem of time consistency posed by the continual temptation to inflate. But the Federal Reserve does not follow a simple rule. It has had and continues to have complete leeway to deal with each new economic phenomenon as it sees fit. As I have noted, the Federal Reserve viewed itself as in largely uncharted waters as it navigated the boom of the second half of the 1990s. It seems to have solved the time consistency problem without a simple rule, and with only independence. It has learned how to conduct policy in a complex world; the behavior of Alan Greenspan and the Federal Open Market Committee is not easily reduced to a simple formula that is optimal in some model economy.

In this sense, then, I read the paper as advice for economists and as a defense of the Federal Reserve, rather than as advice for the Federal Reserve. Given that the Federal Reserve has learned a complex rule based on large amounts of real-world, real-time data, would it ever make the mistake of acting on a simple rule predicated on the incorrect belief that it has accurate measures of the natural rates? It seems more likely that the staff of the Federal Reserve, and academics more widely, might mistakenly recommend or try policies that are optimal in simple models based on data that are available only ex post. This paper also provides advice for other governments and other economists setting up central banks with legal rules that are optimal in simple model economies. A good, robust simple rule should incorporate the central bank's uncertainty about the natural rate process and include some reaction to lagged variables.

Since I think simplicity has little value, I am interested in the analysis of what rules are optimal in these models. Given uncertainty about the current natural rates, the truly optimal policy probably is one based both on the current estimates, with a reaction that reflects their signal-to-noise ratio, and on a distributed lag of past estimates. Such a rule is, by construction, robust to uncertainty about the state of the economy. It would be nice to know how robust an optimal rule is across different models. Although such a rule would be "complicated," it would not actually be more complicated than many proposed rules. As I have noted, and as this paper makes clear, any rule that relies on natural rates is not simple. To see this, write down the one-sided filtering problems used in the paper to construct real-time estimates of the natural rates and include them in the specification of the rule.

Citing the noisiness of estimates of the natural rates, the authors argue for completely ignoring measures of the natural rates in conducting monetary policy. It is here that I part ways with them and come to my third main point. Consider the measures of the natural rates of interest and unemployment plotted in the authors' figures 1 through 4. The ex post (retrospective) estimates differ significantly from the ex ante (real-time) estimates, and there is little agreement among the series. But can't we do better? These series are constructed almost without regard to theory. We should expect a smoothed series of the ex post real interest rate to do a terrible job of matching the rate of interest in the economy that would prevail if all prices were to adjust instantly and completely. One-sided smoothed series will always overshoot turning points. The real-time smoothed series are univariate, and so no information from forward-looking variables is contained in them.

[FIGURE 1-4 OMITTED]
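The overshooting of turning points by one-sided filters is easy to demonstrate with a short artificial example (mine, not from the comment; a simple exponential smoother stands in for the filters discussed above):

```python
import numpy as np

# artificial trend with a turning point (peak) at t = 50
t = np.arange(100)
trend = np.where(t < 50, 0.1 * t, 5.0 - 0.1 * (t - 50))

# real-time (one-sided) smoother: uses only current and past observations
alpha = 0.1                        # heavy smoothing, as for natural rates
one_sided = np.empty_like(trend)
one_sided[0] = trend[0]
for i in range(1, len(trend)):
    one_sided[i] = one_sided[i - 1] + alpha * (trend[i] - one_sided[i - 1])
```

The real-time estimate keeps rising for several periods after the true trend has peaked: its own peak comes later than the trend's, which is exactly the overshooting of turning points that makes real-time natural-rate estimates misleading just when policy most needs them.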

There are two ways to improve analysis in the future. First, use the same model to evaluate the policy rule and to construct estimates of the natural rates of interest and unemployment. Each model predicts structural relationships among variables that should be useful in "forecasting" the natural rates. Incorporating this structure would bring consistency between the natural-rate estimates and the policy responses. Employing several models for both estimation and evaluation would deliver a range of estimates of the natural rates, providing a measure of the degree of uncertainty in the estimates at any given time. Any model would probably have to be made more complicated than the authors' equations 8 and 9 to be useful for estimating movements in the natural rates, but the paper's intent is to make policy recommendations for a complicated real world, and the Federal Reserve surely implicitly uses some complicated model to judge natural rates.

A second and more feasible approach is to use an auxiliary model of the real economy. To estimate the natural real rate of interest from a model does require some heroic assumptions. But bringing a few Minnesotans to a Brookings conference would not hurt. Certainly a lot of useful information relevant to potential output--about tax rates, the capital stock, investment rates, and so on--is ignored in a simple smoothing exercise. Understanding movements in the natural rate of unemployment seems more straightforward. Robert Shimer has provided a model of the impact of demographics on the natural rate of unemployment. (5) The age structure of the population should have a large impact, because younger workers spend more time in unemployment as they switch jobs and careers searching for a good match. The education distribution of the population, on the other hand, should not affect the natural rate of unemployment, according to several arguments. Given this, Shimer estimates the natural rate of unemployment from the residuals in the following regression:

$u_t = \alpha + \beta u'_t + \epsilon_t,$

where $u'_t$ is the rate of unemployment among males aged thirty-five to sixty-four.

Figure 1 below shows the actual unemployment rate, two ex post estimates of the natural rate of unemployment from the paper, and the quarterly averages of the monthly residuals from this regression added to 5.5 percent. I construct a real-time Shimer estimate using residuals calculated out of sample from regressions ending in 1967, 1977, and 1987, as well as presenting an ex post series. (6) The figure shows that the real-time and the retrospective estimates of the natural rate are not significantly different. Also, the estimates lie roughly between the Congressional Budget Office's estimate and that of Douglas Staiger, James Stock, and Mark Watson. (7) Neither feature proves that the estimate of the ex ante rate is correct, but the estimate is both reasonable and stable. The estimates might be improved further by incorporating additional structural factors such as changes in labor regulations and sectoral shifts.
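The mechanics of this Shimer-style adjustment can be sketched in a few lines. The series below are synthetic stand-ins (the actual exercise uses monthly CPS unemployment data), so only the procedure, not the numbers, carries over:

```python
import numpy as np

# Sketch of the Shimer-style demographic adjustment, on synthetic data.
rng = np.random.default_rng(0)
n = 240  # months

# Stand-in series: unemployment of males 35-64 and aggregate unemployment.
u_prime = 3.5 + 1.2 * np.sin(np.linspace(0, 8, n)) + rng.normal(0, 0.2, n)
u_total = 1.0 + 1.4 * u_prime + rng.normal(0, 0.3, n)

# OLS of aggregate unemployment on prime-age-male unemployment:
#   u_t = alpha + beta * u'_t + eps_t
X = np.column_stack([np.ones(n), u_prime])
alpha, beta = np.linalg.lstsq(X, u_total, rcond=None)[0]

# The residuals capture the demographic (non-cyclical) variation;
# adding them to a 5.5 percent benchmark gives the natural-rate estimate.
residuals = u_total - (alpha + beta * u_prime)
natural_rate = 5.5 + residuals
print(natural_rate[:4].round(2))
```

Because the regression includes an intercept, the residuals average to zero, so the estimated natural rate fluctuates around the 5.5 percent benchmark by construction.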

To summarize, the proposed rule seems reasonably robust and close to optimal in the class of models the authors examine. Policy recommendations should definitely account for the real-time lack of knowledge of the true natural rates. I am interested in the robustness of rules that are optimized given these shortcomings. But we should be able to construct better estimates of natural rates, and these might be quite valuable for policy.

(1.) Greenspan (2002).

(2.) I do not address the large question of what the correct model of nonneutrality is. Readers should use their own beliefs to judge the reasonableness of the range of diversity in the structures of the studied models.

(3.) Woodford (1999).

(4.) Levin, Wieland, and Williams (1999).

(5.) Shimer (1998).

(6.) I thank Robert Shimer for providing the data for this exercise.

(7.) Staiger, Stock, and Watson (2001).

Janet L. Yellen: It is a great pleasure to discuss this paper on monetary policy rules. I found the paper fascinating and provocative. It addresses the central question facing monetary policy: how to adjust the policy levers to optimize economic performance under uncertainty. Athanasios Orphanides and John Williams follow the approach that is now standard: they assume that the proper objective of policy is to minimize a loss function that depends on the weighted sum of squared deviations of inflation from a target level and of output from potential, with a small weight attached to interest rate fluctuations. This objective function is a good approximation of the goals of the Federal Reserve since the 1950s, namely, price stability and maximum employment, as espoused in the Federal Reserve Act.

John Taylor's paper of nearly a decade ago represents, in my view, an important practical breakthrough in policy design. (1) Taylor proposed a very simple, intuitive policy feedback rule relating the tightness of monetary policy--as measured by the deviation of the real federal funds rate from a "neutral" level--to the gaps between actual and desired performance of inflation and output. The rule has proved hard to beat: in stochastic simulations it has produced good results in a wide array of models. Better yet, it provides a remarkably succinct summary of the "system" by which the Federal Open Market Committee (FOMC) during the Greenspan era has successfully adjusted the monetary dials.

As Christopher Sims explains in his paper in this volume, the FOMC primarily relies not on rules of thumb but on judgmental forecasts, detailed analysis of current conditions, and policymakers' intuition. In this context, especially given the possible pitfalls of judgmental forecasting, I, along with at least one like-minded colleague on the committee, Laurence Meyer, considered "rule-based policy recommendations" useful additional input. Such recommendations might serve as a benchmark--a starting point for FOMC deliberations. Of course, there could be good reasons for policy to depart from a range of rule-based policy prescriptions, but when doing so, the committee should articulate a sensible rationale. Since early 1995 the prescriptions of a number of variants of the Taylor rule and related rules have been routinely provided to FOMC members as part of a financial indicators package. Committee members differ in their degree of interest in this information. I should emphasize that no Federal Reserve policymaker has ever endorsed the argument, popular in the academic literature, that precommitment to a rule is needed to overcome time inconsistency.

Since 1995, perhaps spurred by the interest of policymakers, there has been an explosion of research on policy rules. Orphanides and Williams have made important contributions, and this paper builds on their previous work. The paper details some serious shortcomings of the Taylor rule, and it proposes an interesting alternative, which I will refer to as the Orphanides-Williams (O-W) difference rule. I will try to summarize the authors' main findings, describe the advantages and possible disadvantages of their proposed rule, and then speculate on the relevance of their analysis to monetary policy in the United States during the 1990s.

The authors emphasize that implementation of Taylor-type rules requires estimates of the time-varying natural rates of unemployment and interest, about which policymakers are highly uncertain. The authors document the extent of this uncertainty, using a variety of time-series techniques to compare retrospective and real-time estimates of these two key parameters. They demonstrate, convincingly in my view, that the measurement errors are large and persistent. Uncertainty about the NAIRU, and later about the equilibrium rate of interest, was unquestionably the central issue for monetary policy during the 1990s. Although not all FOMC members are enamored of the NAIRU model, an examination of FOMC minutes and transcripts reveals ongoing, detailed discussions of the magnitude, causes, and likely persistence of structural shifts in the labor market that appeared to be responsible for an unexpectedly favorable combination of inflation and unemployment after 1994. The question of what constitutes a "neutral" value of the real federal funds rate was also critical to policy discussions at several junctures: in 1994-95, when the Federal Reserve was raising rates to avoid unemployment falling below the NAIRU; in the context of discussions of the appropriate response to contractionary fiscal policy in 1995-96; and again in 1998-99 as aggregate demand continued to outpace aggregate supply, suggesting that the equilibrium real rate of interest had risen as a consequence of the productivity shock.

The paper's key contribution is its analysis of the implications of uncertainty concerning the two natural rates for the design of policy rules and stabilization performance. The authors use a small-scale rational expectations model of the U.S. economy to compute the optimal coefficients and performance characteristics of rules designed to minimize their loss function under alternative levels of uncertainty. They examine Taylor-type rules, which allow policy feedbacks only from the levels of unemployment and inflation to the federal funds rate, along with more complex rules that also allow feedback from the lagged federal funds rate and the change in unemployment. Rules of this more general type, with a substantial inertial element, dominate the Taylor rule in the models studied in the paper; they also apparently come closer to characterizing the Federal Reserve's actual reaction function.

The key result of the paper is that the losses due to overconfidence and underconfidence about the levels of the natural rates are asymmetric. When the true degree of uncertainty is high, policymakers who follow the prescriptions of rules optimized to perform well under low uncertainty are apt to incur large losses. In contrast, rules designed for conditions of high uncertainty perform quite well when the true degree of uncertainty is lower. It follows that policy rules that are optimal for a high degree of uncertainty are robust, whereas those that ignore uncertainty concerning the natural rates are fragile. Overconfidence produces an especially large deterioration in performance with respect to the inflation objective; the variance of unemployment around its target is less sensitive to mismeasurement.

Why do rules optimized for low levels of uncertainty perform so poorly, particularly on the inflation front? The authors offer little intuition, so I will hazard a guess. I suspect the main problem is that rules relying on knowledge of the two natural rates tolerate persistent deviations of inflation from its target. In the absence of shocks, an economy with an accelerationist Phillips curve following the authors' generalized policy rule (their equation 2) converges to an equilibrium in which both unemployment and the real interest rate are equal to the true natural rates, $\bar u$ and $r^*$, but inflation will not converge to its target because of measurement error. In equilibrium,

$\pi - \pi^* = \frac{1-\theta_f}{\theta_\pi}(r^* - \hat r^*) + \frac{\theta_u}{\theta_\pi}(\bar u - \hat{\bar u}),$

where $\hat r^*$ and $\hat{\bar u}$ denote the policymaker's estimates of the natural rates.
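One way to derive this steady-state condition (using my paraphrase of the authors' equation 2, so the notation is an assumption): write the generalized rule as $f_t = \theta_f f_{t-1} + (1-\theta_f)(\hat r^* + \pi_t) + \theta_\pi(\pi_t - \pi^*) + \theta_u(\hat{\bar u} - u_t) + \theta_{\Delta u}(u_{t-1} - u_t)$, where hats denote the policymaker's natural-rate estimates, then impose the equilibrium conditions $f_t = f_{t-1} = r^* + \pi$ and $u_t = u_{t-1} = \bar u$, so that the $\Delta u$ term drops out:

```latex
\begin{aligned}
r^* + \pi &= \theta_f (r^* + \pi) + (1-\theta_f)(\hat r^* + \pi)
            + \theta_\pi (\pi - \pi^*) + \theta_u (\hat{\bar u} - \bar u) \\
(1-\theta_f)(r^* - \hat r^*) &= \theta_\pi (\pi - \pi^*) - \theta_u (\bar u - \hat{\bar u}) \\
\pi - \pi^* &= \frac{1-\theta_f}{\theta_\pi}\,(r^* - \hat r^*)
            + \frac{\theta_u}{\theta_\pi}\,(\bar u - \hat{\bar u}).
\end{aligned}
```

The steady-state inflation gap is zero only if the natural-rate estimates happen to be exactly right.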

The persistent gap between inflation and its target is larger the more the federal funds rate responds to the unemployment gap ($\theta_u$), the less it responds to the inflation gap ($\theta_\pi$), and the smaller the degree of policy inertia ($\theta_f$). Small and persistent errors in estimating the natural rate of unemployment can easily translate into large, persistent deviations of inflation from target under both rules. We can see this by substituting the coefficients of the Taylor rule optimized for $s = 0$ (third panel of the authors' table 5) into the equation above to obtain

$\pi - \pi^* = 3.23\,(r^* - \hat r^*) + 12.29\,(\bar u - \hat{\bar u}).$

Even with the generalized Taylor rule optimized for s = 0 (fifth panel of table 5) we obtain

$\pi - \pi^* = 1.08\,(r^* - \hat r^*) + 7.04\,(\bar u - \hat{\bar u}).$

The inertial response of the federal funds rate under the optimized generalized rule works to mitigate the impact of mismeasurement of the natural rate of interest on the steady-state deviation of inflation from its target, improving performance on the inflation front without a significant deterioration in the variability of output.

This reasoning may explain why the authors find, in their baseline model, that increased uncertainty concerning the two natural rates should cause policymakers to raise the weight placed on the lagged federal funds rate, making policy yet more inertial, and lower the coefficient on the unemployment gap, attenuating the response of policy to what is recognized to be a noisy signal of future inflation pressures. The authors find that optimal policy compensates for the reduced sensitivity of policy to the output gap by raising the sensitivity of policy, $\theta_{\Delta u}$, to changes in unemployment. In the limit, as uncertainty about the natural rates rises ($s \to \infty$), Orphanides and Williams find that the optimal policy in their baseline model converges to a pure difference rule,

$f_t - f_{t-1} = 0.35\,(\pi_t - \pi^*) - 5.96\,(u_t - u_{t-1}),$

relating the change in the federal funds rate to the gap between inflation and its target and the change in unemployment. They argue that this rule is robust, performing well under conditions of both high and low parameter uncertainty. It obviously merits consideration for inclusion in the Federal Reserve's financial indicators package.
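The limiting rule is simple enough to state as code. The sketch below uses the baseline-model limiting coefficients quoted above; the 2 percent inflation target is my assumption for illustration, not a value from the paper:

```python
def difference_rule(f_prev, inflation, u_now, u_prev, pi_star=2.0,
                    theta_pi=0.35, theta_du=5.96):
    """Orphanides-Williams pure difference rule (baseline limiting case):
    the *change* in the funds rate responds to the inflation gap and the
    change in unemployment -- no natural-rate estimates are needed."""
    return f_prev + theta_pi * (inflation - pi_star) - theta_du * (u_now - u_prev)

# Example: inflation 1 point above target and unemployment down 0.1 point
# since last quarter -> raise the funds rate by 0.35 + 0.596 = 0.946 points.
f_new = difference_rule(f_prev=4.0, inflation=3.0, u_now=5.4, u_prev=5.5)
print(round(f_new - 4.0, 3))  # 0.946
```

Note what the function does not take as arguments: $\hat r^*$ and $\hat{\bar u}$. That is the point of the rule, and also the source of the "keep raising rates until something slows" dynamic discussed below.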

It would be useful if the authors offered some explanation for why their difference rule works so well in the baseline model. One reason must be that it avoids the possibility of a steady-state deviation of inflation from target. The O-W rule insists that deviations of inflation from target be eradicated through continuing adjustments in the real federal funds rate: the rule produces a marked improvement in inflation performance without a substantial decline in real outcomes. A policymaker following the rule could not have tolerated the persistent, high inflation of the 1970s.

Another reason for the success of the O-W rule may relate to the implications of interest rate inertia for the response of longer-term interest rates to changes in the federal funds rate. In previous work, (2) Williams and his coauthors Andrew Levin and Volker Wieland found that, in forward-looking models, the inclusion of a lagged interest rate in the policy rule strengthens the transmission mechanism by enhancing the impact of changes in the federal funds rate on longer-term rates and, in turn, on aggregate demand. Assuming that market participants understand the rule, they would expect any change in the federal funds rate to be persistent, and these expectations would generate a larger response of the longer-term interest rates that are more crucial to spending.

However, before sending the FOMC on permanent vacation and relegating the conduct of monetary policy to the Fed computer programmed with the O-W difference rule, we need to consider the possible pitfalls and alternatives. A first question is whether rules of the O-W difference type are robust across alternative models. Here the results presented in the paper offer grounds for caution. Although a difference rule works well in their fully forward-looking, new synthesis model, it performs poorly in their backward-looking, accelerationist model. Even in the accelerationist model, however, the authors' finding that increased uncertainty should push policy in the direction of an attenuated response to the output gap and greater inertia in the interest rate survives.

A second question is whether the performance of difference-type rules is robust to perturbations of the coefficients of the rule. I am fearful that the operation of a difference rule with the "wrong" coefficients could seriously increase the volatility of real outcomes. Let me explain why, with an example based on my own FOMC experience.

I joined the committee in August 1994, when the Federal Reserve was embarked upon a course of monetary tightening. Alan Blinder and I have described the debate that took place during the fall and winter of 1994-95. (3) It seemed to me that each time the FOMC convened, members looked for evidence that the economy was slowing. The thinking was that, until such evidence was in hand, they would just keep raising rates. This type of reasoning mirrors the logic of the O-W difference rule: keep raising rates if inflation exceeds the target and unemployment is falling. Of course, in the O-W rule there is some amount of tightening at six-week intervals that is "just right," but the FOMC was impatient for results and could easily have gotten it wrong. By forgetting that monetary policy works with long and variable lags, the committee might have engaged in policy overkill that would have produced a hard landing. Luckily, signs of a slowdown emerged by the time the federal funds rate reached 6 percent; the tightening came to an end with the funds rate below the 8 percent or so that the financial markets were anticipating in December 1994, and below the rate of over 6 percent embodied in the Green Book forecast. In this context I considered the Taylor rule a helpful antidote to the committee's reasoning: unlike the difference rule, it suggested that, under prevailing conditions, a federal funds rate around 6 percent would put the Federal Reserve in the right ballpark. Since the O-W rule sanctions the very thought process that alarmed me in 1994-95, I am concerned that reliance on a difference rule with the wrong coefficients could produce severe instability in real outcomes and even instrument instability. I therefore applaud the authors for their attempt to characterize rules that are robust not only to natural rate uncertainty but also to model uncertainty.

Before turning to the performance of the Federal Reserve during the 1990s, I would like to raise a few other questions concerning the use of the O-W rule in monetary policy. The authors are not explicit about the role they envision for their rule in the policy process. One question is whether the forward-looking models studied in their paper assume that the central bank must mechanically follow a rule in order to secure credibility. If so, I wonder whether the rule is still useful as part of an FOMC process that relies primarily on forecasts, judgments, and policymaker intuition.

I am also concerned that the alternative with which the authors compare the performance properties of their rule is something of a straw man. Under the alternative, policymakers sit on their hands even when inflation persistently deviates from target. As the paper by Sims describes, however, the actual FOMC policy process, like that in inflation-targeting countries, revolves around forecasts, not rules. Those forecasts are constantly updated in response to forecast errors, an approach that involves, among other things, constant reconsideration of the two natural rates. The actual standard deviations of unemployment and inflation in U.S. data over 1969-2000 do not greatly exceed the predicted values for their difference rule and are far smaller than the errors that would result from extremely overconfident rule-based behavior. This suggests that the Federal Reserve's detective work in identifying structural shifts, analyzing forecast errors, and estimating the size and persistence of shocks has avoided (at least since the 1970s) the worst mistakes--persistent and unintentional deviations of inflation from target--that overreliance on a Taylor-type policy rule could produce.

Finally, let me turn to the applicability of the authors' paper to the conduct of monetary policy during the 1990s. The authors suggest that policymakers may have done well during that period because they avoided excessive reliance on natural rate estimates, which were changing. They suggest that their robust policy rule is successful in replicating the "Goldilocks" economy. I find this suggestion perplexing: the O-W difference rule (assuming, following the authors, that "inflation more or less remained in line with policymaker descriptions of their price stability objectives" after 1994) would have called for raising the federal funds rate from 1994 to 1999 in response to falling unemployment. In fact, the O-W difference rule calibrated to the authors' baseline model would have raised the federal funds rate to double-digit levels by 1996! Of course, such an assertion is a bit unfair, since the economy, and policy in turn, would have responded, but such a simulation suggests that policy under the O-W difference rule would have been tighter, producing worse real outcomes and lower inflation than the Federal Reserve actually achieved.

It is interesting to contrast the historical performance of the O-W difference rule with that of the original Taylor rule. If one assumes a constant NAIRU of 5.5 percent and a constant equilibrium real interest rate of 2.5 percent, (4) the Taylor rule fits the Federal Reserve's actual behavior remarkably well after 1993, although policy was notably easier than the rule predicts from mid-1998 until the end of the tightening campaign in June 2000. The surprisingly good fit of the Taylor rule until mid-1998 reflects the fact that unemployment and inflation fell in tandem, calling for maintenance of a relatively constant federal funds rate.
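Under those constant-parameter assumptions, the original Taylor (1993) prescription can be computed directly. In this sketch the unemployment gap is converted to an output gap with an Okun's-law coefficient of 2, which is my assumption, not something fixed by the rule:

```python
def taylor_rule(inflation, unemployment, pi_star=2.0, r_star=2.5,
                u_star=5.5, okun=2.0):
    """Original Taylor (1993) rule, with the output gap proxied by an
    Okun's-law conversion of the unemployment gap (coefficient of 2 is
    an illustrative assumption)."""
    output_gap = okun * (u_star - unemployment)
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Mid-1990s-style readings: core inflation near 3 percent, unemployment
# near the assumed 5.5 percent NAIRU -> a funds rate near 6 percent,
# the "right ballpark" of 1994-95 discussed above.
print(round(taylor_rule(inflation=3.0, unemployment=5.5), 2))  # 6.0
```

When unemployment and inflation fall in tandem, the two 0.5 terms pull in opposite directions, which is why the prescription stays roughly flat over the period described.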

We should not jump to the conclusion, however, that the Federal Reserve's reaction to developments in the second half of the 1990s was just a continuation of business as usual. Alan Blinder and I have argued that the Federal Reserve did behave differently in response to economic developments after 1995, practicing a policy of forbearance in the face of falling unemployment. In our view the Federal Reserve was, in effect, updating its views concerning the NAIRU throughout the period. Laurence Ball and Robert Tchaidze compare actual Federal Reserve policy with the predictions of a reaction function estimated with pre-1996 data and confirm that there was a shift in behavior in that period. (5) Estimated reaction functions find that the Federal Reserve typically responds more aggressively to changes in the unemployment rate than the Taylor rule calls for. Ball and Tchaidze find a growing gap between actual and predicted policy by the beginning of 1997. Their interpretation is that the FOMC, along with outside forecasters, was lowering its estimate of the NAIRU as the influence of supply shocks, particularly the productivity shock, became more evident and the linkages between faster productivity growth and the NAIRU became more obvious. They show that when the declining NAIRU estimates of forecasters are substituted into the pre-1996 reaction function, Fed behavior looks quite normal. As the authors recognize, "natural rate updating" is an alternative interpretation of monetary policy in the 1990s.

Laurence Meyer, who served on the FOMC throughout the period, offers yet a different interpretation of Federal Reserve strategy during the 1990s. Meyer and his coauthors Eric Swanson and Volker Wieland argue that the appropriate tactic for dealing with increased uncertainty about the NAIRU is to respond less to changes in unemployment and more to changes in inflation. (6) (Since the unemployment gap is a predictor of the change in inflation, a response of policy to unemployment could be considered preemptive, whereas a response to inflation is reactive.) In their view Federal Reserve policy simply became less preemptive in the face of growing uncertainty about the NAIRU. Meyer also shows that a strong, "nonlinear" response to unemployment is warranted when the unemployment rate falls below the lower threshold of the range of NAIRU uncertainty. According to this logic, the Federal Reserve began responding preemptively to falling unemployment when such a threshold was crossed in mid-1999.

Let me conclude by saying that this paper makes a valuable and constructive contribution to a burgeoning field of research that is generating important payoffs for the practice of monetary policy.

General discussion: Panelists represented the full range of views about the value of rules in the conduct of monetary policy. Gregory Mankiw observed that the rule suggested by Orphanides and Williams would have called for a monetary tightening in the 1990s. Although this would have been difficult to defend at that time, in hindsight it might have been optimal to tighten monetary policy at some point between 1997 and 1999. Robert Gordon underlined Janet Yellen's remark that the Taylor rule describes the policy of the Federal Reserve during the 1990s remarkably well except for 1998, the heyday of positive supply shocks to the U.S. economy. Nevertheless, Gordon agreed with Mankiw in suggesting that tightening in 1998 might have dampened the volatility the economy has experienced in 2001 and 2002. Alice Rivlin disagreed with Mankiw and Gordon. She argued that 1998 was a special year because of the Russian crisis and the fragility of global financial markets generally. In that situation the Federal Reserve appropriately placed unusual weight on the world economy's need for loosening, which the domestic economy did not need. Rivlin also observed that, during her tenure on the Federal Open Market Committee from 1996 to 1999, many members believed that the natural rate of unemployment--if there was such a thing--was falling. Productivity growth was accelerating, unemployment was going down, yet the inflation rate was still falling. It was not clear that there was an "unemployment gap," and so the FOMC did not raise rates until mid-1999. Rivlin believed this was likely to have been the best policy. She went on to stress the importance, when analyzing Federal Reserve behavior, of recognizing that the FOMC is, after all, a committee. It is misleading to talk about Federal Reserve policy as if it were based on unanimous agreement about goals and perceptions. 
FOMC members often have very different perceptions of the economic situation, different targets, and different ways of thinking about the economy.

Richard Cooper questioned the wisdom of conducting monetary policy on the basis of rules. He was skeptical of the common argument that a rule enhances credibility. In his view a simple or even a complex rule that is followed mechanically does not enhance credibility, but to the contrary signals poor policymaking. Another argument that has been made in favor of rules is that they indirectly communicate the goals of policy; Cooper thought it preferable to discuss policy objectives explicitly. Even complex rules are unlikely to anticipate and deal appropriately with unusual events. For example, he agreed with Rivlin that the threat to the global financial system in 1998 warranted the strong reaction of the Federal Reserve. Cooper drew an analogy to flying an airplane. Under ordinary conditions an airplane can fly on autopilot, responding to accurate and timely information according to very sophisticated rules, yet every commercial jetliner in the skies today is still equipped with two pilots. In Cooper's view one can depend on rules alone only if they are extremely complicated, and even then only with a lot of real-time input. Although it may someday be possible to turn all of the flying over to an autopilot, managing the economy is much more complicated than flying a plane. Mankiw responded that the phrase "following a rule" admits of two different interpretations. In Cooper's interpretation a rule is something that, once written down, is always strictly executed. In the alternative interpretation rules are no more than a guide or reference point for policy, with policymakers retaining discretion. If, Mankiw argued, one believes that models are useful for guiding policy, and that models are forward looking, then policy advice is naturally formulated as a rule.

The difference between the actual and the natural rate of unemployment, and uncertainty about that gap, play central roles in Orphanides and Williams's proposed rule. Edmund Phelps said he believed that the natural rate of unemployment is subject to marked cyclical swings, and that swings in the actual unemployment rate primarily reflect such swings in the natural rate. As a consequence, the central bank should not focus on the unemployment rate or on a putative unemployment gap in setting policy. Rivlin and Christopher Sims also expressed doubts about the usefulness of the natural rate concept. Rivlin observed that the natural rate was always known to be both unobservable and variable, but was nevertheless thought to be useful to policymakers. The significance of the paper was that it cast doubt on the usefulness of the natural rate for policymaking. If a parameter was unobservable and variable and led to worse policy outcomes, one might wonder if it was useful at all. Sims observed that the determinants of inflation are multivariate, and that the relative importance of each causal factor differs from period to period. He thought that the natural rate concept remained popular for two reasons. First, it has the attraction of allowing one to think about monetary policy in terms of a simple bivariate relationship. Second, it reflects a tendency of policymakers to rely too heavily on theoretical work and neglect some important empirical results. For example, policymakers in the 1970s relied on contemporary theories favoring a downward-sloping Phillips curve, even though many empirical studies indicated that the Phillips curve was vertical.

Phelps thought the advice he had offered for policymakers in his 1971 book, Inflation Policy and Unemployment Theory, remained relevant. If public expectations of the inflation rate exceed the equilibrium inflation rate, the Federal Reserve should tighten monetary policy in order to disappoint those expectations. Phelps did not think it necessary for the central bank to know the precise natural rate in order to conduct sensible policy, and in any case he thought Orphanides and Williams overstated the uncertainty about it. He noted that a battalion of economists over the last twelve to fifteen years had tried to estimate the relationship between the natural rate and various features of the economy, such as demographic structure, technological progress, and the real rate of interest. Phelps believed that we do know quite a bit about the natural rate and the causes of its shifts, and he agreed with Jonathan Parker that we should make better use of this knowledge. William Brainard, noting that changes in the economy's structure would show up as autocorrelation in the natural rate, was skeptical of Orphanides and Williams's assumption that the natural rate follows a random walk. Willem Buiter agreed and went on to suggest it was likely that the actual and the natural rate are cointegrated.

Several panelists commented on the details of model specification and of the statistical tests used by Orphanides and Williams. Mankiw observed that, with only slight modification, the authors' rule would become a rule in which the federal funds rate depends on the deviation of the unemployment rate from some moving average of past unemployment rates. If this moving average is taken as a rough estimate of the natural rate of unemployment, their rule is actually quite similar to a Taylor rule. Benjamin Friedman noted that the performance of Taylor rules varies considerably with the lag structure used. This led him to wonder how sensitive the performance of the difference rule suggested by Orphanides and Williams might be to the lags used in calculating changes in the unemployment rate--for example, whether the lag was one quarter or one year. Gordon noted that the authors did not consider supply shocks. Because of supply shocks, it makes a difference whether the central bank targets headline inflation, as the European Central Bank does, or core inflation, as does the Federal Reserve. Whether the response of monetary policy to an oil price shock should mimic the response to a change in the unemployment gap depends on many factors not discussed in the paper, for example whether wages respond differently to core than to headline inflation, and the feedback of the oil price shocks to core inflation itself.

(1.) Taylor (1993).

(2.) See Levin, Wieland, and Williams (1999).

(3.) Blinder and Yellen (2002).

(4.) This simulation of the Taylor rule measures inflation by the core consumer price index.

(5.) Ball and Tchaidze (2002).

(6.) Meyer, Swanson, and Wieland (2001).

References

Akerlof, George A., William T. Dickens, and George L. Perry. 1996. "The Macroeconomics of Low Inflation." BPEA, 1:1996, 1-76.

Anderson, Gary, and George Moore. 1985. "A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models." Economics Letters 17: 247-52.

Ball, Laurence, and N. Gregory Mankiw. 2002. "The NAIRU in Theory and Practice." Working Paper 8940. Cambridge, Mass.: National Bureau of Economic Research (May).

Ball, Laurence, and Robert R. Tchaidze. 2002. "The Fed and the New Economy." American Economic Review 92(2): 108-14.

Batini, Nicoletta, and Andrew G. Haldane. 1999. "Forward-looking Rules for Monetary Policy." In Monetary Policy Rules, edited by John B. Taylor. University of Chicago Press.

Baxter, Marianne, and Robert G. King. 1999. "Measuring Business Cycles: Approximate Band-Pass Filters for Economic Time Series." Review of Economics and Statistics 81(4): 575-93.

Bernanke, Ben S., and Frederic S. Mishkin. 1997. "Inflation Targeting: A New Framework for Monetary Policy?" Journal of Economic Perspectives 11(2): 97-116.

Blanchard, Olivier, and Charles M. Kahn. 1980. "The Solution of Linear Difference Models under Rational Expectations." Econometrica 48(5): 1305-12.

Blinder, Alan S., and Janet L. Yellen. 2001. "The Fabulous Decade: Macroeconomic Lessons from the 1990s." In The Roaring Nineties: Can Full Employment be Sustained? edited by Alan B. Krueger and Robert M. Solow. New York: Russell Sage Foundation.

--. 2002. The Fabulous Decade: Macroeconomic Lessons from the 1990s. New York: Twentieth Century Fund.

Bomfim, Antulio N. 1997. "The Equilibrium Fed Funds Rate and the Indicator Properties of Term-Structure Spreads." Economic Inquiry 35(4): 830-46.

--. 2001. "Measuring Equilibrium Real Interest Rates: What Can We Learn from Yields on Indexed Bonds?" Unpublished paper. Washington: Board of Governors of the Federal Reserve System (July).

Brainard, William C. 1967. "Uncertainty and the Effectiveness of Policy." American Economic Review 57: 411-25.

Brainard, William C., and George L. Perry. 2000. "Making Policy in a Changing World." In Economic Events, Ideas, and Policies: The 1960s and After, edited by George L. Perry and James Tobin. Brookings.

Bryant, Ralph C., Peter Hooper, and Catherine Mann, eds. 1993. Evaluating Policy Regimes: New Research in Empirical Macroeconomics. Brookings.

Burns, Arthur. 1979. "The Anguish of Central Banking." Per Jacobsson Lecture, Belgrade, Yugoslavia, September 30.

Cassel, Gustav. 1928. "The Rate of Interest, the Bank Rate, and the Stabilization of Prices." Quarterly Journal of Economics 42(4): 511-29.

Christiano, Lawrence J., and Terry J. Fitzgerald. Forthcoming. "The Band Pass Filter." International Economic Review.

Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. "The Science of Monetary Policy." Journal of Economic Literature 37(4): 1661-1707.

Congressional Budget Office. 2001. "CBO's Method for Estimating Potential Output: An Update." Washington: Government Printing Office (August).

--. 2002. "The Budget and Economic Outlook: An Update." Washington: Government Printing Office (August).

Croushore, Dean. 1993. "Introducing: The Survey of Professional Forecasters." Federal Reserve Bank of Philadelphia Business Review November/December, pp. 3-13.

Croushore, Dean, and Tom Stark. 2001. "A Real-Time Data Set for Macroeconomists." Journal of Econometrics 105(1): 111-30.

Ehrmann, Michael, and Frank Smets. Forthcoming. "Uncertain Potential Output: Implications for Monetary Policy." Journal of Economic Dynamics and Control.

Estrella, Arturo, and Jeffrey C. Fuhrer. Forthcoming. "Dynamic Inconsistencies: Counterfactual Implications of a Class of Rational Expectations Models." American Economic Review.

Friedman, Milton. 1968. "The Role of Monetary Policy." American Economic Association Papers and Proceedings 58(1): 1-17.

Fuhrer, Jeffrey C., and George R. Moore. 1995a. "Inflation Persistence." Quarterly Journal of Economics 110(1): 127-59.

--. 1995b. "Forward-Looking Behavior and the Stability of a Conventional Monetary Policy Rule." Journal of Money, Credit and Banking 27(4, part 1): 1060-70.

Goodfriend, Marvin, and Robert G. King. 1997. "The New Neoclassical Synthesis and the Role of Monetary Policy." In NBER Macroeconomics Annual, edited by Ben S. Bernanke and Julio J. Rotemberg. MIT Press.

Gordon, Robert J. 1998. "Foundations of the Goldilocks Economy: Supply Shocks and the Time-Varying NAIRU." BPEA, 2:1998, 297-333.

--. 2002. Macroeconomics, 8th ed. Boston: Addison-Wesley Higher Education.

Greenspan, Alan. 2000. "Technology and the Economy." Remarks before the Economic Club of New York, New York, January 13.

--. 2002. "Economic Volatility." Remarks at a symposium sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, Wyoming, August 30. www.federalreserve.gov/boarddocs/speeches/2002/20020830/default.htm.

Hall, Robert E. 1970. "Why Is the Unemployment Rate So High at Full Employment?" BPEA, 3:1970, 369-402.

Hansen, Lars Peter, and Thomas J. Sargent. 2002. "Robust Control and Model Uncertainty in Macroeconomics." Unpublished paper. Stanford University (September).

Heller, Walter W. 1966. New Dimensions of Political Economy. Harvard University Press.

Hodrick, Robert J., and Edward L. Prescott. 1997. "Post-war Business Cycles: An Empirical Investigation." Journal of Money, Credit and Banking 29(1): 1-16.

Jones, Sidney L. 2000. Public & Private Economic Adviser: Paul W. McCracken. Lanham, Md.: University Press of America.

Judd, John P., and Brian Motley. 1992. "Controlling Inflation with an Interest Rate Instrument." Federal Reserve Bank of San Francisco Economic Review 3: 3-22.

Katz, Lawrence F., and Alan B. Krueger. 1999. "The High-Pressure U.S. Labor Market of the 1990s." BPEA, 1:1999, 1-65.

Lansing, Kevin J. 2002. "Real-Time Estimation of Trend Output and the Illusion of Interest Rate Smoothing." Federal Reserve Bank of San Francisco Economic Review, pp. 17-34.

Laubach, Thomas. 2001. "Measuring the NAIRU: Evidence from Seven Economies." Review of Economics and Statistics 83(2): 218-31.

Laubach, Thomas, and John C. Williams. Forthcoming. "Measuring the Natural Rate of Interest." Review of Economics and Statistics.

Leitemo, Kai, and Ingunn Lønning. 2002. "Simple Monetary Policymaking without the Output Gap." Unpublished paper. Oslo: Norges Bank (October).

Levin, Andrew, Volker Wieland, and John Williams. 1999. "Robustness of Simple Monetary Policy Rules under Model Uncertainty." In Monetary Policy Rules, edited by John B. Taylor. University of Chicago Press.

--. Forthcoming. "The Performance of Forecast-Based Policy Rules under Model Uncertainty." American Economic Review.

Levin, Andrew, and John C. Williams. 2002. "Robust Monetary Policy with Competing Reference Models." Unpublished paper. San Francisco: Federal Reserve Bank of San Francisco.

McCallum, Bennett T. 1988. "Robustness Properties of a Rule for Monetary Policy." Carnegie-Rochester Conference Series on Public Policy 29(Autumn): 173-203.

--. 2001. "Should Monetary Policy Respond Strongly to Output Gaps?" American Economic Review 91(2): 258-62.

McCallum, Bennett T., and Edward Nelson. 1999. "Performance of Operational Policy Rules in an Estimated Semiclassical Structural Model." In Monetary Policy Rules, edited by John B. Taylor. University of Chicago Press.

Meyer, Laurence. 2000. "The New Economy Meets Supply and Demand." Remarks before the Boston Economics Club, June 6.

Meyer, Laurence, Eric Swanson, and Volker Wieland. 2001. "NAIRU Uncertainty and Nonlinear Policy Rules." Washington: Board of Governors of the Federal Reserve (January).

Modigliani, Franco, and Lucas Papademos. 1975. "Targets for Monetary Policy in the Coming Year." BPEA, 1:1975, 141-63.

Neiss, Katharine S., and Edward Nelson. 2001. "The Real Interest Rate Gap as an Inflation Indicator." Working Paper 130. London: Bank of England (April).

Nelson, Edward, and Kalin Nikolov. 2001. "UK Inflation in the 1970s and 1980s: The Role of Output Gap Mismeasurement." Bank of England Working Paper 148 and CEPR Discussion Paper 2999. London: Bank of England and Centre for Economic Policy Research.

--. 2002. "Monetary Policy and Stagflation in the U.K." Bank of England Working Paper 155. London: Bank of England.

Okun, Arthur. 1962. "Potential GNP: Its Measurement and Significance." In American Statistical Association 1962 Proceedings of the Business and Economic Section. Washington: American Statistical Association.

--. 1970. The Political Economy of Prosperity. Brookings.

Okun, Arthur M., and Nancy H. Teeters. 1970. "The Full Employment Surplus Revisited." BPEA, 1:1970, 77-110.

Orphanides, Athanasios. 1998. "Monetary Policy Evaluation with Noisy Information." Finance and Economics Discussion Series 1998-50. Washington: Board of Governors of the Federal Reserve System (October).

--. 2000a. "Activist Stabilization Policy and Inflation: The Taylor Rule in the 1970s." Finance and Economics Discussion Series 2000-13. Washington: Board of Governors of the Federal Reserve System (February).

--. 2000b. "The Quest for Prosperity without Inflation." Working Paper 15. Frankfurt: European Central Bank (March).

--. 2001. "Monetary Policy Rules Based on Real-Time Data." American Economic Review 91(4): 964-85.

--. 2002a. "Monetary Policy Rules and the Great Inflation." American Economic Review 92(2): 115-20.

--. 2002b. "Historical Monetary Policy Analysis and the Taylor Rule." Unpublished paper. Washington: Board of Governors of the Federal Reserve System (November).

Orphanides, Athanasios, and Simon van Norden. 2001. "The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time." Unpublished paper. Washington: Board of Governors of the Federal Reserve System (September).

--. 2002. "The Unreliability of Output Gap Estimates in Real Time." Review of Economics and Statistics 84(4): 569-83.

Orphanides, Athanasios, and Volker Wieland. 1998. "Price Stability and Monetary Policy Effectiveness When Nominal Interest Rates Are Bounded at Zero." Finance and Economics Discussion Series Working Paper 1998-35. Washington: Board of Governors of the Federal Reserve System (June).

Orphanides, Athanasios, and John C. Williams. 2002. "Imperfect Knowledge, Inflation Expectations and Monetary Policy." Finance and Economics Discussion Series 2002-27. Washington: Board of Governors of the Federal Reserve System (June).

Orphanides, Athanasios, and others. 2000. "Errors in the Measurement of the Output Gap and the Design of Monetary Policy." Journal of Economics and Business 52(1-2): 117-41.

Perry, George L. 1970. "Changing Labor Markets and Inflation." BPEA, 3:1970, 411-48.

Phillips, A. W. 1954. "Stabilisation Policy in a Closed Economy." Economic Journal 64(254): 290-323.

Poole, William. 1971. "Alternative Paths to a Stable Full Employment Economy." BPEA, 3:1971, 579-606.

Reifschneider, David, and John Williams. 2000. "Three Lessons for Monetary Policy in a Low Inflation Era." Journal of Money, Credit and Banking 32(4): 936-66.

Roberts, John M. 1997. "Is Inflation Sticky?" Journal of Monetary Economics 39(2): 173-96.

--. 2001. "How Well Does the New Keynesian Sticky-Price Model Fit the Data?" Finance and Economics Discussion Series 2001-13. Board of Governors of the Federal Reserve System (February).

Romer, Christina D., and David H. Romer. 2000. "Federal Reserve Information and the Behavior of Interest Rates." American Economic Review 90(3): 429-57.

Rotemberg, Julio J. 1999. "A Heuristic Method for Extracting Smooth Trends from Economic Time Series." Working Paper 7439. Cambridge, Mass.: National Bureau of Economic Research (December).

Rotemberg, Julio J., and Michael Woodford. 1999. "Interest Rate Rules in an Estimated Sticky Price Model." In Monetary Policy Rules, edited by John B. Taylor. University of Chicago Press.

Rudebusch, Glenn D. 2001. "Is the Fed Too Timid? Monetary Policy in an Uncertain World." Review of Economics and Statistics 83(2): 203-17.

--. 2002. "Assessing Nominal Income Rules for Monetary Policy with Model and Data Uncertainty." Economic Journal 112(479): 402-32.

Rudebusch, Glenn, and Lars E. O. Svensson. 1999. "Policy Rules for Inflation Targeting." In Monetary Policy Rules, edited by John B. Taylor. University of Chicago Press.

Sack, Brian, and Volker Wieland. 2000. "Interest-Rate Smoothing and Optimal Monetary Policy: A Review of Recent Empirical Evidence." Journal of Economics and Business 52(1-2): 205-28.

Sargent, Thomas J. 1971. "A Note on the `Accelerationist' Controversy." Journal of Money, Credit and Banking 3(3): 721-25.

Shimer, Robert. 1998. "Why Is the Unemployment Rate So Much Lower?" In NBER Macroeconomics Annual, edited by Ben S. Bernanke and Julio J. Rotemberg. MIT Press.

Smets, Frank. 2000. "What Horizon for Price Stability?" Working Paper 24. Frankfurt: European Central Bank.

--. 2002. "Output Gap Uncertainty: Does It Matter for the Taylor Rule?" Empirical Economics 22(1): 113-29.

St. Amant, Pierre, and Simon van Norden. 1997. "Measurement of the Output Gap: A Discussion of Recent Research at the Bank of Canada." Technical Report 79. Ottawa: Bank of Canada.

Staiger, Douglas, James H. Stock, and Mark W. Watson. 1997a. "How Precise Are Estimates of the Natural Rate of Unemployment?" In Reducing Inflation: Motivation and Strategy, edited by Christina D. Romer and David H. Romer. University of Chicago Press.

--. 1997b. "The NAIRU, Unemployment, and Monetary Policy." Journal of Economic Perspectives 11(1): 33-49.

--. 2002. "Prices, Wages, and the U.S. NAIRU in the 1990s." In The Roaring Nineties: Can Full Employment be Sustained? edited by Alan B. Krueger and Robert M. Solow. New York: Russell Sage Foundation.

Stein, Herbert. 1984. Presidential Economics: The Making of Policy from Roosevelt to Reagan and Beyond. New York: Simon and Schuster.

Stock, James H., and Mark W. Watson. 1998. "Median Unbiased Estimation of Coefficient Variance in a Time-Varying Parameter Model." Journal of the American Statistical Association 93(441): 349-58.

--. 1999. "Forecasting Inflation." Journal of Monetary Economics 44(2): 293-335.

Svensson, Lars E. O., and Michael Woodford. Forthcoming. "Indicator Variables for Optimal Policy." Journal of Monetary Economics.

Swanson, Eric T. 2000. "On Signal Extraction and Non-Certainty-Equivalence in Optimal Monetary Policy Rules." Finance and Economics Discussion Series 2000-32. Washington: Board of Governors of the Federal Reserve System (June).

Taylor, John B. 1993. "Discretion versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy 39: 195-214.

--, ed. 1999a. Monetary Policy Rules. University of Chicago Press.

--. 1999b. "The Robustness and Efficiency of Monetary Policy Rules as Guidelines for Interest Rate Setting by the European Central Bank." Journal of Monetary Economics 43(3): 655-79.

Taylor, John B., and Michael Woodford, eds. 1999. Handbook of Macroeconomics. Amsterdam: North Holland.

van Norden, Simon. 2002. "Filtering for Current Analysis." Working paper. Ottawa: Bank of Canada.

Walsh, Carl E. Forthcoming. "Speed Limit Policies: The Output Gap and Optimal Monetary Policy." American Economic Review.

Wicksell, Knut. 1898. Interest and Prices. (Translated by R. F. Kahn, London: Macmillan, 1936).

Wieland, Volker. 1998. "Monetary Policy and Uncertainty about the Natural Unemployment Rate." Finance and Economics Discussion Series 98-22. Washington: Board of Governors of the Federal Reserve System (May).

Williams, John C. 1999. "Simple Rules for Monetary Policy." Finance and Economics Discussion Series 99-12. Washington: Board of Governors of the Federal Reserve System (February).

Williams, John H. 1931. "The Monetary Doctrines of J. M. Keynes." Quarterly Journal of Economics 45(4): 547-87.

Woodford, Michael. 1999. "Optimal Monetary Policy Inertia." Working Paper 7261. Cambridge, Mass.: National Bureau of Economic Research (July).

--. Forthcoming. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press.

Zarnowitz, Victor, and Phillip A. Braun. 1993. "Twenty-Two Years of the NBER-ASA Quarterly Economic Outlook Surveys: Aspects and Comparisons of Forecasting Performance." In Business Cycles, Indicators, and Forecasting, edited by James H. Stock and Mark W. Watson. University of Chicago Press.

ATHANASIOS ORPHANIDES Board of Governors of the Federal Reserve System

JOHN C. WILLIAMS Federal Reserve Bank of San Francisco