
Article Information

  • Title: Risk management for monetary policy near the zero lower bound.
  • Authors: Evans, Charles; Fisher, Jonas; Gourio, Francois; Krane, Spencer
  • Journal: Brookings Papers on Economic Activity
  • Print ISSN: 0007-2303
  • Year: 2015
  • Issue: March
  • Language: English
  • Publisher: Brookings Institution
  • Keywords: Monetary policy; Risk management

Risk management for monetary policy near the zero lower bound.


Evans, Charles; Fisher, Jonas; Gourio, Francois; Krane, Spencer


III.B. Policy Rule Findings

Table 8 shows our policy rule estimates with and without the various FOMC-based variables. Tables 9 and 10 show estimates with and without the quarterly variance and skewness proxies. Except for the human-coded variables hUnc and hIns, prior to estimation the risk proxies have been normalized to have mean zero and unit standard deviation, so their coefficients indicate percentage-point responses of the funds rate to one-standard-deviation changes in the proxies. The tables have the same layout: the first column shows the policy rule excluding any risk proxies, and the other columns show the policy rules after adding the indicated risk proxy. The coefficient associated with a given risk proxy corresponds to an estimate of p in equation 13. The speed of adjustment to the notional funds rate target ($\sum_{j=1}^{N} a_j$) and the coefficients on the forecasts of inflation ($\beta$) and the output gap ($\gamma$) are similar across specifications and consistent with estimated forecast-based policy rules in the literature.
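To make the structure of these regressions concrete, the sketch below lays out a forecast-based policy rule with interest rate smoothing and one standardized risk proxy, in the spirit of equation 13. It is illustrative only: the column names (ffr, inf_fcst, gap_fcst, risk_proxy) and the two-lag smoothing structure are assumptions, not the paper's data or exact specification.

```python
# Illustrative sketch (not the authors' code or data): a forecast-based policy
# rule with interest rate smoothing and one standardized risk proxy.
import pandas as pd
import statsmodels.api as sm

def estimate_policy_rule(df: pd.DataFrame, n_lags: int = 2):
    """OLS of the funds rate on its own lags, staff forecasts, and a risk proxy."""
    data = df.copy()
    # Standardize the proxy so its coefficient reads as the percentage-point
    # response of the funds rate to a one-standard-deviation change.
    data["risk_std"] = (data["risk_proxy"] - data["risk_proxy"].mean()) / data["risk_proxy"].std()
    for j in range(1, n_lags + 1):
        data[f"ffr_lag{j}"] = data["ffr"].shift(j)
    data = data.dropna()
    regressors = [f"ffr_lag{j}" for j in range(1, n_lags + 1)] + ["inf_fcst", "gap_fcst", "risk_std"]
    X = sm.add_constant(data[regressors])
    # Heteroskedasticity-robust standard errors, as used for the meeting-frequency rules.
    return sm.OLS(data["ffr"], X).fit(cov_type="HC1")

def long_run_risk_response(res, n_lags: int = 2) -> float:
    """Long-run (notional target) response: short-run coefficient scaled by 1 minus the smoothing sum."""
    smoothing = sum(res.params[f"ffr_lag{j}"] for j in range(1, n_lags + 1))
    return res.params["risk_std"] / (1.0 - smoothing)
```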

From table 8 we see that the coefficient on the human coding of uncertainty (hUnc) is statistically significant at the 5 percent level, indicating that when uncertainty has shaded the policy decision above or below the forecast-only prescription, it has moved the notional target by 40 basis points. With interest rate smoothing, the immediate impact is much smaller; the 95 percent confidence interval is 2 to 14 basis points. The machine coding of uncertainty (mUnc) is significant at the 10 percent level but the effect is small. The insurance indicators (hIns and mIns) are not significant, but the point estimate of the hIns coefficient is similar to its uncertainty counterpart. The coefficient on the output gap forecast revision variable (frGap) is large and significant, indicating a one-standard-deviation positive surprise in the forecast raises the notional target by 47 basis points over and above the impact this surprise has on the forecast itself. (62) In contrast, revisions to the inflation outlook (frInf) do not influence policy beyond their direct effect on the forecast.
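A minimal arithmetic sketch of the partial-adjustment mapping described above; the smoothing sum used here is an assumed value chosen for illustration, not the paper's estimate.

```python
# Hypothetical illustration: the immediate (single-meeting) impact of a risk
# proxy equals its long-run effect on the notional target scaled by one minus
# the sum of the interest-rate-smoothing coefficients.
long_run_bp = 40.0     # long-run effect on the notional target, in basis points
smoothing_sum = 0.85   # assumed sum of the a_j coefficients (illustrative, not estimated)
immediate_bp = long_run_bp * (1.0 - smoothing_sum)
print(f"Immediate impact: {immediate_bp:.0f} basis points")  # 6 bp, inside the reported 2-14 bp interval
```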

Table 9 shows clear evidence that variance in the economic outlook has shaded policy away from the forecast-only prescription. The coefficients on VXO and JLN are both statistically and economically significant, with one-standard-deviation increases lowering the notional target funds rate by 43 and 29 basis points, respectively. (63) Disagreement over the GDP forecast (DvGDP) has a significant coefficient, which is similar to the ones for VXO and JLN, suggesting that the latter variables' correlation with monetary policy reflects uncertainty in the growth outlook. That all these coefficients are negative suggests that higher uncertainty about growth has influenced the FOMC when it was concerned about recessionary dynamics and lowered the funds rate more than prescribed by the forecast alone. The only other significant coefficient in table 9 corresponds to the measure of individual forecasters' views about the uncertainty in their inflation forecasts (vInf). In this case uncertainty shades the policy higher, by about 20 basis points. This suggests that higher uncertainty about the inflation forecast has influenced the FOMC when it was concerned about inflation rising above desired levels and raised rates above levels prescribed by the baseline forecast.

Similarly strong evidence that skewness has mattered for policy decisions is found in table 10. The coefficients are significant on the interest-rate-spread indicator of downside risks to activity (SPD), skewness in the outlook for inflation measured from forecasters' own forecast distributions (sInf), and skewness in the inflation outlook measured across point forecasts (DsInf). An increase in perceived downside risks to activity lowers the funds rate, while an increase in perceived upside risks to inflation raises it. The effects seem large; increases in the skewness proxies change the notional target by -56, 23, and 40 basis points, respectively.

These findings reinforce our results on the variance proxies and, similarly, seem consistent with our reading of FOMC communications. The point estimates for skewness in the GDP outlook (sGDP and DsGDP) have surprisingly negative signs. However, these coefficients are relatively small and insignificant.

Taken together, these results indicate that risk management concerns, broadly conceived, have had a statistically and economically significant impact on policy decisions over and above how those concerns are reflected in point forecasts. The effects we find suggest that the FOMC acted aggressively to offset concerns about declining growth or rising inflation. We conclude from this econometric analysis that risk management does not just appear in the words of the FOMC--it is reflected in the FOMC's deeds as well.

IV. Conclusion

We have focused on risk surrounding the forecast as a relevant consideration for monetary policy near the ZLB, but other issues are relevant to the liftoff calculus as well. In particular, policymakers may face large reputational costs of reversing a decision. Empirically, it is well known that central banks tend to go through "tightening" and "easing" cycles, which in turn induce substantial persistence in the short-term interest rate. Uncertainty over the outlook may be one reason for this persistence. But another reason why policymakers might be reluctant to reverse course is that doing so would damage their reputation, perhaps because the public would lose confidence in the central bank's ability to understand and stabilize the economy. With high uncertainty, this reputational element would lead to more caution. In the case of liftoff, it argues for a longer delay in raising rates to avoid the reputational costs of reverting to the ZLB.

Another reputational concern is the signal the public might infer about the central bank's commitment to its stated policy goals. With regard to liftoff, suppose it occurred with output or inflation still far below target. Large gaps on their own pose no threat to the central bank's credibility if the public is confident that the economy is on a path to achieve its objectives in a reasonable period and that the central bank is willing to accommodate this path. However, if there is uncertainty over the strength of the economy, early liftoff might be construed as a less-than-enthusiastic endorsement of the central bank's ultimate policy objectives. Motivated by the current situation, we have focused in the paper on the case of a central bank that is undershooting its inflation target, but similar issues would arise if risk management considerations dictated an aggressive tightening to guard against inflation and the central bank failed to act accordingly. In a wide class of models, such losses of credibility can have deleterious consequences for achieving the central bank's objectives.

ACKNOWLEDGMENTS We thank numerous seminar participants and Gadi Barlevy, Jeffrey Campbell, Stefania D'Amico, Alan Greenspan, Alejandro Justiniano, John Leahy, Sydney Ludvigson, Leonardo Melosi, Taisuke Nakata, Serena Ng, Valerie Ramey, David Reifschneider, Glenn Rudebusch, Paolo Surico, Francois Velde, and Johannes Wieland for their help and comments; Theodore Bogusz, David Kelley and Trevor Serrao for superb research assistance; and the volume editors. We also thank Michael McMahon for providing us with machine-readable FOMC minutes and transcripts and Thomas Stark for help with the Philadelphia Fed's real-time data. The views expressed herein are those of the authors and do not necessarily represent the views of the Federal Open Market Committee or the Federal Reserve System.

CHARLES EVANS

Federal Reserve Bank of Chicago

JONAS FISHER

Federal Reserve Bank of Chicago

FRANCOIS GOURIO

Federal Reserve Bank of Chicago

SPENCER KRANE

Federal Reserve Bank of Chicago

References

Adam, Klaus and Roberto M. Billi. 2007. "Discretionary Monetary Policy and the Zero Lower Bound on Nominal Interest Rates." Journal of Monetary Economics 54, no. 3: 728-52.

Alcidi, Cinzia, Alessandro Flamini, and Andrea Fracasso. 2011. "Policy Regime Changes, Judgment and Taylor Rules in the Greenspan Era." Economica 78: 89-107.

Andrade, Philippe, Eric Ghysels, and Julien Idier. 2013. "Tails of Inflation Forecasts and Tales of Monetary Policy." Kenan-Flagler Research Paper No. 2013-17, University of North Carolina.

Baker, Scott R., Nicholas Bloom, and Steven J. Davis. 2015. "Measuring Economic Policy Uncertainty." Working Paper, http://www.policyuncertainty.com/ media/BakerBloomDavis.pdf

Barlevy, Gadi. 2011. "Robustness and Macroeconomic Policy." Annual Review of Economics 3: 1-24.

Barsky, Robert, Alejandro Justiniano, and Leonardo Melosi. 2014. "The Natural Rate and Its Usefulness for Monetary Policy Making." American Economic Review 104, no. 4: 37-43.

Basu, Susanto, and Brent Bundick. 2014. "Downside Risk at the Zero Lower Bound." Working Paper. http://www.cla.auburn.edu/economics/assets/File/ BasuBundickDownsideRisk.pdf

Bekaert, Geert, Marie Hoerova, and Marco Lo Duca. 2013. "Risk, Uncertainty and Monetary Policy." Journal of Monetary Economics 60: 771-88.

Bernanke, Ben S. 2012. "Monetary Policy since the Onset of the Crisis." In The Changing Policy Landscape, Economic Policy Symposium, pp. 1-22. Kansas City: Federal Reserve Bank of Kansas City.

Bloom, Nicholas. 2009. "The Effect of Uncertainty Shocks." Econometrica 77, no. 3: 623-85.

Bomfim, A., and L. Meyer. 2010. "Quantifying the Effects of Fed Asset Purchases on Treasury Yields." Monetary Policy Insights: Fixed Income Focus. Macroeconomic Advisers (online).

Born, Benjamin, and Johannes Pfeifer. 2014. "Policy Risk and the Business Cycle." Journal of Monetary Economics 68: 68-85.

Brainard, William. 1967. "Uncertainty and the Effectiveness of Policy." American Economic Review 57, no. 2: 411-25.

Campbell, Jeffrey R., Charles. L. Evans, Jonas D. Fisher, and Alejandro Justiniano. 2012. "Macroeconomic Effects of Federal Reserve Forward Guidance." Brookings Papers on Economic Activity, Spring, 1-54.

Castelnuovo, Efrem. 2003. "Taylor Rules, Omitted Variables, and Interest Rate Smoothing in the U.S." Economics Letters 81, no. 1: 55-59.

Chen, Han, Vasco Curdia, and Andrea Ferrero. 2012. "The Macroeconomic Effects of Large Scale Asset Purchase Programmes." Economic Journal 122, no. 564: 289-315.

Chevapatrakul, T., T. Kim, and P. Mizen. 2009. "The Taylor Principle and Monetary Policy Approaching a Zero Bound on Nominal Rates: Quantile Regression Results for the United States and Japan." Journal of Money, Credit and Banking 41, no. 8: 1705-23.

Christiano, Lawrence, Martin Eichenbaum, and Charles Evans. 2005. "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy." Journal of Political Economy 113, no. 1: 1-45.

Clarida, Richard, Jordi Gali, and Mark Gertler. 2000. "Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory." Quarterly Journal of Economics 115, no. 1: 147-80.

Coibion, Olivier, Yuriy Gorodnichenko, and Johannes Wieland. 2012. "The Optimal Inflation Rate in New Keynesian Models: Should Central Banks Raise Their Inflation Targets in Light of the Zero Lower Bound?" Review of Economic Studies 79, no. 4: 1371-1406.

Curdia, Vasco, Andrea Ferrero, Ging Cee Ng, and Andrea Tambalotti. 2015. "Has U.S. Monetary Policy Tracked the Efficient Interest Rate?" Journal of Monetary Economics 70 (March): 72-83.

D'Amico, Stefania, and Thomas King. 2013. "Flow and Stock Effects of Large-Scale Treasury Purchases: Evidence on the Importance of Local Supply." Journal of Financial Economics 108, no. 2: 425-448.

--. 2015. "Policy Expectations, Term Premia, and Macroeconomic Performance." Unpublished manuscript. Chicago: Federal Reserve Bank of Chicago.

D'Amico, Stefania, and Athanasios Orphanides. 2014. "Inflation Uncertainty and Disagreement in Bond Risk Premia." Working Paper 2014-24. Chicago: Federal Reserve Bank of Chicago.

Dolado, Juan J., P. Ramon Maria-Dolores, and Manuel Naveira. 2005. "Are Monetary Policy Reaction Functions Asymmetric? The Role of Nonlinearity in the Phillips Curve." European Economic Review 49, no. 2: 485-503.

Dolado, Juan J., P. Ramon Maria-Dolores, and Francisco J. Ruge-Murcia. 2004. "Nonlinear Monetary Policy Rules: Some New Evidence for the U.S." Studies in Nonlinear Dynamics and Econometrics 8, no. 3.

Eggertsson, Gauti B., and Michael Woodford. 2003. "The Zero Bound on Interest Rates and Optimal Monetary Policy." Brookings Papers on Economic Activity 2003, no. 1: 139-211.

Eichenbaum, Martin, and Jonas D. Fisher. 2007. "Estimating the Frequency of Price Re-Optimization in Calvo-Style Models." Journal of Monetary Economics 54, no. 7: 2032-47.

Engen, Eric M., Thomas T. Laubach, and David Reifschneider. 2015. "The Macroeconomic Effects of the Federal Reserve's Unconventional Monetary Policy." Finance and Economics Discussion no. 2012-5. Washington: Board of Governors of the Federal Reserve System.

English, William B., J. David Lopez-Salido, and Robert Tetlow. 2013. "The Federal Reserve's Framework for Monetary Policy--Recent Changes and New Questions." Finance and Economics Discussion no. 2013-76. Washington: Board of Governors of the Federal Reserve System.

Evans, Charles L. 2014. "Patience Is a Virtue When Normalizing Monetary Policy." Speech presented at the Conference on Labor Market Slack, Peterson Institute for International Economics, Washington, September 24.

Fernandez-Villaverde, Jesus, Pablo Guerron-Quintana, Keith Kuester, and Juan Rubio-Ramirez. 2012. "Fiscal Volatility Shocks and Economic Activity." Working Paper no. 11-32/R. Philadelphia: Federal Reserve Bank of Philadelphia.

Friedman, Benjamin M. 1975. Economic Stabilization Policy: Methods in Optimization. Amsterdam and New York: North-Holland Publishing Company and American Elsevier Publishing Company.

Fuhrer, Jeffrey C. 2000. "Habit Formation in Consumption and Its Implications for Monetary-Policy Models." American Economic Review 90, no. 3: 367-90.

Gagnon, J., M. Raskin, J. Remache, and B. Sack. 2010. "Large-Scale Asset Purchases by the Federal Reserve: Did They Work?" Staff Report no. 441. New York: Federal Reserve Bank of New York.

Gali, Jordi. 2008. An Introduction to the New Keynesian Framework. Princeton University Press.

Gali, Jordi, and Mark Gertler. 1999. "Inflation Dynamics: A Structural Econometric Analysis." Journal of Monetary Economics 44, no. 2: 192-222.

Gerlach-Kristen, Petra. 2004. "Interest Rate Smoothing: Monetary Policy Inertia or Unobserved Variables?" Contributions to Macroeconomics 4, no. 1.

Gilchrist, Simon, and Egon Zakrajsek. 2012. "Credit Spreads and Business Cycle Fluctuations." American Economic Review 102, no. 4: 1692-1720.

Gnabo, Jean-Yves, and Diego N. Moccero. 2015. "Risk Management, Nonlinearity and Aggressiveness in Monetary Policy: The Case of the U.S." Journal of Banking & Finance 55: 281-94.

Goodfriend, Marvin. 1991. "Interest Rates and the Conduct of Monetary Policy." Carnegie-Rochester Conference Series on Public Policy no. 34: 7-30.

Greenspan, Alan. 2004. "Risk and Uncertainty in Monetary Policy." American Economic Review 94, no. 2: 33-40.

Hamilton, James D., Ethan S. Harris, Jan Hatzius, and Kenneth D. West. 2015. "The Equilibrium Real Funds Rate: Past, Present and Future." Unpublished manuscript, University of California--San Diego.

Hamilton, James, and Jing Wu. 2010. "The Effectiveness of Alternative Monetary Policy Tools in a Zero Lower Bound Environment." Unpublished manuscript, University of California--San Diego.

Hansen, Lars, and Thomas Sargent. 2008. Robustness. Princeton University Press.

Jurado, Kyle, Sydney Ludvigson, and Serena Ng. 2015. "Measuring Uncertainty." American Economic Review 105, no. 3: 1177-1216.

Kiesel, Konstantin, and Maik H. Wolters. 2014. "Estimating Monetary Policy Rules When the Zero Lower Bound on Nominal Interest Rates Is Approached." Kiel Working Paper no. 1898, Kiel Institute for the World Economy.

Kiley, Michael T. 2012. "The Aggregate Demand Effects of Short- and Long-Term Interest Rates." Finance and Economics Discussion no. 2012-54. Washington: Board of Governors of the Federal Reserve System.

Kilian, Lutz, and Simone Manganelli. 2008. "The Central Banker as a Risk Manager: Estimating the Federal Reserve's Preferences under Greenspan." Journal of Money, Credit and Banking 40, no. 6: 1103-29.

Krishnamurthy, Arvind, and Annette Vissing-Jorgensen. 2013. "The Ins and Outs of LSAPs." Proceedings of the Economic Policy Symposium--Jackson Hole. Federal Reserve Bank of Kansas City.

Krugman, Paul R. 1998. "It's Baaack: Japan's Slump and the Return of the Liquidity Trap." Brookings Papers on Economic Activity, no. 2: 137-87.

Laubach, Thomas, and John C. Williams. 2003. "Measuring the Natural Rate of Interest." Review of Economics and Statistics 85, no. 4: 1063-70.

Laxton, Douglas, David Rose, and Demosthenes Tambakis. 1999. "The U.S. Phillips Curve: The Case for Asymmetry." Journal of Economic Dynamics & Control 23, nos. 9, 10: 1459-85.

Mas-Colell, Andreu, Michael D. Whinston, and Jerry R. Green. 1995. Microeconomic Theory. Oxford University Press.

Mumtaz, Haroon, and Paolo Surico. 2015. "The Transmission Mechanism in Good and Bad Times." International Economic Review, forthcoming.

Nakata, Taisuke. 2013a. "Optimal Fiscal and Monetary Policy with Occasionally Binding Zero Bound Constraints." Finance and Economics Discussion no. 2013-40. Washington: Board of Governors of the Federal Reserve System.

--. 2013b. "Uncertainty at the Zero Lower Bound." Finance and Economics Discussion no. 2013-09. Washington: Board of Governors of the Federal Reserve System.

Nakata, Taisuke, and Sebastian Schmidt. 2014. "Conservatism and Liquidity Traps." Finance and Economics Discussion no. 2014-105. Washington: Board of Governors of the Federal Reserve System.

Nakov, Anton A. 2008. "Optimal and Simple Monetary Policy Rules with Zero Floor on the Nominal Interest Rate." International Journal of Central Banking 4, no. 2: 73-127.

Orphanides, Athanasios, and John C. Williams. 2002. "Robust Monetary Policy Rules with Unknown Natural Rates." Brookings Papers on Economic Activity, Fall: 63-145.

Reifschneider, David, and John C. Williams. 2000. "Three Lessons for Monetary Policy in a Low-Inflation Era." Journal of Money, Credit, and Banking 32, no. 4: 936-66.

Romer, Christina D., and David H. Romer. 1989. "Does Monetary Policy Matter? A New Test in the Spirit of Friedman and Schwartz." In NBER Macroeconomics Annual 1989, vol. 4, ed. by O. Blanchard and S. Fischer.

Rudebusch, Glenn D. 2002. "Term Structure Evidence on Interest Rate Smoothing and Monetary Policy Inertia." Journal of Monetary Economics 49, no. 6: 1161-87.

Rudebusch, Glenn, and Lars E. Svensson. 1999. "Policy Rules for Inflation Targeting." In Monetary Policy Rules, ed. by J. B. Taylor. University of Chicago Press.

Sack, Brian. 2000. "Does the Fed Act Gradually? A VAR Analysis." Journal of Monetary Economics 46, no. 1: 229-56.

Smets, Frank, and Rafael Wouters. 2007. "Shocks and Frictions in U.S. Business Cycles: A Bayesian DSGE Approach." American Economic Review 97, no. 3: 586-606.

Surico, Paolo. 2007. "The Fed's Monetary Policy Rule and U.S. Inflation: The Case of Asymmetric Preferences." Journal of Economic Dynamics & Control 31, no. 1: 305-24.

Svensson, Lars, and Michael Woodford. 2002. "Optimal Policy with Partial Information in a Forward-Looking Model: Certainty-Equivalence Redux." Working Paper, Columbia University, http://www.columbia.edu/~mw2230/ SWCE206.pdf

--. 2003. "Indicator Variables for Optimal Policy." Journal of Monetary Economics 50, no. 3: 691-720.

Taylor, John B. 1993. "Discretion versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy no. 39: 195-214.

Tenreyro, Silvana, and Gregory Thwaites. 2015. "Pushing on a String: US Monetary Policy Is Less Powerful in Recessions." Working Paper, London School of Economics and Political Science. http://personal.lse.ac.uk/tenreyro/TandT.pdf

Werning, Ivan. 2012. "Managing a Liquidity Trap: Monetary and Fiscal Policy." Unpublished manuscript, Massachusetts Institute of Technology.

Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press.

--. 2012. "Methods of Policy Accommodation at the Interest-Rate Lower Bound." In The Changing Policy Landscape, Economic Policy Symposium, pp. 185-288. Kansas City: Federal Reserve Bank of Kansas City.

(1.) In his speech at the Peterson Institute for International Economics, Evans (2014) discussed these issues at greater length.

(2.) From "Risk and Uncertainty in Monetary Policy," Chairman Greenspan's remarks at the Meeting of the American Economic Association, San Diego, California, January 3, 2004 (Greenspan 2004).

(3.) For example, while there is econometric evidence that changes in term premia influence activity and inflation, some studies find that the effects appear to be less powerful than comparably sized movements in the short-term policy rate; see D'Amico and King (2015), Kiley (2012), and Chen, Curdia, and Ferrero (2012).

(4.) Bomfim and Meyer (2010), D'Amico and King (2013), and Gagnon, Raskin, Remache, and Sack (2010) find noticeable effects of LSAPs on Treasury term premia, while Chen, Curdia, and Ferrero (2012) and Hamilton and Wu (2010) unearth only small effects. Krishnamurthy and Vissing-Jorgensen (2013) argue that LSAPs only had a substantial influence on private borrowing rates in the mortgage market. Engen, Laubach, and Reifschneider (2015) and Campbell and others (2012) analyze the interactions between LSAPs, forward guidance, and private sector expectations.

(5.) These costs are mitigated, however, by additional tools the Fed has introduced to exert control over interest rates when the time comes to exit the ZLB and by enhanced supervisory and regulatory efforts to monitor and address potential financial instability concerns. Furthermore, continued low rates of inflation and contained private-sector inflationary expectations have reduced concerns regarding an outbreak of inflation.

(6.) "Monetary Policy since the Onset of the Crisis," Remarks by Chairman Ben S. Bernanke at the Federal Reserve Bank of Kansas City Economic Symposium, Jackson Hole, Wyoming, August 31, 2012 (Bernanke 2012, p. 14).

(7.) Krishnamurthy and Vissing-Jorgensen (2013) argue successive LSAP programs have had a diminishing influence on term premia. Surveys conducted by Blue Chip and the Federal Reserve Bank of New York also indicate that market participants are less optimistic that further asset purchases would provide much stimulus if the Fed were forced to expand their use in light of unexpected economic weakness.

(8.) This framework can be derived from a micro-founded DSGE model (see for instance Woodford [2003], Chapter 6), but it has a longer history and is used even in models that are not fully micro-founded. The Federal Reserve Board staff routinely conducts optimal policy exercises in the FRB/US model; see for example English, Lopez-Salido, and Tetlow (2013).

(9.) Woodford (2003, p. 248) defines the natural rate as the equilibrium real rate of return in the case of fully flexible prices. As discussed by Barsky, Justiniano, and Melosi (2014), in medium-scale DSGE models with many shocks the appropriate definition of the natural rate is less clear.

(10.) There is ample evidence of considerable uncertainty regarding the natural rate. See for example Barsky, Justiniano, and Melosi (2014), Hamilton and others (2015), and Laubach and Williams (2003).

(11.) Uncertainty itself could give rise to $g_t$ shocks. A large amount of recent work, following Bloom (2009), suggests that private agents react to increases in economic uncertainty, leading to a decline in economic activity. One channel is that higher uncertainty may lead to precautionary savings, which in turn depresses demand, as is emphasized by Basu and Bundick (2014), Fernandez-Villaverde and others (2012), and Born and Pfeifer (2014).

(12.) Implicitly we are assuming the central bank does not have the ability to employ what Campbell and others (2012) call "Odyssean" forward guidance. However, our model is consistent with the central bank's using forward guidance in the "Delphic" sense they describe because agents anticipate how the central bank reacts to evolving economic conditions.

(13.) It is easy to verify that if the uncertainty about the natural rate is only at t = 0 the optimal policy would be to set the interest rate to the expected value of the natural rate, and the amount of uncertainty would have no effect. This is why our scenario has more than two periods.

(14.) This simple interest rate rule implements the equilibrium $\pi_t = x_t = 0$ but is also consistent with other equilibria. However, there are standard ways to rule out these other equilibria. See Gali (2008, pp. 76-77) for a discussion. Henceforth we will not consider this issue.

(15.) Recent statements of the certainty equivalence principle in models with forward-looking variables can be found in Svensson and Woodford (2002, 2003).

(16.) See Mas-Colell, Whinston, and Green (1995, Proposition 6.D.2, p. 199) for the relevant result regarding the effect of a mean-preserving spread on the expected value of concave functions of a random variable.

(17.) Finally, there is a case where the ZLB does not bind initially but does bind if uncertainty is higher. In this case, $x_0$ may be lower or higher with higher uncertainty, while $\pi_0$ is always smaller.

(18.) See also Nakata and Schmidt (2014) for a related analytical result in a model with two-state Markov shocks.

(19.) Indeed, private sector forecasters assign a significant likelihood to a return to the ZLB: respondents to the January 2015 Federal Reserve Bank of New York survey of Primary Dealers put the odds of returning to the ZLB within two years following liftoff at 20 percent.

(20.) Online appendixes to all papers in this volume may be found at the BPEA web page, www.brookings.edu/bpea, under "Past Editions."

(21.) Indeed, empirical studies based on medium-scale DSGE models, such as those considered by Christiano, Eichenbaum, and Evans (2005) and Smets and Wouters (2007), find that backward-looking elements are essential to account for the empirical dynamics. Backward-looking terms are important in single-equation estimation as well. See for example Fuhrer (2000), Gall and Gertler (1999), and Eichenbaum and Fisher (2007).

(22.) Relaxing it would only strengthen our results.

(23.) Another difference is that they study a medium-scale DSGE model with both forward- and backward-looking elements; because of this added complexity, they use a different solution method.

(24.) Note that it is not clear how to map estimates of the lagged inflation coefficient in the literature to our backward-looking model since these are based on Phillips curves with forward-looking terms.

(25.) In the online appendix we discuss the implications for our results of different values for the initial gaps, uncertainty, and the other model parameters.

(26.) One might be surprised that inflation is far below target under the naive policy even though the output gap is near the target. This reflects the fact that we plotted the modal outcome, rather than the mean, and that the distributions of inflation and output gap outcomes are skewed to the left.

(27.) For some calibrations, the outcomes under the Taylor rule can be so poor that liftoff is delayed and rates are below the optimal policy throughout the simulation period.

(28.) The suboptimality of the Taylor rule does not hold by definition, because it provides commitment, which may lead to more favorable outcomes.

(29.) The online appendix describes how we calculate the interest rate implied by the Taylor rule with our data.

(30.) We thank Johannes Wieland for suggesting that we assess the volatility of the nominal interest rate.

(31.) "Risk and Uncertainty in Monetary Policy," p. 36 (see note 2).

(32.) For an early contribution on the effects of asymmetric loss functions on stabilization policy see Friedman (1975).

(33.) The fact that a convex Phillips curve can lead to a role for risk management has been discussed by Laxton, Rose, and Tambakis (1999) and Dolado, Maria-Dolores, and Naveira (2005).

(34.) Available at http://www.federalreserve.gov/boarddocs/hh/1998/february/ReportSection1.htm

(35.) Available at http://www.federalreserve.gov/fomc/minutes/19970701.htm

(36.) "Coming Budgetary Challenges," Testimony of Chairman Alan Greenspan before the Committee on the Budget, U.S. House of Representatives, March 4, 1998. Available at http://www.federalreserve.gov/boarddocs/testimony/1998/19980304.htm

(37.) Minutes of the Federal Open Market Committee, September 29, 1998. Available at http://www.federalreserve.gov/fomc/minutes/19980929.htm

(38.) "The Federal Reserve's Semiannual Report on Monetary Policy," Testimony of Chairman Alan Greenspan before the Committee on Banking, Housing, and Urban Affairs, U.S. Senate, February 23, 1999. Available at http://www.federalreserve.gov/boarddocs/ hh/1999/february/testimony.htm

(39.) The FOMC had already invoked such arguments earlier in this cycle. As noted in the July 2000 Monetary Policy Report: "The FOMC considered larger policy moves at its first two meetings of 2000 but concluded that significant uncertainty about the outlook for the expansion of aggregate demand in relation to that of aggregate supply, including the timing and strength of the economy's response to earlier monetary policy tightenings, warranted a more limited policy action." (Monetary Policy Report forwarded to Congress on July 20, 2000, available at http://www.federalreserve.gov/boarddocs/hh/2000/July/ReportSection1.htm)

(40.) Minutes of the Federal Open Market Committee, June 27-28, 2000. http://www.federalreserve.gov/fomc/minutes/20000628.htm

(41.) At that meeting the Federal Reserve Board staff was forecasting that growth would stagnate in the first half of the year but that the economy would avoid an outright recession even with the funds rate at 5.75 percent. Core PCE inflation was projected to rise modestly to a little under 2.0 percent.

(42.) Minutes of the Federal Open Market Committee, January 30-31, 2001. Available at http://www.federalreserve.gov/fomc/minutes/20010131.htm

(43.) Minutes of the Federal Open Market Committee, November 6, 2001. A transcript is available at https://www.andrew.cmu.edu/course/88-301/monetarism/minutes-0111.pdf

(44.) A value of plus (minus) one for either variable could reflect the FOMC raising (lowering) rates by more (less) than they would have if they ignored uncertainty or insurance or a decision to keep the funds rate at its current level when a forecast-only call would have been to lower (raise) rates.

(45.) See note 40.

(46.) From the minutes: "Should the strength of the economic expansion and the firming of labor markets persist, policy tightening likely would be needed at some point to head off imbalances that over time would undermine the expansion in economic activity. Most saw little urgency to tighten policy at this meeting, however ... (o)n balance, in light of the uncertainties in the outlook and given that a variety of special factors would continue to contain inflation for a time, the Committee could await further developments bearing on the strength of inflationary pressures without incurring a significant risk."

(47.) The appendix describes our coding algorithm in more detail.

(48.) Indeed, for much of our sample period, the FOMC discussed risks about the future evolution of output or inflation in order to signal a possible bias in the direction of upcoming rate actions. For example, in the July 1997 meeting described earlier, the minutes indicate: "An asymmetric directive was consistent with their view that the risks clearly were in the direction of excessive demand pressures." Since the FOMC delayed tightening at this meeting, this "risk" reference communicated that the risks to price stability presented by the baseline outlook would likely eventually call for rate increases. But it does not appear to be a reference that variance or skewness in the distribution of possible inflation outcomes should dictate some non-standard policy response.

(49.) There is a large literature that examines nonlinearities in policy reaction functions (see Gnabo and Moccero [2015], Mumtaz and Surico [2015], and Tenreyro and Thwaites [2015] for reviews of this literature and recent estimates), but surprisingly little work that speaks directly to risk management. We discuss the related literature below.

(50.) There is no presumption that (equation 13) reflects optimal policy and so assuming a constant natural rate is not inconsistent with our theoretical analysis. We explored using forecasted growth in potential output derived from board staff forecasts to proxy for the natural rate and found this did not affect our results.

(51.) We make no attempt to address the possibility of hitting the ZLB in our estimation. See Chevapatrakul, Kim, and Mizen (2009) and Kiesel and Wolters (2014) for papers that do this.

(52.) The online appendix describes our data in more detail.

(53.) We assume meetings are equally spaced even though this is not true in practice. We account for this discrepancy when we calculate standard errors by allowing for heteroskedasticity.

(54.) Gnabo and Moccero (2015) also estimate quarterly reaction functions using board staff forecasts.

(55.) Using a VAR framework Bekaert, Hoerova, and Lo Duca (2013) find weak evidence that positive innovations to VXO lead to looser policy. Gnabo and Moccero (2015) find that policy responds more aggressively to economic conditions and is less inertial in periods of high uncertainty as measured by VXO.

(56.) Alcidi, Flamini, and Fracasso (2011), Castelnuovo (2003), and Gerlach-Kristen (2004) consider reaction functions including SPD.

(57.) The forecast distributions are for growth and inflation in the current and following year. We use D'Amico and Orphanides' (2014) procedure to translate these into distributions of four-quarter-ahead forecasts.

(58.) Gnabo and Moccero (2015) find statistically insignificant effects of DvInf on monetary policy.

(59.) As discussed by Baker, Bloom, and Davis (2015) there is no consensus on how good a proxy it is. Note that we do not study Baker, Bloom, and Davis's (2015) measure of uncertainty since it confounds uncertainty about monetary policy and the economic outlook.

(60.) Between 1990 and 1992, only 4 of the 18 changes in the funds rate target occurred at an FOMC meeting. In contrast, between 1993 and 2008, 54 of the 61 changes in the funds rate target occurred at FOMC meetings. Ignoring inter-meeting moves causes specification problems if interest rate smoothing is a function not only of time but also of the number of policy moves. Indeed, when we estimated our meeting frequency models starting in 1987, our point estimates were (statistically) similar, but even with 5 funds rate lags substantial serial correlation remained in the residuals.

(61.) In 1992 the SPF narrowed the bins it used to summarize the forecast probability distributions of individual forecasters. See D'Amico and Orphanides (2014) and Andrade, Ghysels, and Idier (2013) for attempts to address this change in bin sizes.

(62.) The magnitude and significance of this coefficient is largely driven by the sharp decline in the funds rate in 2008 that occurred alongside substantial downward revisions to the output gap forecast.

(63.) The JLN variable can be expressed as a linear combination of the three uncertainty measures constructed with the underlying activity, inflation, and financial indicators separately. We used Jurado, Ludvigson, and Ng's (2015) replication software to separate out these components, and found that the estimated effects of JLN are driven primarily by the financial indicators.

Comments and Discussion

COMMENT BY

ALAN GREENSPAN In this paper, Charles Evans, Jonas Fisher, Francois Gourio, and Spencer Krane have produced an impressive formal evaluation of the procedures the Chicago Fed employs as it approaches monetary policy normalization. They have rightly chosen a risk management paradigm that, in my judgment and given our state of knowledge, is the appropriate strategy for policy development.

Effective policy rests primarily on the policymakers' ability to forecast economic outcomes. Obviously, if economic forecasts and the related monetary policy could be successfully driven wholly by a formal model, that is, a set of rules, neither discretion nor risk management would be necessary. Regrettably, that is not the case.

My major concern with current monetary policy deliberations is their adherence to models that failed to capture either the timing or the depth of the breakdown of 2008, arguably the most devastating global financial crisis ever. To be sure, the Great Depression of the 1930s was the most devastating economic collapse, but financial markets continued to function throughout that crisis. In the wake of the Lehman Brothers bankruptcy, however, many critical overnight markets ceased to function, precipitating an unprecedented global economic breakdown. Before the more recent crisis, the last time overnight trading had failed to function occurred for one day in 1907, when call money rates were bid at 125 percent with no offers (Homer and Sylla 1991, p. 340).

None of the major models, including that of the Federal Reserve, accurately anticipated the 2008 crisis. What claim do we central bankers have for policy credibility if we could not anticipate and address the most wrenching financial crisis of our lifetimes?

LEVERAGE MATTERS In virtually all previous such crises, the presumptive cause was the collapse of a financial bubble triggering a bout of contagious serial debt default. Leading up to the crisis of 2008, nonfinancial balance sheets were in reasonably good shape, only to be upended by corrosive finance. Nonfinancial corporate equity, for example, has averaged close to 50 percent of assets, compared with finance, which has averaged a small fraction of that. We need to amend our standard forecasting models to incorporate those rare occasions when highly leveraged finance, otherwise appearing benign, morphs single defaults into a rapid and uncontrollable serial debt contagion that disables nonfinancial systems in its wake. While the default of Lehman Brothers was anticipated as a distinct possibility, central bankers, supported by the most advanced macro models, failed to foresee the carnage that was about to arise in the hours following the default announcement.

All such toxic events have almost always been preceded by a speculative bubble. And all bubbles, by definition, deflate. But not all deflating bubbles lead to serial default contagion. The collapse of the bubble that preceded the historic one-day stock market crash of October 19, 1987, barely nudged the economy. And the bubble that burst in 2000 left in its wake the shallowest recession since the end of World War II. However, monetary policymakers failed to fully grasp the implications of either the highly leveraged subprime crisis of 2008 or the 1929 broker loan collapse.

As I note in my book The Map and the Territory 2.0, the severity of the destruction caused by a bursting bubble is determined not by the type of asset that turns toxic but by the degree of leverage employed by the holders of those toxic assets. The latter condition dictates to what extent contagion becomes destabilizing. In short, debt leverage matters.

On the eve of the dot-com stock market crash of 2000, highly leveraged institutions held a relatively small share of equities, and an especially small share of technology stocks, which were the toxic asset of that bubble. Most stock was held by households (who were considerably less leveraged at that time than they became as the decade progressed) and pension funds. Their losses, while severe, were readily absorbed without contagious bankruptcies because the amount of debt held to fund equity investment was small. Accordingly, few lenders went into default, and crisis was avoided. A similar scenario played out following the crash of 1987.

One can imagine how those events would have played out if the stocks that fell sharply in 2000 (or 1987) had been held by leveraged institutions in the same proportions that mortgages and mortgage-backed securities were held in as of 2008. The U.S. economy almost certainly would have experienced a far more destabilizing scenario than in fact occurred.

Alternatively, if mortgage-backed securities in 2008 had been held in unleveraged institutions--for example, defined-contribution pension funds (401(k)s) and mutual funds--as had been the case for stocks in 2000, those institutions would have suffered large losses, but bankruptcies triggered by debt defaults would have been far fewer.

It was the capital impairment on the balance sheets of financial institutions that provoked the crisis. Debt securities were the problem in 2008, but the same effect would have been experienced by the financial system had the dollar amount of losses incurred by highly leveraged financial institutions, in the wake of the collapsing housing bubble, been in equity investments rather than mortgage-backed securities.

We need to explicitly integrate bubbles, a combination of rational and nonrational intuitive human responses, and other aspects of behavioral economics into our monetary policy models. In the online appendix to this comment, I further probe the measurement of bubbles and their consequences. (1) But more broadly, our policy models would be significantly reinforced by incorporating the behavioral long-term stabilities that are so evident in our data. They define the long-term equilibria to which economic activity is drawn.

THE LONG RUN Over the long run, inbred propensities of human nature are highly predictable. For example, time preference--the extent to which we discount claims to future values--is clearly a deeply embedded, invariant human propensity that has exhibited no significant trend over the millennia of recorded economic history. Interest rates (a manifestation of time preference) that merchants charged in ancient Rome, and even as far back as fifth-century B.C. Greece, exhibited levels not significantly different from rates that we've experienced in recent decades. Since its founding in 1694, the Bank of England's daily discount rate has been trendless--holding at an unwavering 5 percent for more than a century (1719 to 1822) and, with the exception of the inflation-ridden 1970s and 1980s, it has remained at 10 percent or less since 1822.

Similarly, stock price rates of return that arbitrage with interest rates are also trendless, as are rates of return on business equity and commercial banking (figure 1). The private savings rate (households plus businesses), importantly determined by time preference, has been trendless since the latter part of the 19th century (figure 2). Even though the propensity to save is arguably inbred, prior to the 19th century most people lived hand-to-mouth and were incapable of abstaining from consumption.

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

Savings in the form of newly produced capital goods that embody contemporaneous cutting-edge technologies are the primary sources of productivity growth. The real net business capital stock, (2) adjusted to capture the increased quality of labor hours, (3) closely matches the upward path of output per hour (figure 3).

Between 1870 and 1970, the United States' annual rate of increase in non-farm output per hour (our best proxy for productivity) averaged 2.2 percent. (4) Given that the accumulation of knowledge is largely irreversible, we would expect a persistently rising level of productivity. (5) And, indeed, over any 15-year period since 1889, average yearly output-per-hour growth has never exceeded 3.2 percent or fallen below 1.1 percent (figure 4).

[FIGURE 4 OMITTED]
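The 15-year bounds cited above are simple averages of annual growth rates; a minimal sketch of the calculation is shown below, using a synthetic productivity index rather than the Kendrick/BLS series underlying the figures in the text.

```python
# Sketch: 15-year average annual growth of output per hour from an annual index.
# The index here is synthetic; the figures in the text use nonfarm
# output-per-hour data back to 1889.
import numpy as np
import pandas as pd

years = np.arange(1889, 2015)
rng = np.random.default_rng(0)
index = pd.Series(100 * np.cumprod(1 + rng.normal(0.022, 0.01, len(years))), index=years)

log_index = np.log(index)
growth_15yr = 100 * (log_index - log_index.shift(15)) / 15  # average annual percent growth over 15 years
print(growth_15yr.dropna().agg(["min", "max"]))
```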

With the exception of the immediate post-World War II years, (6) output-per-hour growth in most advanced economies appears to have been subject to the 3 percent growth ceiling (Maddison 2001). But why couldn't the current level of technology and productivity have been achieved in, say, 1965, rather than a half century later? I presume we human beings are not smart enough to have produced such a pace of innovation.

The relatively stable rate of growth of U.S. productivity from 1870 to 1970 doubtless reflected a combination of the long-term unchanging inbred rate of time preference (and hence savings rates) and the rate of capital stock accumulation, coupled with the human race's inbred propensity toward optimism and competitiveness (Kahneman 2011, pp. 256-59).

Productivity growth, I presume, is capped only by the upside limit of human ability to create and apply knowledge over the long run. Certainly there is nothing to demonstrate a major difference during past millennia in the degree of intelligence of, for example, Euclid, Newton, and Einstein, the icons of outer-edge human intelligence of their respective eras. (7) Although technology builds on itself, the rate of knowledge accumulation, of necessity, is limited.

THE SHORT RUN Output per hour (for business), coupled with a generally reliable increase in either the working-age population (known approximately two decades in advance) or the civilian labor force, creates a reasonable proxy for long-term growth in gross domestic product. But short-term cyclical changes require that we add equations that, at a minimum, capture both euphoria and its obverse, fear, to our dynamic stochastic general equilibrium models.

"Economic uncertainty," a widespread explanation given for much short-term negative economic change, is more meaningfully understood in terms of relative degrees of fear. An investor increasingly uncertain of how the future will evolve becomes fearful of a significant loss of his net worth. "Uncertain" does not portray the extent of angst people experience in such circumstances. This is readily visible in figure 5, which records the changing willingness of business managers to invest their corporations' liquid cash flow in illiquid, and hence riskier, long-term assets. Arguably, we are observing the outer ranges that business choice has exhibited over the generations and, arguably, the outer range of human euphoria and fear reflected in the marketplace (at least since 1952). But data trace trendless cycles of fear and euphoria. The upside limit, I presume, reflects an objective reality that, for example, balks at price-earnings ratios of 200. The downside limit is the extraordinary resilience of people who persevere through unimaginable stress.

We can model fixed capital expenditures as a share of cash flow by the rate at which investors discount the prospective income accruing from current capital investments in future years. One useful measure of relative fear and euphoria (uncertainty) is the yield spread between the U.S. Treasury 5-year note and the 30-year bond. (8) This reflects the term structure of investment expectations beyond the normal business cycle length. The spread anticipates investment choices with a 6-month lead (figure 6). This is presumably the timing difference between decision and implementation of capital investment. In addition to the shock of the Lehman Brothers' default, the causes of such "uncertainty" are arguably global warming, the emergence of ISIS, domestic political gridlock, budget deficits, debt, taxes, and massive financial re-regulation that has weakened financial intermediation. Combined, they are engendering heavy discounting of income from long-lived investments.

[FIGURE 5 OMITTED]

UNSOLVED The one policy area I have found most challenging over the years is anticipating financial crises. Speculative stock price increases are necessarily being bolstered by an excess of bids over offers just before stock prices peak. If it were otherwise, the peak level of prices could not have been reached. But when the great preponderance of investors or speculators have shifted from "bears" to "bulls" and are presumably fully committed to a bullish future, and hence illiquid, the first market participants who wish to sell find that there are too few uncommitted cash-rich investors left to support the price level. Prices collapse into a seeming vacuum.

The timing of such a sequence is devilishly elusive. Euphoria and herding are formidable bull market human propensities. Bubbles, as history amply demonstrates, must run their course. But accurately tracing that course may, in the end, be indeterminate, since if market participants can anticipate a certain stock price peak, waves of selling will prevent that peak from being reached. Indeed, for years leading up to the 2008 crisis, it was widely expected that the precipitating event of the "next" crisis would be a sharp fall in the U.S. dollar in response to the dramatic increase in the U.S. current account deficit that began in 2002. The dollar accordingly came under heavy selling pressure. The rise in the euro-dollar exchange rate from around 1.10 in the spring of 2003 to 1.30 at the end of 2004 appears to have gradually arbitraged away the presumed dollar trigger of the "next" crisis. The U.S. current account deficit did not play a prominent direct role in the timing of the 2008 crisis.

[FIGURE 6 OMITTED]

Bubbles have always been a chronic concern of central bankers. As I noted, the bubbles of 1987 and 2000 deflated without serious economic consequence. The crises of 2008 and 1929 induced financial chaos. Those episodes, in retrospect, had the unforecastable characteristics of a snow avalanche which, in its early stages, appears benign, until it unexpectedly builds an unstoppable momentum. In short, just below the surface of an economic recession as it gets started is such an avalanche awaiting a trigger. Fortunately, the vast majority of recessions bottom out well above that triggering point, which accordingly goes unobserved as an economy turns and eventually recovers. But in very rare exceptions--2008 being the classic case--cumulative serial default is triggered among heavily leveraged financial balance sheets and the bottom falls out of those markets, precipitating a collapse in nonfinancial activity as well.

Traditional economics has always been acutely aware of bubbles that, in large part, are driven by what Keynes labeled "animal spirits," even though these bubbles are rarely, if ever, captured in dynamic stochastic general equilibrium models. (9) To be sure, we have very few observations of major bubbles in the United States--four, in all, during the past eight decades. Given what data we have, the animal spirit component of bubbles does appear to be subject to formal analysis. The behavior of stock prices is an obvious representative example.

REFERENCES FOR THE GREENSPAN COMMENT

Flynn, James R. 2012. Are We Getting Smarter? Rising IQ in the Twenty-First Century. Cambridge University Press.

Greenspan, Alan. 2014. The Map and the Territory 2.0: Risk, Human Nature, and the Future of Forecasting. New York: Penguin Books.

Historical Statistics of the United States, Millennial Edition. 2006. Cambridge University Press.

Homer, Sidney, and Richard Sylla. 1991. A History of Interest Rates, 3rd ed. Rutgers University Press.

Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Maddison, Angus. 2001. The World Economy: A Millennial Perspective. Issy-les-Moulineaux, France: Development Centre of the Organisation for Economic Co-operation and Development.

(1.) Online appendixes to all papers in this volume may be found at the Brookings Papers web page, www.brookings.edu/bpea, under "Past Editions."

(2.) Data published by the Bureau of Economic Analysis.

(3.) Data published by the Bureau of Labor Statistics.

(4.) My estimate for 1870 employs Angus Maddison's 1.9 percent annual rate of change between 1870 and 1913 to obtain a number consistent with the series published by John W. Kendrick and the Bureau of Labor Statistics covering the period 1889 to 2014 (see Maddison 2001).

(5.) I suspect that this surprising degree of long-term stability reflects, in part, a large and slowly growing capital stock with an average age of nearly 20 years. Obviously, the greater the average age, the slower the rate of turnover and the more stable the flow of imputed "services" from that stock relative to other factors of growth. The "services" emanate daily from our capital infrastructure--our buildings, productive equipment, highways, and water systems, to identify just a few. And that relatively stable average age itself reflects the apparent stability of human time preference, a key animal spirit.

(6.) Virtually all war-ravaged plants and equipment in Europe were replaced with the newest technologies between 1950 and 1973.

(7.) For an interesting review of this controversial issue, see Flynn (2012).

(8.) Measures of credit risk are also statistically significant.

(9.) To gain some statistical insight into bubbles, in the online appendix to this comment I trace out trends in daily stock price changes since 1951 and discuss the relative roles of rational judgment versus animal spirits.

Regression Output for Figure 6

Dependent variable: log ratio of fixed investment to cash flow (a)

U.S. Treasury bond yield spread, 30yr-5yr (b)   -0.109  (t = -7.323) (c)
No. of observations                              161
Adjusted R^2                                     0.590
Durbin-Watson statistic                          0.348

Source: Author's calculations, based on data from U.S. Federal Reserve Board, collected from Haver Analytics.

(a.) For nonfinancial corporate businesses.

(b.) Variable is lagged two quarters, and the units are percentage points.

(c.) t-statistic calculated using Newey-West HAC standard errors and covariance.
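
A sketch of how the regression above could be reproduced, assuming a quarterly DataFrame with hypothetical column names for the investment-to-cash-flow ratio and the 30-year minus 5-year Treasury spread; the two-quarter lag and HAC t-statistic follow table notes (b) and (c), while the Newey-West lag length is an assumption.

```python
# Sketch of the figure 6 regression: log ratio of fixed investment to cash flow
# (nonfinancial corporate business) on the Treasury yield spread lagged two
# quarters, with Newey-West (HAC) standard errors. Column names are placeholders
# for the Haver Analytics series described in the source note.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def run_figure6_regression(df: pd.DataFrame):
    data = pd.DataFrame({
        "log_ratio": np.log(df["inv_cashflow_ratio"]),
        "spread_lag2": df["spread_30y_5y"].shift(2),  # two-quarter lag, percentage points
    }).dropna()
    X = sm.add_constant(data["spread_lag2"])
    # HAC (Newey-West) covariance; a lag length of 4 quarters is assumed here.
    return sm.OLS(data["log_ratio"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
```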


COMMENT BY

JOHANNES WIELAND Charles Evans, Jonas Fisher, Francois Gourio, and Spencer Krane argue in this paper that when we are uncertain over whether the zero lower bound (ZLB) will bind in the future, the prudent policy action is to be cautious about raising interest rates. This can be read as a warning that tightening now may cause a repeat of the "mistake of 1937." Then as now, the recovery from a deep recession was under way, and policymakers debated over the appropriate actions given uncertainty about the evolution of future inflation and unemployment. In fact, prominent economists have argued that premature tightening in 1937 contributed to the 1937-38 recession (Friedman and Schwartz 1963; Eggertsson and Pugsley 2006; Romer 2009).

This paper makes its case in two steps. First, the authors conduct a theoretical analysis of optimal policy under uncertainty. They show that optimal policy is looser when there is uncertainty over whether the ZLB constraint on nominal interest rates will bind. Second, they provide narrative and statistical evidence that the Federal Reserve has conducted risk management in the past, so that delaying interest rate hikes would not constitute a radical policy change. I will follow this structure and discuss each part in turn.

OPTIMAL POLICY The authors first consider the standard forward-looking new Keynesian model,

\pi_t = \beta E_t \pi_{t+1} + \kappa x_t,

x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - \rho_t\right),

where $\pi_t$ is inflation, $x_t$ is the output gap, and $\rho_t$ is the exogenous natural rate of interest. The central bank conducts optimal monetary policy under discretion, so each period it minimizes the loss

E_t \sum_{j=0}^{\infty} \beta^j \, \frac{1}{2}\left(\pi_{t+j}^2 + \lambda x_{t+j}^2\right),

where $\lambda$ is the weight on output stabilization.

For $t \geq 2$ the natural rate of interest is positive, so the central bank can perfectly stabilize the economy, $x_t = \pi_t = 0$. At time $t = 0$ there is uncertainty over whether the ZLB will bind at time $t = 1$. For high realizations of the natural rate of interest (drawn from $\rho_1 \sim f(\rho)$), the ZLB will not bind, whereas for low realizations it will. Thus,

$\rho_1 \geq 0 \;\Rightarrow\;$ ZLB not binding: $i_1 \geq 0$, $x_1 = 0$, $\pi_1 = 0$, and

$\rho_1 < 0 \;\Rightarrow\;$ ZLB binding: $i_1 = 0$, $x_1 < 0$, $\pi_1 < 0$.

Since the central bank can perfectly stabilize the economy only in the first case, on average agents in this economy will expect a recession at $t = 1$: $E_0 x_1 < 0$, $E_0 \pi_1 < 0$. Through the expectations channel, this reduces the current output gap and inflation, which the central bank wants to offset by lowering nominal interest rates today.
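To make the expectations channel concrete, the following sketch (my own illustration, not the authors' code) simulates the time-1 outcomes for a normal distribution of $\rho_1$ and backs out the time-0 rate that closes the output gap. The distribution parameters are illustrative assumptions, while $\kappa$ and $\sigma$ follow the paper's table 1; rather than solving the full discretionary problem, the sketch only shows that an expected recession at $t = 1$ pushes the time-0 rate below the natural rate.

    import numpy as np

    # Sketch of the expectations channel. Assumption: for t >= 2 the economy is
    # fully stabilized, so at t = 1 the ZLB outcomes are x_1 = rho_1 / sigma and
    # pi_1 = kappa * rho_1 / sigma when rho_1 < 0, and zero otherwise.
    kappa, sigma = 0.025, 2.0                # from the paper's table 1
    rho0, mu, sd = 1.75, 0.5, 1.3            # illustrative natural-rate assumptions

    rng = np.random.default_rng(0)
    rho1 = rng.normal(mu, sd, size=1_000_000)

    zlb = rho1 < 0                                   # states where the ZLB binds at t = 1
    x1 = np.where(zlb, rho1 / sigma, 0.0)            # output gap at t = 1
    pi1 = np.where(zlb, kappa * rho1 / sigma, 0.0)   # inflation at t = 1

    Ex1, Epi1 = x1.mean(), pi1.mean()                # both negative: an expected recession

    # Rate that closes the time-0 output gap, from the IS curve
    # x_0 = E_0 x_1 - (i_0 - E_0 pi_1 - rho_0) / sigma = 0:
    i0 = rho0 + Epi1 + sigma * Ex1
    print(f"E0[x1] = {Ex1:.3f}, E0[pi1] = {Epi1:.4f}, i0 = {i0:.3f} < rho0 = {rho0}")

Because $E_0[x_1]$ and $E_0[\pi_1]$ are negative whenever the ZLB can bind, the implied $i_0$ sits below $\rho_0$, which is the looser-policy prescription described above.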

While risk over the ZLB constraint affects policy in this scenario, I would not label this outcome "risk management." The central bank keeps interest rates low today, because conditions today are bad (through the expectations channel) and it only cares about current outcomes. When conditions improve, this central bank will immediately raise nominal interest rates. In this scenario, there is no notion of delayed liftoff under which interest rates would be kept low despite improvements in current fundamentals. Thus, in my view, this channel does not capture the idea of risk management.

I believe the "buffer-stock" channel better captures a risk-management motive. This channel operates when there are backward-looking elements in the model, such as in the baseline old Keynesian model considered by the authors,

\pi_t = \xi \pi_{t-1} + \kappa x_t,

x_t = \delta x_{t-1} - \frac{1}{\sigma}\left(i_t - \pi_{t-1} - \rho_t\right).

Again, a discretionary policymaker will minimize the expected loss:

E_t \sum_{j=0}^{\infty} \beta^j \, \frac{1}{2}\left(\pi_{t+j}^2 + \lambda x_{t+j}^2\right).

In contrast to the forward-looking model, the central bank's current policy will affect future inflation and output through the backward-looking structure. This gives the central bank the ability to use current policy to affect the probability and severity of future ZLB episodes even without commitment technology.

As before, the central bank faces uncertainty over whether the ZLB binds at $t = 1$, with $\rho_1 \sim f(\rho)$. Depending on the realization of $\rho_1$, the ZLB will either not be binding or will be binding at $t = 1$:

$\rho_1 \geq \rho^* \;\Rightarrow\;$ ZLB not binding: $i_1 > 0$, with $x_1$ and $\pi_1$ functions of the inherited inflation $\pi_0$, and

$\rho_1 < \rho^* \;\Rightarrow\;$ ZLB binding: $i_1 = 0$, with $x_1$ and $\pi_1$ below target and increasing in the inherited $x_0$ and $\pi_0$.

Stimulating the economy today has clear benefits in case the ZLB does bind tomorrow. A higher output gap $x_0$ and inflation rate $\pi_0$ today directly raise the future output gap $x_1$ and inflation $\pi_1$, and they provide further stimulus by lowering the real interest rate at the ZLB, $R_1|_{ZLB} = i_1 - \pi_0 = -\pi_0$. Since the economy suffers from too low output and inflation at the ZLB, such a policy would improve outcomes in that state.

However, the benefits in the ZLB states have to be balanced with the costs that occur when the ZLB does not bind. In those states, the central bank raises the real interest rate when inflation is higher so that output contracts. Thus, the more the central bank stimulates the economy today, the more the output gap and inflation deviate from target at t = 1 if the ZLB does not bind, which reduces the payoffs in these states.

Optimal policy trades off the benefits of looser policy in the ZLB states with the costs in non-ZLB states. These are clear insurance motives, with payoffs in one state being traded off with those in another state in line with the risk-management rhetoric. Further, these motives are at play even if current economic conditions call for higher nominal interest rates. The buffer-stock channel thus also provides a rationalization for delayed liftoff. (1)
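As a concrete illustration of this trade-off, the sketch below (again my own, not the authors' code; the inherited states and the ZLB-state natural rate are illustrative choices, while the remaining parameters are from the paper's table 1) evaluates the time-1 outcomes with and without an inherited inflation buffer.

    # Sketch of the buffer-stock trade-off in the backward-looking model:
    #   pi_t = xi * pi_{t-1} + kappa * x_t
    #   x_t  = delta * x_{t-1} - (i_t - pi_{t-1} - rho_t) / sigma
    xi, kappa, delta, sigma = 0.95, 0.025, 0.75, 2.0   # from the paper's table 1
    rho_zlb = -0.5                                      # assumed natural rate when the ZLB binds

    def t1_outcomes(x0, pi0, i1, rho1):
        """Time-1 output gap and inflation given the inherited state and policy."""
        x1 = delta * x0 - (i1 - pi0 - rho1) / sigma
        pi1 = xi * pi0 + kappa * x1
        return x1, pi1

    for pi0 in (0.0, 1.0):   # no buffer vs. a 1-percentage-point inflation buffer
        # Benefit: if the ZLB binds (i_1 = 0), the buffer lowers the real rate to -pi0,
        # raising both x_1 and pi_1.
        x1, pi1 = t1_outcomes(x0=0.0, pi0=pi0, i1=0.0, rho1=rho_zlb)
        # Cost: in non-ZLB states, even if the bank tracks the natural rate, inflation
        # starts the period xi * pi0 above target and must be worked off later.
        overshoot = xi * pi0
        print(f"pi0 = {pi0:.1f}: ZLB state x1 = {x1:+.3f}, pi1 = {pi1:+.3f}; "
              f"non-ZLB inflation overshoot = {overshoot:.2f}")

With the buffer, the ZLB state features a positive rather than a negative output gap, but inflation runs above target in the states where the constraint turns out not to bind; weighing these two sides is exactly the calculation the optimal discretionary policy performs.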

The paper calibrates these simple models to assess the quantitative relevance of each channel. I will focus my discussion on the backward-looking channel, which I view as the more compelling theory of risk management. The two key aspects that determine the extent of risk management used are (i) how uncertain the natural rate is and (ii) how costly it is for the central bank to use the buffer-stock channel.

The one-quarter-ahead standard deviation of the annualized natural rate is set to 1.3 percent, with an unconditional standard deviation of 2.5 percent. The optimal real interest rate inherits the same volatility. The actual ex ante real federal funds rate (2) has a one-quarter-ahead standard deviation of 0.5 percent and an unconditional standard deviation of 1.61 percent from the first quarter of 1986 until the third quarter of 2008. If the estimates for the natural rate are correct, then the optimal policy must be substantially more volatile than it is in practice (this is the position taken, for example, in Barsky, Justiniano, and Melosi 2014). However, the Great Moderation is typically not associated with large output gaps, suggesting that monetary policy was not far from the natural rate. These conflicting accounts should be sorted out in another paper, but the problem leaves me concerned that the volatility of the natural rate in the calibration may be too high.
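For reference, these two numbers line up under the natural-rate parameters in the paper's table 1 (innovation standard deviation 1.32, serial correlation 0.85), assuming the process is AR(1):

\sigma_{\text{1-quarter}} = \sigma_{\varepsilon} = 1.32 \approx 1.3, \qquad \sigma_{\text{unconditional}} = \frac{\sigma_{\varepsilon}}{\sqrt{1 - \rho_{\varepsilon}^{2}}} = \frac{1.32}{\sqrt{1 - 0.85^{2}}} \approx 2.5.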

The buffer-stock channel benefits from a high persistence of inflation, which is calibrated at 0.95 in the paper. Thus, any buffer of inflation built up today will still be largely present tomorrow to help with the ZLB constraint. But empirical estimates of the backward-looking elements in Phillips curves range from 0 to 0.5 when forward-looking elements are also included (for example, as in Gali and Gertler 1999; Cogley and Sbordone 2008). The decline in persistence would make risk management more costly, since the central bank would have to create more (costly) inflation today to raise inflation tomorrow by the same amount. The forward-looking elements might help raise inflation in the ZLB states, but only if the central bank creates more (costly) inflation in future non-ZLB states. Thus, I conjecture that a more realistic specification of the Phillips curve would reduce the scope for risk management.

Another practical impediment to the buffer-stock channel is the impact lags of monetary policy. Neither narrative- nor VAR-identified monetary shocks affect inflation for several quarters (Romer and Romer 2004; Christiano, Eichenbaum, and Evans 2005; Coibion 2012). Again, I conjecture that this would make it more difficult to use the buffer-stock channel.

In short, the calibration exercise is a useful first step, but further work is needed to assess the quantitative importance of the buffer-stock channel. For example, additional real rigidities in a medium-scale model may be able to compensate for lower volatility in the natural rate and weaker inflation persistence. Further, a comparison of the buffer-stock channel with optimal policy under commitment, our existing justification for delayed liftoff (Eggertsson and Woodford 2003; Werning 2012), would be helpful to determine their relative importance.

RISK MANAGEMENT The second part of the paper focuses on whether risk-management considerations have affected Federal Reserve policies in the past. First, the authors search Federal Open Market Committee (FOMC) transcripts for incidents when uncertainty or insurance motives have affected policy. This in itself is a very ambitious and difficult task, since risk management motives have to be separated from first-moment shocks (such as fundamental shocks and news). For example, the following quote from minutes of the November 6, 2001, FOMC meeting, which the authors cite, suggests both first-moment news ("economic weakness had intensified") and risk-management considerations:
   ... members stressed the absence of evidence that the economy was
   beginning to stabilize and some commented that indications of
   economic weakness had in fact intensified. Moreover, it was likely
   in the view of these members that core inflation, which was already
   modest, would decelerate further. In these circumstances
   insufficient monetary policy stimulus would risk a more extended
   contraction of the economy and possibly even downward pressures on
   prices that could be difficult to counter with the current federal
   funds rate already quite low. Should the economy display
   unanticipated strength in the near term, the emerging need for a
   tightening action would be a highly welcome development that could
   be readily accommodated in a timely manner to forestall any
   potential pickup in inflation.


The narrative accounts reveal that uncertainty and insurance motives sometimes decrease and sometimes increase policy rates. (See the authors' figures 4 and 5). This suggests that uncertainty and insurance are not one-dimensional objects. Indeed, the transcripts show different forms of uncertainty, such as the effects of an exogenous shock and signal extraction problems. An interesting next step would be to analyze how policy responses differ for different types of uncertainty or states of the economy.

To test for the importance of risk management, the authors estimate an augmented interest rate rule,

R_t = A(L) R_{t-1} + \left(1 - A(1)\right)\left[R^* + \beta E_t(\pi_{t,k}) + \gamma E_t(x_{t,q}) + \mu s_t\right],

where $R_t$ is the federal funds rate, $A(L)$ is a lag polynomial, $E_t \pi_{t,k}$ is expected inflation, $E_t x_{t,q}$ is the expected output gap, and $s_t$ is an uncertainty measure. The measures used are the narratively identified insurance and uncertainty variables, FOMC sentence counts of uncertainty and insurance, FOMC inflation and output forecast revisions, financial uncertainty measures, and uncertainty and disagreement among professional forecasters.
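As a purely illustrative sketch of how such a rule can be fit, the code below estimates a one-lag version by nonlinear least squares on simulated stand-in data (my own construction, not the authors' data or code); the standard errors shown are conventional rather than heteroskedasticity-robust as in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    # One-lag version of the augmented rule:
    #   R_t = a R_{t-1} + (1 - a)(c + beta * pi_e_t + gamma * x_e_t + mu * s_t)
    rng = np.random.default_rng(0)
    T = 128
    pi_e = 2.5 + 0.5 * rng.standard_normal(T)   # stand-in for expected inflation
    x_e = -0.1 + 1.5 * rng.standard_normal(T)   # stand-in for the expected output gap
    s = rng.standard_normal(T)                  # stand-in for a risk proxy

    R = np.empty(T)
    R[0] = 5.0
    for t in range(1, T):                       # simulate a rule with a = .8 and mu = .4
        R[t] = (0.8 * R[t - 1]
                + 0.2 * (1.0 + 1.9 * pi_e[t] + 0.85 * x_e[t] + 0.4 * s[t])
                + 0.1 * rng.standard_normal())

    def rule(X, a, c, beta, gamma, mu):
        R_lag, pi_e, x_e, s = X
        return a * R_lag + (1 - a) * (c + beta * pi_e + gamma * x_e + mu * s)

    X = (R[:-1], pi_e[1:], x_e[1:], s[1:])
    params, cov = curve_fit(rule, X, R[1:], p0=[0.5, 0.0, 1.5, 0.5, 0.0])
    for name, est, se in zip(["a", "c", "beta", "gamma", "mu"], params, np.sqrt(np.diag(cov))):
        print(f"{name}: {est: .3f} ({se:.3f})")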

Some of these measures will capture uncertainty better than others. For instance, uncertainty among professional forecasters seems to be a good measure. By contrast, the level of forecast revisions does not look like a convincing proxy to me. It would imply that positive and negative revisions have opposite implications for uncertainty. Perhaps using absolute (or squared) revisions over a rolling window would better capture forecast uncertainty.
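A minimal sketch of that alternative construction, using a toy revision series in place of the actual Greenbook revisions (the window length is an arbitrary choice):

    import numpy as np
    import pandas as pd

    # Rolling mean of absolute (or squared) forecast revisions as an uncertainty
    # proxy; the series is a toy stand-in for meeting-by-meeting revisions.
    rng = np.random.default_rng(1)
    revisions = pd.Series(rng.normal(0.0, 0.4, size=128), name="frGap")

    abs_rev_proxy = revisions.abs().rolling(window=8, min_periods=4).mean()
    sq_rev_proxy = revisions.pow(2).rolling(window=8, min_periods=4).mean()
    print(abs_rev_proxy.describe())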

It is also important to emphasize that the test $\mu = 0$ is not a general test of whether uncertainty matters. For example, the expectations channel operates through the mean forecasts of inflation and output, which are included as controls, so it leaves nothing for the uncertainty proxy to explain.

Further, only the human-coded uncertainty measure takes into account that uncertainty can affect policy both ways. For all other measures, higher uncertainty is restricted to either raising the federal funds rate (if $\mu > 0$) or lowering the federal funds rate (if $\mu < 0$). But if these other measures can also affect policy both ways, then the coefficient $\mu$ in the estimation is biased toward zero.

To illustrate this, I have created a new uncertainty variable, |Human Uncertainty|, which is the absolute value of the narrative Human Uncertainty variable in the paper. It discards the information on whether uncertainty raises or lowers the policy rate and only captures whether uncertainty has affected policy. The estimates of $\mu$ for these two variables differ significantly, as shown in columns 2 and 3 of my table 1. Only the original human-coded uncertainty measure is significant and economically important. Discarding the additional information on the policy response reduces the coefficient, switches its sign, and raises the standard error. Thus, by testing for unidirectional effects, we may underestimate the extent of risk management in the policy rule.
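A small Monte Carlo (my own construction, not from the paper) makes the attenuation point concrete: when the true rule responds symmetrically to a signed shading variable like hUnc, regressing the rate on its absolute value alone drives the estimated coefficient toward zero and can even flip its sign.

    import numpy as np

    # Simulated policy rate responding symmetrically to a signed risk shading
    # variable s in {-1, 0, 1}; the unsigned proxy |s| discards the direction.
    rng = np.random.default_rng(2)
    n = 128
    s = rng.choice([-1.0, 0.0, 1.0], size=n, p=[0.3, 0.5, 0.2])
    rate = 0.40 * s + rng.normal(0.0, 0.25, size=n)   # true effect is 0.40 per unit of s

    def ols_slope(x, y):
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    print("signed proxy:  ", round(ols_slope(s, rate), 3))          # close to 0.40
    print("unsigned proxy:", round(ols_slope(np.abs(s), rate), 3))  # near zero, possibly negative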

In my view, this example also illustrates that we want more measures that take into account the multidimensional aspects of uncertainty. A complementary way to proceed is to use economic theory to understand why uncertainty may sometimes cut one way and sometimes the other. For instance, uncertainty about inflation may affect policy very differently when initial current inflation is high than when current inflation is low.

I will close by reemphasizing that this is an ambitious paper on the conduct of monetary policy under uncertainty and an important contribution to the current policy debate.

REFERENCES FOR THE WIELAND COMMENT

Barsky, Robert, Alejandro Justiniano, and Leonardo Melosi. 2014. "The Natural Rate of Interest and Its Usefulness for Monetary Policy." American Economic Review 104, no. 5: 37-43.

Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans. 2005. "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy." Journal of Political Economy 113, no. 1: 1-45.

Cogley, Timothy, and Argia M. Sbordone. 2008. "Trend Inflation, Indexation, and Inflation Persistence in the New Keynesian Phillips Curve." American Economic Review 98, no. 5: 2101-26.

Coibion, Olivier. 2012. "Are the Effects of Monetary Policy Shocks Big or Small?" American Economic Journal: Macroeconomics 4, no. 2: 1-32.

Coibion, Olivier, Yuriy Gorodnichenko, and Johannes Wieland. 2012. "The Optimal Inflation Rate in New Keynesian Models: Should Central Banks Raise Their Inflation Targets in Light of the Zero Lower Bound?" Review of Economic Studies 79, no. 4: 1371-1406.

Eggertsson, Gauti B., and Benjamin Pugsley. 2006. "The Mistake of 1937: A General Equilibrium Analysis." Monetary and Economic Studies 24, no. S-1: 151-90.

Eggertsson, Gauti B., and Michael Woodford. 2003. "The Zero Bound on Interest Rates and Optimal Monetary Policy." Brookings Papers on Economic Activity no. 1: 139-211.

Friedman, Milton, and Anna Jacobson Schwartz. 1963. A Monetary History of the United States, 1867-1960. Princeton University Press.

Gali, Jordi, and Mark Gertler. 1999. "Inflation Dynamics: A Structural Econometric Analysis." Journal of Monetary Economics 44, no. 2: 195-222.

Romer, Christina. 2009. "The Lessons of 1937." Economist, June 20.

Romer, Christina D., and David H. Romer. 2004. "A New Measure of Monetary Shocks: Derivation and Implications." American Economic Review 94, no. 4: 1055-84.

Werning, Ivan. 2012. "Managing a Liquidity Trap: Monetary and Fiscal Policy." Working Paper. Cambridge, Mass.: Massachusetts Institute of Technology. http://economics.mit.edu/files/7558
Table 1. Uncertainty Measure Using and Discarding Narrative
Information

                          (1)          (2)        (3)

E_t[\pi_{t+h}]          1.77 ***     1.95 ***   1.91 ***
                        (.11)        (.16)      (.17)
E_t[x_{t+k}]             .79 ***      .88 ***    .86 ***
                        (.06)        (.05)      (.06)
Human Uncertainty                     .40 ***
                                     (.16)
|Human Uncertainty|                             -0.15
                                                 (.19)
No. of observations    167         128        128
R^2                      0.99        0.98       0.98

Note: Statistical significance indicated at the *** 1 percent level.


GENERAL DISCUSSION Lars Svensson opened the conversation by complimenting the paper for its robust result that the nonlinearity of the effective lower bound justifies looser monetary policy to avoid the risk of the lower bound binding. He also felt it was time to stop using the term "zero lower bound," because the lower bound is not zero but negative, and it is not hard but soft. He would prefer to call it the effective lower bound. He noted that interest rates can become somewhat negative without huge amounts of cash being stored, because storage costs (including insurance and crime-prevention costs) make the actual return on cash somewhat negative. Svensson found it satisfying to see another demonstration of how imperfect the Taylor Rule is, with its reliance on a symmetric response to only two variables, inflation and output. Good policy sometimes requires asymmetric responses and always requires responses to more variables than inflation and output, indeed responses to all variables that substantially affect the forecast of inflation and employment.

Donald Kohn agreed with the authors that a central bank ought to be cautious about tightening too soon when the rate is at or near the zero lower bound. He said that to some extent, what had driven him and his colleagues on the Federal Reserve Board was nonlinearity due both to deflation and to approaching the zero lower bound. He himself was influenced by Japan and how it had become stuck at the zero lower bound. Kohn was surprised that discussant Alan Greenspan did not mention the "firebreak" Greenspan had publicly discussed as chairman of the Federal Reserve Board in June 2003. At that time, inflation was very low, and the Federal Reserve was not planning to tighten any time soon, in large part to create a firebreak between the U.S. economy and deflation. Within the next year the FOMC did engage in some tightening, though very slowly, because inflation and nominal rates were still quite low. Kohn thought that had been a good example of genuine risk management. He also noted that there were interactions between interest rates at zero and financial stability that needed to be addressed; waiting to exit, as the authors advocate, could require exiting steeply later. It seemed to him that this might threaten financial stability by taking many people by surprise.

Olivier Blanchard concurred with Kohn that the management of risk to financial stability was important and said he had expected the paper to focus on that. Even though the literature on whether low interest rates create financial risk is far from settled, some analysts believe they do, and if they are right it would argue for the opposite of the authors' conclusion. Apart from that, Blanchard added, if there were no zero lower bound constraint, there would be no asymmetry, which raises another question: What is the optimal rate of inflation?

Andrew Levin fundamentally agreed with the authors' conclusions but also wanted to urge an attitude of humility. The FOMC and professional forecasters generally have been over-optimistic for a number of years in a row, he noted, and that means the models everyone has been using do not satisfactorily explain what is happening in the economy. The Taylor Rule is one such model and in Levin's view not adequate for deciding when to lift off again from the zero bound. New, more robust benchmarks are needed, and although this paper is an exception, policymakers have spent little effort developing them.

Judging uncertainty and risk is difficult, Levin added, as illustrated by the FOMC statement made in September 2008 just as Lehman and AIG were collapsing. The statement then only acknowledged the Fed's general concern that inflation was carrying an upside risk and that growth was carrying a downside risk. In retrospect the downside risk to growth dramatically swamped any inflation risk, yet even at that moment it was hard for the FOMC to understand the magnitude of what was starting to happen.

Michael Kiley thought the paper's conclusions about policy were close to what the textbooks say one ought to do. If unemployment is above the target, textbook optimal policy says inflation should also be above the target. The paper's authors tie themselves to the model and a notion of optimal policy under discretion in the absence of the zero lower bound by defining the natural rate of interest as the shock in the IS curve and optimal policy as the nominal interest rate path that tracks this natural rate. The paper's results are clear, but by limiting the definition of optimal policy to the discretionary case in the absence of a zero lower bound, they are not as productive for future discussions as they could be. In particular, focusing on the first-order conditions would be more transparent. Emphasis on the discretionary case also raises additional issues: when the FOMC is unable to commit itself, all it can do over the long run to minimize risks associated with the zero lower bound is raise its inflation target. As Michael Woodford emphasized in his discussion of an earlier Brookings paper by John Williams, there may be sizable losses associated with a higher inflation target, and these social losses may rise rapidly with the target rate of inflation. A commitment policy would offer a route that avoids such costs by targeting the price level, which would be much more efficient.

Ben Friedman thought the paper made two very important points. First, it reminds us that not all undesired outcomes are equally costly. He agreed with Johannes Wieland that when the costs are asymmetrical, with greater cost to downside than to upside mistakes, the right choice is always to ease monetary policy because of increased uncertainty. This asymmetry in the costliness of possible outcomes leads to a downward bias in the optimal policy interest rate. Second, he added, the paper usefully shows that this kind of asymmetry has always been a part of actual monetary policy decisionmaking, in contrast to today's academic literature, which mostly assumes quadratic loss functions and normally distributed uncertainty and therefore leaves out asymmetry altogether.

Justin Wolfers agreed with Friedman that the paper's insights are valuable, because modern macroeconomic models insist on optimizing everything according to rational, forward-looking decisions that, given the asymmetries, few people in the actual economy follow. Some people believe the unemployment rate understates the amount of economic slack and others think it is roughly correct, but no one believes the rate overstates the slack. Likewise, there is a risk that hysteresis effects are real, so if the Fed lifts off from the zero rate too early a whole generation could find itself out of work. Wolfers believed an element missing from the paper was the biases in the Fed's decisionmaking, such as its habit of always continuing to raise rates once it first raises them. One of the risks of liftoff then might be this unwillingness to retrace a step, which can lead to bad decisions down the road.

David Romer noted that a premise of the paper is that the Fed does not feel constrained in its rate setting by anything other than the zero lower bound. But as Wolfers just pointed out, it is presuming a lot to think that once the Fed has started to raise rates, if the economy is hit with bad shocks it will have no trouble reducing rates again. That is, it seemed to Romer that once the Fed starts to tighten, the barrier to cutting rates will be higher than the barrier to raising them further. A second premise in the paper that gave him pause was the notion that if the Fed delays liftoff and then inflation rises faster than expected, it will have no trouble raising rates quickly. This struck him as a laudable sentiment, but in fact the Fed has not responded that way to such situations in a long time.

Martin Baily also agreed with the paper's conclusions but wanted to raise some possible counterarguments. For example, the paper assumes one can get inflation under control relatively easily, but if that were true why were so many of the Brookings Panel papers in the 1970s and 1980s devoted to solving the problem of inflation? A second counterargument echoed Blanchard's point, namely that problems might be created by keeping rates low for a very long time, especially in financial markets.

Chris Carroll noted that much of the paper's logic has echoes in the consumption literature. Even if people have quadratic utility, he said, if they see a chance of a binding constraint in the future it can induce a motive to accumulate a buffer stock of savings as a precaution. He believes this paper's argument is the extension of that insight into the monetary policy context. He then pointed out that the asymmetry that the authors highlight would be further strengthened if the model were extended to take into account the likelihood that periods when deflation looms tend to be periods when other kinds of uncertainty arise beyond the difficulty of cutting rates further. Many households are likely to feel uncertain about the future path of the economy. Such reactions would only make the asymmetric loss function much more asymmetric. The paper's argument, then, might be even stronger than the authors realize once such effects are factored in.

Johannes Wieland interjected that others might be interested in a paper he coauthored with Olivier Coibion and Yuriy Gorodnichenko that appeared in 2012 in the Review of Economic Studies. They found that the current inflation targets are optimal in these kinds of models, and the basic idea is that when you have a temporary problem, like a zero bound, then using a permanent policy change, like raising the inflation target, is really not a well-designed way to deal with it.

Julia Coronado wondered how the authors treated past errors in the model as well as in their own empirical view. The Fed has not hit its target for the better part of three decades, and optimization is always explained through a current projection starting "from now," with a promise of hitting the target at the end of an unspecified horizon. She asked whether they worried that such projections feed into expectations that in turn become a headwind, making it harder to meet the target. The prescription of simply raising the inflation target raises another credibility problem, too.

Wendy Edelberg asked what the authors thought about what their conclusion means for average monetary policy over the long run, that is, when GDP has gotten back to its potential level but some baseline uncertainty remains. Would the natural rate of interest be so low by then that the zero lower bound would hold periodically? And is it possible that monetary policy will therefore be looser, on average, over time?

Charles Evans responded, first, to the idea that very low interest rates can trigger financial instability. He said such a relationship is very difficult to assess. The authors' intent was to take up the challenge from the editors and make the case for being more cautious about raising policy rates given uncertainty about the natural rate of interest. He has no doubt that much work can be done to better investigate the potential linkages between such policies and financial instability, and would appreciate seeing others formulate detailed arguments that could be tested empirically. But in Evans's current view, there are tools other than interest rate policy that one could use to minimize financial instability risk, including macro- and micro-prudential measures such as higher capital standards.

As for a higher inflation objective, which Blanchard and others asked about, Evans acknowledged there is an argument for such a change to give the Fed more headroom against policy running into the zero lower bound. However, he believed ample space could be achieved by the FOMC's demonstrating commitment to its stated long-run strategy of achieving a symmetric 2 percent inflation objective, one in which inflation ought to be above 2 percent half of the time and below it half of the time. He thought that if the Fed is properly symmetric in its approach, and if it responds ahead of time to economic developments, then its current 2 percent inflation objective is manageable against the constraints posed by the zero lower bound. In fact, though, over the last 6 years the United States has averaged 1.5 percent inflation and many forecasts suggest it will remain below 2 percent for another 2 to 4 years. Accordingly, Evans admitted that the problem Coronado posed--that the public will wonder if the Fed has the wherewithal to keep inflation hovering symmetrically near 2 percent--worries him as well. He noted that such credibility risks would be diminished if the Fed demonstrated its commitment to a symmetric target by applying policies to bring inflation up sooner rather than later.

Referring to Levin's comment about the Taylor Rule having outlived its usefulness, he said he does believe such benchmarks are very important, although only as a gauge of how policy likely would be set during normal times. But the current economy is still very far from being back to business-as-usual. He believes that the paper appropriately shows more humility in its view that there is a great deal of uncertainty surrounding the current value of the natural rate of interest and that this uncertainty provides a reason for exercising more caution and delay before beginning to normalize monetary policy.

Regarding quantitative easing, Evans said he was uncomfortable with Kiley's thought that it might not be as effective going forward. He considered it successful and interpreted the success of QE3 as stemming from its being open-ended. The FOMC had told the public it would be committed to hitting its goals and would stay with QE3 until the labor market outlook showed substantial improvement. It was that commitment to goals that he thought was extremely important, and he worried that if the FOMC made a misstep by allowing a premature liftoff, people would wonder about its resolve to achieve its mandates and the Committee would have to work very hard to regain the public's trust.

Finally, Evans agreed with Romer that, historically, once the Fed begins to raise rates it seems to just continue raising them, and the barrier to reversing course appears to be a high one. To him, that is another good reason for delaying the liftoff. In reference to concerns that such delays risk inflationary consequences that the Fed would be slow to address, Evans noted that in fact there have been episodes when the Fed increased rates very strongly when it saw inflation rising too quickly. Two of them occurred in November 1994 and January 1995, when the FOMC increased the funds rate by 75 and 50 basis points, respectively, very big numbers at the time. He believed that it took those actions based on a lack of full confidence that inflationary pressures were under control. The Fed's ability to take such quick action depends, ultimately, on the outlook, and he is confident that the Fed could do that again if its forecasts so dictated.

(1.) Importantly, the policy considered here is a temporary increase in inflation. In these models, permanent increases in inflation beyond 2 percent are typically not optimal because the cost of higher inflation has to be paid every period, while the benefits only accrue when the ZLB binds (Coibion, Gorodnichenko, and Wieland 2012).

(2.) Calculated as the quarterly average of the federal funds rate minus expected inflation over the next quarter from the Survey of Professional Forecasters.
Table 1. Parameter Values (a)

Parameter                             Description               Value

[beta]                    Discount factor                        0.995
[kappa]                   Slope of Phillips curve                0.025
[sigma]                   Inverse elasticity of substitution     2
[[sigma].sub.[epsilon]]   Standard deviation natural rate        1.32
                            innovation
[[sigma].sub.u]           Standard deviation of cost-push        0.10
                            innovation
[[rho].sub.[epsilon]]     Serial correlation of natural rate     0.85
[[rho].sub.u]             Serial correlation of cost-push        0
[lambda]                  Weight on output stabilization         0.25
[pi] *                    Steady-state inflation (annualized)    2
[[rho].sup.n.sub.1]       Value of natural rate at time 1       -0.5
T                         Quarters to reach terminal natural    24
                            rate
[bar.[rho]]               Terminal natural rate (annualized)     1.75
[delta]                   Backward-looking IS curve              0.75
                            coefficient
[xi]                      Backward-looking Phillips curve        0.95
                            coefficient
[x.sub.0]                 Initial condition for the output      -1.5
                            gap
[[pi].sub.0]              Initial condition for inflation        1.3
[phi]                     Taylor rule coefficient on             1.5
                            inflation
[gamma]                   Taylor rule coefficient on output      0.5
                            gap

Source: Authors' calculations.

(a.) Values of standard deviations, inflation, the output gap, and
the natural rate are shown in percentage points.

Table 2. Forward-Looking Simulation

                                      Optimal
Statistic                            discretion   Naive   Taylor rule

Expected loss                           0.02       0.06       0.16
Mean time at liftoff                    4.11       1.00       1.00
Median time at liftoff                  3          1          1
Median [pi] at liftoff                  1.81       0.88       0.35
Median x at liftoff                     0.08      -1.44      -1.62
75th percentile maximum ([pi])          2.69       2.42       2.17
25th percentile minimum (x)            -0.72      -1.44      -2.63
Median standard deviation [DELTA]i      1.87       1.88       0.97

Source: Authors' calculations.

Table 3. Backward-Looking Simulation

                                      Optimal
Statistic                            discretion   Naive   Taylor rule

Expected loss                            0.27      0.28       0.60
Mean time at liftoff                    12.5      10.3        1.00
Median time at liftoff                  10         7          1
Median [pi] at liftoff                   2.00      1.81       1.21
Median x at liftoff                      0.32      0.00      -1.27
75th percentile max ([pi])               3.02      2.83       2.81
25th percentile min (x)                 -1.65     -1.70      -1.54
Median standard deviation [DELTA]i       2.96      3.10       0.54

Source: Authors' calculations.

Table 4. Summary Statistics for the FOMC-Based Risk Proxies

                                Standard
Variable       Obs.     Mean    deviation   Minimum   Maximum

Inflation       128      2.45     0.45        1.30      3.53
  forecast
Output gap      128     -0.14     1.58       -4.85      3.08
  forecast
hUnc            128     -0.13     0.48       -1         1
hIns            128     -0.06     0.33       -1         1
mUnc            128      2.92     4.80        0        30.8
mIns            128      0.83     2.45        0        16.7
frInf           128     -0.01     0.18       -0.63      0.63
frGap           128     -0.01     0.41       -2.00      0.77

                Correlation with
                  forecast of

                            Output
Variable       Inflation     gap

Inflation        1.00        0.21
  forecast
Output gap       0.21        1.00
  forecast
hUnc            -0.23       -0.33
hIns             0.18        0.15
mUnc            -0.06        0.14
mIns            -0.10        0.08
frInf            0.23        0.01
frGap            0.24        0.29

Source: Authors' calculations, based on Philadelphia Fed Greenbook
data sets and FOMC minutes; see text.

Table 5. Summary Statistics for Quarterly Risk Proxies

              Obser-            Standard
Variable      vations   Mean    deviation   Minimum   Maximum

Inflation       86       2.97     1.02        1.33      5.32
  forecast
Output gap      86      -0.45     1.69       -4.4       3.08
  forecast
vxo             86      21.0      8.48       10.6      62.1
JLN             86       0.96     0.05        0.89      1.22
vInf            68       0.74     0.06        0.6       0.90
vGDP            68       0.9      0.12        0.67      1.30
DvInf           86       0.6      0.18        0.24      1.10
DvGDP           86       0.73     0.27        0.3       1.64
SPD             86       2.11     0.65        1.37      5.60
sInf            68       0.05     0.08       -0.12      0.30
sGDP            68      -0.10     0.19       -0.54      0.47
DsInf           86       0.06     0.20       -0.5       0.51
DsGDP           86       0.3      0.27       -0.5       0.90

                 Correlation with
                   forecasts of

                              Output
Variable       Inflation       Gap

Inflation        1.00         -0.04
  forecast
Output gap      -0.04          1.00
  forecast
vxo             -0.02          0.04
JLN             -0.06         -0.04
vInf            -0.22         -0.08
vGDP            -0.22          0.22
DvInf            0.25         -0.35
DvGDP            0.37         -0.05
SPD             -0.34         -0.34
sInf             0.23         -0.12
sGDP            -0.10         -0.48
DsInf            0.01         -0.23
DsGDP           -0.22          0.21

Source: Authors' calculations, based on Philadelphia Fed Greenbook
data sets, Survey of Professional Forecasters, Haver Analytics, and
Jurado, Ludvigson, and Ng (2015); see text.

Table 6. Cross-Correlations of FOMC-Based Risk Proxies

Variable    hUnc      hIns      mUnc      mIns      frInf

hIns        -0.05
mUnc        -0.07     0.02
mIns        -0.13     -0.09     0.04
frInf       0.10      0.05      -0.07     0.06
frGap       -0.11     0.08      0.05      0.11      0.25

Source: Authors' calculations, based on Philadelphia Fed Greenbook
data sets and FOMC minutes; see text.

Table 7. Cross-Correlations of Quarterly Risk Proxies

Variable     VXO       JLN      vInf      vGDP      DvInf

JLN         0.54
vInf        0.04      0.23
vGDP        0.40      0.29      0.40
DvInf       0.15      0.18      0.29      0.03
DvGDP       0.54      0.38      0.16      0.37      0.33
SPD         0.73      0.67      0.15      0.32      0.26
sInf       -0.27     -0.11      0.29     -0.16      0.08
sGDP        0.21      0.22     -0.09     -0.04      0.25
DsInf      -0.28     -0.17      0.10     -0.24      0.13
DsGDP       0.07     -0.08      0.04      0.10     -0.22

Variable    DvGDP      SPD      sInf      sGDP      DsInf

JLN
vInf
vGDP
DvInf
DvGDP
SPD          0.35
sInf        -0.14     -0.18
sGDP         0.16      0.43     -0.15
DsInf       -0.14     -0.08      0.04      0.15
DsGDP        0.08      0.02     -0.08     -0.17     -0.17

Source: Authors' calculations, based on Survey of Professional
Forecasters; Jurado, Ludvigson, and Ng (2015); and Haver Analytics.
See text.

Table 8. FOMC-Based Risk Proxies in Monetary Policy Rules (a)

                             (1)         (2)         (3)         (4)

[[SIGMA].sup.5.sub.j=1]    .81 ***     .80 ***     .81 ***     .80 ***
  [[alpha].sub.j]         (.03)       (.03)       (.03)       (.03)
[beta]                    1.89 ***    1.95 ***    1.86 ***    1.90 ***
                          (.17)       (.16)       (.17)       (.17)
[gamma]                    .85 ***     .88 ***     .85 ***     .83 ***
                          (.05)       (.05)       (.05)       (.05)
hUnc                                   .40 **
                                      (.16)
hIns                                               .48
                                                  (.45)
mUnc                                                           .11 *
                                                              (.06)
mIns

frGap

frInf

LM (b)                     .31         .07         .59         .58
Obs.                       128         128         128         128

                             (5)         (6)         (7)

[[SIGMA].sup.5.sub.j=1]    .81 ***     .84 ***     .81 ***
  [[alpha].sub.j]         (.03)       (.03)       (.04)
[beta]                    1.89 ***    1.75 ***    1.89 ***
                          (.17)       (.22)       (.17)
[gamma]                    .85 ***     .80 ***     .85 ***
                          (.05)       (.06)       (.06)
hUnc

hIns

mUnc

mIns                      -.0006
                          (.05)
frGap                                  .47 **
                                      (.19)
frInf                                             -.009
                                                  (.14)
LM (b)                     .31         .63         .20
Obs.                       128         128         128

Source: Authors' calculations, based on Philadelphia Fed Greenbook
data sets and FOMC minutes; see text.

(a.) Standard errors are robust to heteroskedasticity. Statistical
significance at the *** 1, ** 5, and * 10 percent levels.

(b.) Entries in the "LM" row are p-values of Durbin's test for the null
hypothesis of no serial correlation in the residuals up to the fifth
order.

Table 9. Quarterly Variance Proxies in Monetary Policy Rules (a)

                           (1)         (2)         (3)         (4)

[[SIGMA].sup.2.sub.j=1]  .69 ***     .69 ***     .70 ***     .70 ***
  [[alpha].sub.j]       (.03)       (.03)       (.03)       (.04)
[beta]                  1.73 ***    1.73 ***    1.72 ***    2.21 ***
                        (.12)       (.11)       (.12)       (.17)
[gamma]                  .80 ***     .84 ***     .81 ***     .78 ***
                        (.06)       (.06)       (.06)       (.07)
VXO                                 -.43 ***
                                    (.11)
JLN                                             -.29 ***
                                                (.09)
vInf                                                         .21 **
                                                            (.10)
vGDP

DvInf

DvGDP

LM (b)                   .53         .56         .86         .71
Obs.                      86          86          86          68

                           (5)         (6)         (7)

[[SIGMA].sup.2.sub.j=1]  .69 ***     .69 ***     .69 ***
  [[alpha].sub.j]       (.04)       (.04)       (.03)
[beta]                  2.13 ***    1.73 ***    1.88 ***
                        (.16)       (.11)       (.13)
[gamma]                  .77 ***     .78 ***     .81 ***
                        (.06)       (.07)       (.06)
VXO

JLN

vInf

vGDP                     .03
                        (.12)
DvInf                               -.09
                                    (.13)
DvGDP                                           -.38 ***
                                                (.13)
LM (b)                   .59         .52         .86
Obs.                      68          86          86

Source: Authors' calculations, based on Philadelphia Fed Greenbook
data sets, Survey of Professional Forecasters, Haver Analytics, and
Jurado, Ludvigson, and Ng (2015); see text.

(a.) Standard errors are robust to heteroskedasticity. Statistical
significance at the *** 1, ** 5, and * 10 percent levels.

(b.) Entries in the "LM" row are p-values of Durbin's test for the
null hypothesis of no serial correlation in the residuals up to the
second order.

Table 10. Quarterly Skewness Proxies in Monetary Policy Rules (a)

                             (1)         (2)         (3)

[[SIGMA].sup.2.sub.j=1]    .69 ***     .68 ***     .71 ***
  [[alpha].sup.j]         (.03)       (.03)       (.04)
[beta]                    1.73 ***    1.55 ***    2.02 ***
                          (.12)       (.11)       (.16)
[gamma]                    .80 ***     .71 ***     .80 ***
                          (.06)       (.06)       (.07)
SPD                                   -.56 ***
                                      (.14)
sInf                                               .23 **
                                                  (.10)
sGDP

DsInf

DsGDP

LM (b)                     .53         .90         .34
Obs.                        86          86          68

                             (4)         (5)         (6)

[[SIGMA].sup.2.sub.j=1]    .70 ***     .72 ***     .69 ***
  [[alpha].sup.j]         (.04)       (.03)       (.03)
[beta]                    2.09 ***    1.74 ***    1.69 ***
                          (.16)       (.10)       (.12)
[gamma]                    .74 ***     .89 ***     .81 ***
                          (.08)       (.08)       (.07)
SPD

sInf

sGDP                      -.15
                          (.11)
DsInf                                  .40 ***
                                      (.13)
DsGDP                                             -.16
                                                  (.12)
LM (b)                     .67         .61         .62
Obs.                        68          86          86

(a.) Standard errors are robust to heteroskedasticity. Statistical
significance at the *** 1, ** 5, and * 10 percent levels.

(b.) Entries in the "LM" row are p-values of Durbin's test for the
null hypothesis of no serial correlation in the residuals up to the
second order.