Risk management for monetary policy near the zero lower bound.
Evans, Charles; Fisher, Jonas; Gourio, Francois; et al.
ABSTRACT With projections showing inflation heading back toward
target and the labor market continuing to improve, the Federal Reserve
has begun to contemplate an increase in the federal funds rate. There
is, however, substantial uncertainty around these projections. How
should this uncertainty affect monetary policy? In many standard models
uncertainty has no effect. In this paper, we demonstrate that the zero
lower bound (ZLB) on nominal interest rates implies that the central
bank should adopt a looser policy when there is uncertainty. In the
current context this result implies that a delayed liftoff is optimal.
We demonstrate this result theoretically through two canonical
macroeconomic models. Using numerical simulations of our models
calibrated to the current environment, we find that optimal policy calls
for a delay in liftoff of two to three quarters relative to a policy
that does not take into account uncertainty about policy being
constrained by the ZLB. We then use a narrative study of Federal Reserve
communications and estimated policy reaction functions to show that risk
management is a long-standing practice in the conduct of monetary
policy.
**********
To what extent should uncertainty affect monetary policy? This
classic question is relevant today as the Federal Reserve considers when
to start increasing the federal funds rate. In the March 2015
"Summary of Economic Projections," most Federal Open Market
Committee (FOMC) participants forecast that the unemployment rate would
return to its long-run neutral level by late 2015 and that inflation
would gradually rise, returning to its 2 percent target. This forecast
could go wrong in two ways. First, the FOMC may be overestimating the
underlying strength in the economy or the tendency of inflation to
return to target. Guarding against these risks would call for cautious
removal of accommodation. Second, the economy could be poised for
stronger growth and inflation than currently projected. This second risk
would call for more aggressive rate hikes. How should policy manage
these divergent risks?
If the FOMC misjudges the impediments to growth and inflation and
reduces monetary accommodation too soon, it could find itself in the
uncomfortable position of having to reverse course and being constrained
by the zero lower bound (ZLB) again. It is true that the FOMC has access
to unconventional policy tools at the ZLB, but these appear to be
imperfect substitutes for the traditional funds rate instrument. In
contrast, if the Fed keeps rates too low and inflation rises too
quickly, most likely inflation could be brought back into check with
modest increases in interest rates. Since the unconventional tools
available to counter the first scenario may be less effective than the
traditional tools available to counter the second scenario, the costs of
premature liftoff may exceed those of delay. It therefore seems prudent
to refrain from raising rates until the FOMC is highly certain that
growth is sustainable and inflation is returning to target. (1)
In this paper we establish theoretically that uncertainty about
monetary policy being constrained by the ZLB in the future implies an
optimally looser policy today, which in the current context means
delaying liftoff--the risk management framework just described. We
formally define risk management as the principle that policy should be
formulated taking into account the dispersion of shocks around their
means. Our main theoretical contribution is to provide a simple
demonstration, using standard models of monetary policy, that the ZLB
implies a new role for such risk management through two distinct
economic channels.
The first channel, which we call the expectations channel, arises
because the possibility of a binding ZLB tomorrow leads to lower
expected inflation and output today, and hence dictates some
counteracting policy easing today. The second channel, which we call the
buffer stock channel, arises because, if inflation or output is
intrinsically persistent, building up output or inflation today reduces
the likelihood and severity of hitting the ZLB tomorrow. Optimal policy
when either of these channels is operative should be looser whenever a
return to the ZLB remains a distinct possibility. In simulations
calibrated to the current environment, we find that optimal policy
prescribes two to three quarters of delay in liftoff relative to a
policy that does not take this uncertainty into account. However, under
the optimal policy the central bank must also be prepared to raise rates
quickly as the threat of being constrained by the ZLB recedes.
Would it be unusual for the Fed to take uncertainty into account in
setting its policy rate? The second part of this paper argues that risk
management has been a long-standing practice in U.S. monetary policy.
Therefore, advocating it in the current policy environment would be
consistent with a well-established approach of the Federal Reserve. Of
course, because the ZLB was only recently perceived as an important
constraint, the theoretical rationales for risk management were
different in the past. It is true that in a wide class of models that
abstract from the ZLB, optimal policy involves adjusting the interest
rate in response to the mean of the distribution of shocks, and
information on higher moments is irrelevant (the so-called
"certainty equivalence" principle). However, there is an
extensive literature covering departures from this result based on
nonlinear economic environments or uncertain policy parameters that
justify taking a risk management approach away from the ZLB.
We explore whether policymakers actually practiced risk management
prior to the ZLB period in two ways. First, we analyze Federal Reserve
communications over the period 1987-2008 and find numerous examples when
uncertainty or the desire to insure against important risks to the
economy were used to help explain the setting of policy. Confirmation of
this view is found in the statements of Alan Greenspan, who during his
tenure as Federal Reserve chair noted, "the conduct of monetary
policy in the United States has come to involve, at its core, crucial
elements of risk management." (2) Second, we estimate a
conventional forecast-based monetary policy reaction function augmented
with a variety of measures of risk based on financial market data,
Federal Reserve Board staff forecasts, private-sector forecasts, and
narrative analysis of the FOMC minutes. We find clear evidence that when
measured in this way, risk has had a statistically and economically
significant impact on the interest rate choices of the FOMC. For the
FOMC, risk management appears to be old hat.
If the monetary policy toolkit contained alternative instruments
that were perfect substitutes for changing the policy rate, then the ZLB
would not present any special economic risk and our analysis would be
moot. We do not think this is the case. Even though most central bankers
believe unconventional policies such as large-scale asset purchases
(LSAPs) or more explicit and longer-term forward guidance about policy
rates can provide considerable accommodation at the ZLB, few argue that
these tools are on an equal footing with traditional policy instruments.
(3)
One reason for this is that the effects on the economy of
unconventional policies are, naturally, much more uncertain than those
of traditional tools. There are divergent empirical estimates of their
effects, and there is uncertainty about the theoretical mechanism behind
those effects. Various studies of LSAPs, for example, provide a wide
range of estimates of their ability to put downward pressure on private
borrowing rates and influence the real economy. Furthermore, the effects
on interest rates of both LSAPs and forward guidance are complicated
functions of private-sector expectations, which make their economic
effects highly uncertain as well. (4)
Uncertainty about the transmission mechanism of LSAPs is reflected
in Arvind Krishnamurthy and Annette Vissing-Jorgensen's (2013)
discussion of the various hypotheses that have been proposed.
Unconventional tools also carry potential costs. The four most commonly
cited costs are these: (i) the large increases in reserves generated by
LSAPs risk unleashing inflation; (ii) a large balance sheet may make it
more difficult for the Fed to raise interest rates when the time comes;
(iii) the extended period of very low interest rates and Federal Reserve
intervention in the long-term Treasury and mortgage markets may induce
inefficient allocation of credit and financial fragility; and (iv) the
large balance sheet puts the Federal Reserve at risk of incurring
financial losses if rates rise too quickly, and such losses could
undermine its support and independence. (5) Costs reduce the incentive
to use any policy tool. Moreover, because the costs of unconventional
tools are very hard to quantify, the level of uncertainty associated
with them is naturally elevated as well.
A consequence of this uncertainty over the benefits and costs of
unconventional tools is that they are likely to be used more cautiously
than traditional policy instruments, as suggested by William
Brainard's (1967) classic analysis. For example, then Federal
Reserve Chairman Ben Bernanke emphasized in 2012 that because of their
uncertain costs and benefits, "the hurdle for using nontraditional
policies should be higher than for traditional policies." (6) In
addition, some of the benefits of unconventional policies may be
decreasing, and their costs increasing, in the size of the balance sheet
or the amount of time spent in a very low interest rate environment.
(7) Accordingly, policies that had widespread support early on in a ZLB
episode might be difficult to extend or expand with an already large
balance sheet.
So, while they can be valuable, unconventional policies also appear
to be less-than-perfect substitutes for changes in short-term policy
rates. Accordingly, the ZLB presents a different set of risks to
policymakers than those they face during more conventional times, and
thus they are worthy of consideration in their own right. We abstract
from unconventional policy tools for the remainder of our analysis.
I. Rationales for Risk Management Near the ZLB
The canonical framework of monetary policy analysis assumes that
the central bank sets the nominal interest rate to minimize a quadratic
loss function of the deviation of inflation from its target and the
output gap, and that the economy is described by a set of linear
equations. In most applications, uncertainty is incorporated as additive
shocks to these linear equations, capturing factors outside the model
that lead to variation in economic activity or inflation. (8) A
limitation of this approach is that, by construction, it denies that a
policymaker might choose to adjust policy in the face of changes in
uncertainty about economic fundamentals. However, the evidence discussed
below in sections II and III suggests that in practice, policymakers are
sensitive to uncertainty and respond to it by following what appears to
be a risk-management approach. Understanding why a central banker should
behave in this way requires some departure from the canonical framework.
The main contribution of this section is to consider a departure
associated with the possibility of a binding ZLB in the future. We show
that when a policymaker might be constrained by the ZLB in the future,
optimal policy today should take account of uncertainty about
fundamentals. We focus on two distinct channels through which this can
occur. First, we use the workhorse forward-looking New Keynesian model
to illustrate the expectations channel, in which the possibility of a
binding ZLB tomorrow leads to lower expected inflation and output
today, thus necessitating policy easing today. We then use a
backward-looking "Old" Keynesian setup to illustrate the
buffer stock channel, in which it can be optimal to build up output or
inflation today in order to reduce the likelihood and severity of being
constrained by the ZLB tomorrow. Both of these channels operate in
modern DSGE (dynamic stochastic general equilibrium) models such as
those described by Lawrence Christiano, Martin Eichenbaum, and Charles
Evans (2005) and by Frank Smets and Rafael Wouters (2007), but they are
more transparent if we consider them in separate, although related,
simple models. After describing these two channels we construct some
numerical simulations to assess their quantitative effects.
I.A. The Expectations Channel
The simple New Keynesian model has well-established
microfoundations based on price stickiness. Given that excellent
expositions of these foundations have been offered many times, for
example by Michael Woodford (2003) and Jordi Galí (2008), we simply
state our notation without much explanation. The model consists of two
main equations, the Phillips curve and the IS curve.
The Phillips curve is specified as

(1) $\pi_t = \kappa x_t + \beta E_t \pi_{t+1} + u_t$,

where $\pi_t$ and $x_t$ are both endogenous variables and denote
inflation and the output gap at date $t$; $E_t$ is the date-$t$
conditional expectations operator, with rational expectations assumed;
$u_t$ is a mean-zero exogenous cost-push shock; and $0 < \beta < 1$,
$\kappa > 0$. For simplicity we assume the central bank has a constant
inflation target equal to zero, so $\pi_t$ is the deviation of
inflation from that target. The cost-push shock represents exogenous
changes to inflation such as an independent decline in inflation
expectations, dollar appreciation, or changes in oil prices.
The IS curve is specified as

(2) $x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - \rho^n_t\right)$,

where $\sigma > 0$, $i_t$ is the nominal interest rate controlled by
the central bank, and $\rho^n_t$ is the natural rate of interest given by

(3) $\rho^n_t = \bar{\rho} + \sigma g_t + \sigma E_t\left(z_{t+1} - z_t\right)$.
The variable $g_t$ is an exogenous mean-zero demand shock, and
$z_t$ is the exogenous log of potential output. Since $g_t$ and $z_t$
are exogenous, so is the natural rate. Equation 2 indicates that
$\rho^n_t$ corresponds to the setting of the nominal interest rate
consistent with expected inflation at target and the output gap equal
to zero. (9) If potential output is constant and the demand shock
equals zero, then the natural rate equals the constant $\bar{\rho} > 0$.
Our analysis is centered on uncertainty in the natural rate. (10)
From equation 3 we see that this uncertainty derives from uncertainty
about $g_t$ and $E_t(z_{t+1} - z_t)$. We interpret the former as
arising from a variety of factors, including fiscal policy, foreign
economies' growth, and financial considerations such as deleveraging.
(11) The latter source of uncertainty is over the variety of factors
that can influence the expected rate of growth in potential output,
for example as emphasized in the recent debate over so-called secular
stagnation.
We adopt the canonical framework in assuming that the central bank
acts to minimize a quadratic loss function with the understanding that
private-sector behavior is governed by equations 1 through 3. The loss
function is
(4) $L = \frac{1}{2} E_0 \sum_{t=0}^{\infty} \beta^t \left(\pi_t^2 + \lambda x_t^2\right)$,

where $\lambda \ge 0$. We further impose the ZLB constraint, $i_t \ge 0$,
abstracting from the possibility that the effective lower bound on
$i_t$ is slightly negative. The short-term interest rate is the
central bank's only policy instrument, and it is set by solving for
optimal policy under discretion. In particular, in each period the
central bank sets the nominal interest rate with the understanding that
private agents anticipate that it will re-optimize in the following
periods.
We focus on optimal policy under discretion for two reasons. First,
the case of commitment with a binding ZLB has already been studied
extensively. In particular, it is well known from the contributions of
Paul Krugman (1998), Gauti Eggertsson and Michael Woodford (2003),
Woodford (2012), and Iván Werning (2012) that commitment can reduce the
severity of the ZLB problem by creating higher expectations of inflation
and the output gap. One implication of these studies is that the central
bank should commit to keeping the policy rate at zero longer than would
be prescribed by discretionary policy. By studying optimal policy under
discretion we find a different rationale for a policy of keeping rates
"lower for longer" that does not rely on the central bank
having the ability to commit to a time-inconsistent policy. (12)
Nevertheless, below we discuss our intuition for why our main result
should extend to the case of commitment. Second, discretion may better
approximate the institutional environment in which the FOMC operates.
A ZLB SCENARIO We study optimal policy when the central bank is
faced with the following simple ZLB scenario. The central bank observes
the current value of the natural rate, $\rho^n_0$, and the cost-push
shock $u_0$; moreover, there is no uncertainty in the natural rate
after $t = 2$, $\rho^n_t = \bar{\rho} > 0$ for all $t \ge 2$, nor in
the cost-push shock after $t = 1$, $u_t = 0$ for all $t \ge 1$.
However, there is uncertainty at $t = 1$ regarding the natural rate
$\rho^n_1$. (13) The variable $\rho^n_1$ is assumed to be distributed
according to the probability density function $f_\rho(\cdot)$.
This very simple scenario keeps the optimal policy calculation
tractable while preserving the main insights. We also think it captures
some key elements of the uncertainty faced by the FOMC today; notably,
our formulation allows us to consider the optimal timing of liftoff. We
do not have to take a stand on whether the ZLB is binding before
$t = 0$, but one possibility is that the natural rate $\rho^n_t$ was
sufficiently negative for $t < 0$ that the optimal policy rate was set
at zero, $i_t = 0$ for $t < 0$, but because the economy has been
improving the natural rate is close to zero by $t = 0$. The question is
whether to raise the policy rate at $t = 0$, $t = 1$, or $t = 2$.
ANALYSIS To find the optimal policy, we solve the model backwards
from $t = 2$ and focus on the policy choice at $t = 0$. First, for
$t \ge 2$, it is possible to perfectly stabilize the economy by setting
the nominal interest rate equal to the (now positive) natural rate,
$i_t = \rho^n_t = \bar{\rho}$. This leads to $\pi_t = x_t = 0$ for
$t \ge 2$. (14) The optimal policy at $t = 1$ will depend on the
realized value of the natural rate $\rho^n_1$. If $\rho^n_1 \ge 0$,
then it is again possible (and optimal) to perfectly stabilize by
setting $i_1 = \rho^n_1$, leading to $x_1 = \pi_1 = 0$. However, if
$\rho^n_1 < 0$, the ZLB binds and consequently
$x_1 = \rho^n_1/\sigma < 0$. The expected output gap at $t = 1$ is
$E_0 x_1 = \frac{1}{\sigma}\int_{-\infty}^{0} \rho f_\rho(\rho)\,d\rho \le 0$
and expected inflation is $E_0 \pi_1 = \kappa E_0 x_1 < 0$.
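To make these magnitudes concrete, suppose in addition (an
illustrative assumption of ours, not part of the scenario above) that
$\rho^n_1 \sim N(\mu, s^2)$. The truncated mean then has a closed form:
$$E_0 x_1 = \frac{1}{\sigma} E\left[\min(\rho^n_1, 0)\right] = \frac{1}{\sigma}\left[\mu\,\Phi(-\mu/s) - s\,\phi(\mu/s)\right],$$
where $\Phi$ and $\phi$ denote the standard normal cdf and pdf.
Differentiating with respect to $s$ gives
$\partial E_0 x_1/\partial s = -\phi(\mu/s)/\sigma < 0$: even when the
mean outlook $\mu > 0$ is unchanged, wider dispersion lowers $E_0 x_1$
and, through $E_0 \pi_1 = \kappa E_0 x_1$, expected inflation.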
Because agents are forward-looking, this low expected output gap
and inflation feed backward to t = 0. A low output gap tomorrow
depresses output today by a wealth effect via the IS curve. Low
inflation tomorrow depresses inflation today, since price-setting is
forward-looking in the Phillips curve, and it also depresses output
today by raising the real interest rate via the IS curve. The optimal
policy at t = 0 must take into account these effects. This implies that
optimal policy will be looser than if there were no chance that the ZLB
would bind tomorrow.
Mathematically, substituting for $\pi_0$ and $i_0$ using
equations 1 and 2, and taking into account the ZLB constraint, optimal
policy at $t = 0$ solves the following problem:
(5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
Two cases arise, depending on whether the ZLB binds at $t = 0$ or
not. Define the threshold value

(6) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

If $\rho^n_0 > \rho^*_0$, then the optimal policy is to follow the
standard monetary policy response to an inflation shock to the Phillips
curve, $\beta E_0 \pi_1 + u_0$, leading to

(7) $x_0 = -\frac{\kappa}{\lambda + \kappa^2}\left(\beta E_0 \pi_1 + u_0\right); \quad \pi_0 = \frac{\lambda}{\lambda + \kappa^2}\left(\beta E_0 \pi_1 + u_0\right)$.
The corresponding interest rate is
(8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
As long as $\int_{-\infty}^{0} \rho f_\rho(\rho)\,d\rho < 0$,
equation 8 implies that the optimal interest rate is lower than if
there were no chance of a binding ZLB tomorrow, that is, than if
$f_\rho(\rho) = 0$ for $\rho \le 0$. The interest rate is lower today
to offset the deflationary and recessionary effects of the possibility
of a binding ZLB tomorrow. If $\rho^n_0 < \rho^*_0$, then the ZLB binds
today and optimal policy is $i_0 = 0$. In this case,

(9) $x_0 = \frac{\rho^n_0}{\sigma} + \left(1 + \frac{\kappa}{\sigma}\right) E_0 x_1; \quad \pi_0 = \frac{\kappa \rho^n_0}{\sigma} + \left[(1 + \beta)\kappa + \frac{\kappa^2}{\sigma}\right] E_0 x_1$.
Notice from equation 6 that higher uncertainty makes it more likely
that the ZLB will bind at $t = 0$. Specifically, even if agents were
certain that the ZLB would not bind at $t = 1$, so that
$E_0 x_1 = E_0 \pi_1 = 0$, we would still have $i_0 = 0$ if
$\rho^n_0 \le -\sigma\kappa u_0/(\lambda + \kappa^2)$.
So the possibility of the ZLB binding tomorrow increases the
chances of being constrained by the ZLB today.
Since $E_0 x_1$ is a sufficient statistic for
$\int_{-\infty}^{0} \rho f_\rho(\rho)\,d\rho$ in equation 8, the
optimal policy has the flavor of a traditional forward-looking policy
reaction function that depends only on the conditional expectations of
the output and inflation gaps. However, $E_0 x_1$ is not independent of
a mean-preserving spread or any other change in the distribution of
$\rho^n_1$. Accordingly, optimal policy here departs from the certainty
equivalence principle, which says that the extent of uncertainty in the
underlying fundamentals (in our case $\rho^n_1$) does not affect the
optimal interest rate. (15) Furthermore, as a practical matter the
central bank must infer private agents' $E_0 x_1$ in order to determine
optimal policy. Since $E_0 x_1$ depends on the entire distribution of
$\rho^n_1$, so must the central bank's estimate of it, which is a much
more difficult inference problem than in the certainty equivalence case.
Turning specifically to the issue of uncertainty, we obtain the
following unambiguous comparative-static result:
Proposition 1: Higher uncertainty, that is, a mean-preserving
spread in the distribution of the natural rate $\rho^n_1$ tomorrow,
leads to a looser optimal policy today.
To see this, rewrite the key quantity as
$\int_{-\infty}^{0} \rho f_\rho(\rho)\,d\rho = E\min(\rho, 0)$. Since
the min function is concave, higher uncertainty through a
mean-preserving spread in $\rho^n_1$ leads to lower, that is, more
negative, $E_0 x_1$ and $E_0 \pi_1$. Hence, higher uncertainty leads to
lower $i_0$. (16)
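A minimal numerical check of this concavity argument, with the
normal distribution and the parameter values below being our own
illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0  # assumed mean of the t = 1 natural rate (annualized percent)

# E[min(rho, 0)] under mean-preserving spreads of increasing width s
for s in [0.5, 1.0, 2.0, 4.0]:
    rho = rng.normal(mu, s, size=1_000_000)
    print(f"s = {s:3.1f}: E[min(rho, 0)] = {np.mean(np.minimum(rho, 0)):+.3f}")

# Because min(., 0) is concave, Jensen's inequality implies this
# expectation falls as s rises, dragging down E_0 x_1 and hence the
# optimal i_0.
```

The printed expectation declines monotonically in $s$, matching the
closed form given after the analysis above.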
The effect of higher uncertainty on $i_0$ is unambiguous, but
the effect on the output gap and inflation is more subtle. If the ZLB
does not bind at $t = 0$ initially, higher uncertainty leads to lower
$E_0 x_1$ and $E_0 \pi_1$ and consequently to higher $x_0$ and lower
$\pi_0$ according to equation 7. On the other hand, if the ZLB does
bind at $t = 0$ initially, then higher uncertainty leads to lower
$x_0$ and lower $\pi_0$ according to equation 9. (17) Overall, the
effect of higher uncertainty on $\pi_0$ is unambiguously negative, but
the effect on $x_0$ may be positive or negative.
Another interesting feature of the solution is that the
distribution of the positive values of $\rho^n_1$ is irrelevant
for policy. That is, policy today is adjusted only with respect to the
states of the world in which the ZLB might bind tomorrow. The logic is
that if a very high value of $\rho^n_1$ is realized, monetary
policy can adjust to it and prevent a bout of inflation. This is a
consequence of the standard principle that, outside the ZLB, natural
rate shocks can and should be perfectly offset by monetary policy.
DISCUSSION Proposition 1 has several predecessors. Perhaps the
closest are Klaus Adam and Roberto Billi (2007), Taisuke Nakata
(2013a,b), and Anton Nakov (2008), who demonstrate numerically how, in a
stochastic environment, the ZLB leads the central bank to adopt a looser
policy. Our contribution is to provide a simple analytical example. (18)
This result has been correctly interpreted to mean that if negative
shocks to the natural rate lead the economy to be close to the ZLB, the
optimal response is to lower the interest rate aggressively to reduce
the likelihood that the ZLB becomes binding. The same logic applies to
liftoff. Following an episode where the ZLB has been a binding
constraint, the central bank should not raise rates as if it were sure
the ZLB constraint would never bind again. (19) Even though the best
forecast may be that the economy will recover and exit the ZLB--that is,
in the context of the model, that $E_0(\rho^n_1) > 0$--it can be
optimal to have zero interest rates today. Note that policy is looser
when the probability of being constrained by the ZLB in the future is
high or the potential severity of the ZLB problem is large, that is,
when $\int_{-\infty}^{0} \rho f_\rho(\rho)\,d\rho$ is a large negative
number; when the economy is less sensitive to interest rates (high
$\sigma$); and when the Phillips curve is steep (high $\kappa$).
With higher uncertainty, the increase in interest rates will be
faster on average from $t = 0$ to $t = 2$. This follows since the
$t = 2$ interest rate is unaffected by uncertainty whereas at $t = 0$
it is lower.
More generally, when uncertainty about being constrained by the ZLB in
the future dissipates, the interest rate can rise quickly because the
effects holding it down disappear along with the uncertainty.
While we have deliberately focused on a very simple example, our
results hold under more general conditions. For instance, the same
results still hold if $\{\rho^n_t\}_{t \ge 2}$ follows an arbitrary
stochastic process, as long as it is positive. In the online appendix
we consider the case of optimal policy
with uncertainty about cost-push inflation. (20) We show that optimal
policy also is looser if there is a chance of a binding ZLB in the
future due to a low cost-push shock. Furthermore, the risk that
inflation picks up due to a high cost-push shock does not affect policy
today. If such a shock were to occur tomorrow, it would lead to some
inflation; however, there is nothing that policy today can do about it.
Finally, while the model chosen is highly stylized, the core insights
would likely continue to hold in a medium-scale model with a variety of
shocks and frictions.
Intuitively, we expect a version of Proposition 1 to still hold
with commitment as well. Optimal policy with commitment involves
promising at $t = 0$ that, should the ZLB bind at $t = 1$, the central
bank would keep interest rates lower for $t \ge 2$ than it would
otherwise. As is well known, this policy reduces the size of the
inflation and output gaps at $t = 1$, but it does not eliminate them
entirely. These gaps could then generate negative expected inflation
and output gaps at $t = 0$ that become more negative the larger the
$t = 1$ uncertainty. Higher uncertainty should therefore lead to looser
policy
at t = 0, just as in the case of discretion.
One obvious limitation of these results is that we have assumed
(and will continue to do so when studying the backward-looking model
below) that there is no cost to raising rates quickly if needed. For
example,
our welfare criterion does not value interest-rate smoothing. Smoothing
has been rationalized by Marvin Goodfriend (1991) and others as
facilitating financial market adjustments or as a signaling tool. It is
true also that estimated reaction functions include lagged funds rate
terms to fit historical data. Nonetheless, there have been instances
when the FOMC has moved quickly. Some of these occurred as recessions
unfolded, but not all: between February 1994 and February 1995 rates
were tightened by 300 basis points and between November 1988 and
February 1989 by nearly 165 basis points. Moreover, as Brian Sack (2000)
and Glenn Rudebusch (2002) argue, interest rate smoothing might reflect
learning about an uncertain economy rather than a desire to avoid large
changes in interest rates per se. The policy prescriptions derived from
our models are specifically aimed at addressing such uncertainty.
I.B. The Buffer Stock Channel
The buffer stock channel relies not on forward-looking behavior but
on the view that the economy has some inherent momentum, for instance
due to adaptive inflation expectations, inflation indexation, habit
persistence, adjustment costs, or hysteresis. Suppose that output or
inflation has a tendency to persist. If there is a risk that the ZLB
binds tomorrow, building up output and inflation today creates some
buffer against hitting the ZLB tomorrow.
This intuition does not guarantee that it is optimal to increase
output or inflation today. In particular, the benefit of higher
inflation or output today in the event that a ZLB event arises tomorrow
must be weighed against the costs of excess output and inflation today,
as well as tomorrow's cost to bring down the output gap or
inflation if the ZLB turns out not to bind. So it is important to verify
that our intuition holds up in a model.
To isolate the buffer stock channel from the expectations channel
we focus on a purely backward-looking "Old" Keynesian model.
Purely backward-looking models do not have microfoundations as the New
Keynesian model does, but backward-looking elements appear to be
important empirically. (21) Backward-looking models have been studied
extensively in the literature, including by Thomas Laubach and John
Williams (2003), Athanasios Orphanides and Williams (2002), David
Reifschneider and Williams (2000), and Rudebusch and Lars Svensson
(1999).
The model we study simply replaces the forward-looking terms in
equations 1 and 2 with backward-looking terms:

(10) $\pi_t = \xi \pi_{t-1} + \kappa x_t + u_t$;

(11) $x_t = \delta x_{t-1} - \frac{1}{\sigma}\left(i_t - \rho^n_t - \pi_{t-1}\right)$,

where $0 < \xi < 1$ and $0 < \delta < 1$. This model is
essentially the same as the simple example Reifschneider and Williams
(2000) use to motivate their analysis of monetary policy constrained by
the ZLB. Unlike in the New Keynesian model, it is difficult to map
$\rho^n_t$ directly to underlying fundamental shocks as we do in
equation 3. For simplicity, we continue to refer to this exogenous
variable as the natural rate and use equation 3 as a guide to
interpreting it, but it is perhaps better to think of it simply as a
"demand" shock or "IS" shock.
ANALYSIS We consider the ZLB scenario described in section I.A
(under "A ZLB Scenario") and again solve the model backwards
from $t = 2$ to determine optimal policy at $t = 0$ and how it is
affected by uncertainty in the natural rate at $t = 1$. After $t = 1$
the economy does not experience any more shocks, but it inherits
initial lagged inflation and output terms $\pi_1$ and $x_1$, which may
be positive or negative. The output gap term can be easily adjusted by
changing the interest rate $i_t$ provided the central bank is not
constrained by the ZLB at $t = 2$, that is, if $\rho^n_2 = \bar{\rho}$
is large enough, an assumption we will maintain. (22) Given the
quadratic loss, it is optimal to smooth this adjustment over time so
that the economy converges back to its steady state slowly. The
details of this adjustment after $t = 2$ are not very important for our
analysis. What is important is that the overall loss of starting from
$t = 2$ with lagged inflation $\pi_1$ and output gap $x_1$ is a
quadratic function of $\pi_1$ only; we can write it as $W\pi_1^2/2$,
where $W$ is a constant that depends on $\lambda$, $\kappa$, and
$\beta$ and is calculated in the online appendix.
Turn now to optimal policy at $t = 1$. Take the realization of
$\rho^n_1$ and last period's output gap $x_0$ and inflation $\pi_0$ as
given. Substituting for $\pi_1$ and $i_1$ using equations 10 and 11,
and taking into account the ZLB constraint, optimal policy at $t = 1$
solves the following problem:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],
where the policymaker now anticipates the cost of having inflation
$\pi_1$ tomorrow, and her choices are affected by yesterday's values
$x_0$ and $\pi_0$.
Depending on the value of $\rho^n_1$, two cases can arise.
Define the threshold value:

(12) $\rho^*_1(x_0, \pi_0) = -\left[\frac{(1 + \beta W)\kappa\xi}{(1 + \beta W)\kappa^2 + \lambda} + 1\right]\pi_0 - \sigma\delta x_0$.

For $\rho^n_1 \ge \rho^*_1(x_0, \pi_0)$ the ZLB is not binding;
otherwise it is. Hence the probability of hitting the ZLB is
$\int_{-\infty}^{\rho^*_1(x_0, \pi_0)} f_\rho(\rho)\,d\rho$. In contrast
to the forward-looking case, the probability of being constrained by
the ZLB is now endogenous at $t = 1$ and can be influenced by policy at
$t = 0$. As indicated by equation 12, a higher output gap or inflation
at $t = 0$ will reduce the likelihood of hitting the ZLB at $t = 1$.
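A small sketch of this endogeneity; the parameter values, the value
of $W$ (which the paper computes in its online appendix), and the
normal distribution for $\rho^n_1$ are all our illustrative assumptions:

```python
from scipy.stats import norm

# Illustrative parameter values; W stands in for the appendix constant.
beta, kappa, xi, lam = 0.99, 0.10, 0.95, 0.25
sigma, delta, W = 1.0, 0.75, 2.0

def rho_star(x0, pi0):
    """Threshold below which the ZLB binds at t = 1 (equation 12)."""
    a = (1 + beta * W) * kappa * xi / ((1 + beta * W) * kappa**2 + lam)
    return -(a + 1) * pi0 - sigma * delta * x0

mu, s = 0.5, 1.5  # assumed mean and spread of rho^n_1 (annualized percent)
for x0, pi0 in [(-1.5, -0.7), (0.0, 0.0), (1.0, 0.5)]:
    p = norm.cdf(rho_star(x0, pi0), loc=mu, scale=s)
    print(f"x0 = {x0:+.1f}, pi0 = {pi0:+.1f}: Pr(ZLB at t=1) = {p:.2f}")
```

Running this, the probability of being constrained falls as the
inherited output gap and inflation rise, which is exactly the buffer
stock motive.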
If $\rho^n_1 \ge \rho^*_1(x_0, \pi_0)$, optimal policy at $t = 1$
yields

$x_1 = -\frac{(1 + \beta W)\kappa\xi}{(1 + \beta W)\kappa^2 + \lambda}\pi_0; \quad \pi_1 = \frac{\lambda\xi}{(1 + \beta W)\kappa^2 + \lambda}\pi_0$.

This is similar to the forward-looking model's solution, which
reflects the trade-off between output and inflation, except that optimal
policy now takes into account the cost of having inflation away from
target tomorrow, through $W$. The loss for this case is
$V(x_0, \pi_0, \rho^n_1) = W\pi_0^2/2$, since in
this case the problem is the same as the one faced at $t = 2$. If
$\rho^n_1 < \rho^*_1(x_0, \pi_0)$ the ZLB binds, in which case

$x_1 = \delta x_0 + \frac{\pi_0 + \rho^n_1}{\sigma}; \quad \pi_1 = \kappa\delta x_0 + \left(\xi + \frac{\kappa}{\sigma}\right)\pi_0 + \frac{\kappa\rho^n_1}{\sigma}$.
The expected loss from $t = 1$ on, as a function of the output gap
and inflation at $t = 0$, is then given by:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].
This expression reveals that the initial conditions $x_0$ and
$\pi_0$ matter by shifting the payoff from continuation in the
non-ZLB states, $W\pi_0^2/2$; the payoff in the case where the
ZLB binds (the second integral); and the relative likelihood of ZLB and
non-ZLB states through $\rho^*_1(x_0, \pi_0)$.
Since the loss function is continuous in $\rho$, even at
$\rho^*_1(x_0, \pi_0)$, this last effect is irrelevant for welfare at
the margin.
The last step is to find the optimal policy at time 0, taking into
account the effect on the expected loss tomorrow:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
We use this expression to prove the following, which is analogous
to Proposition 1:
Proposition 2: For any initial condition, a mean-preserving spread
in the distribution of the natural rate $\rho^n_1$ tomorrow leads to a
looser optimal policy today.
From equations 10 and 11, higher uncertainty also leads to larger
$x_0$ and $\pi_0$. The proof of Proposition 2 is in the
appendix. Note that it incorporates the case of uncertainty regarding
cost-push shocks at $t = 1$ and shows that a mean-preserving spread in
the cost-push shock tomorrow leads to looser policy today as well.
Our model also implies that an increase in uncertainty over the
initial output gap will lead to looser policy. Specifically we have:
Proposition 3: Suppose the initial output gap $x_{-1}$ is
unknown at $t = 0$ but becomes known at $t = 1$, and the central bank
has a prior distribution over $x_{-1}$. Then a mean-preserving spread
in this prior distribution leads optimal policy to be looser at $t = 0$.
The proof of this proposition is similar to the one for Proposition
2. This result is particularly germane to the current policy environment
where there is uncertainty over the amount of slack in the economy.
Therefore, Proposition 3 provides an additional rationale for delaying
liftoff.
DISCUSSION As far as we know, Proposition 2 is a new result, but
its implications are similar to those of Proposition 1. As in the
forward-looking case, liftoff from an optimal zero interest rate should
be delayed today with an increase in uncertainty about the natural rate
or cost-push shock that raises the odds of the ZLB binding tomorrow.
Similarly, even if not constrained by the ZLB today, an increase in
uncertainty about the likelihood of being constrained by the ZLB
tomorrow leads to a reduction in the policy rate today. So the buffer
stock channel and the expectations channel have very similar policy
implications, though for very different reasons. The expectations
channel involves the possibility of being constrained by the ZLB
tomorrow feeding backward to looser policy today. The buffer stock
channel has looser policy today feeding forward to reduce the likelihood
and severity of being at the ZLB tomorrow. Note that as in the
forward-looking model, optimal policy prescribes that interest rates
rise as the likelihood of being constrained by the ZLB in the future
falls, even if the output gap or inflation does not change.
It is useful to compare the policy implications of the buffer stock
channel to the argument developed in Olivier Coibion, Yuriy
Gorodnichenko, and Johannes Wieland (2012). Their paper studies the
tradeoff between the level of the inflation target and the risk of
hitting the ZLB using policy reaction functions instead of optimal
policy. (23) Our analysis does not require a drastic change in the
monetary policy framework in order to improve outcomes: the improvement
is achieved through standard interest-rate policy rather than through a
credibility-damaging change to the inflation target.
I.C. Quantitative Assessment
We now assess the quantitative significance of the expectations and
buffer stock channels using calibrated versions of the forward- and
backward-looking models that we solve numerically. With parameters drawn
from the literature and initial conditions calibrated to early 2015, we
compare equilibrium outcomes under optimal discretion to alternative
policies that do not take into account uncertainty. Our numerical
methods are described in the online appendix. Importantly, and in
contrast to most of the literature, they allow uncertainty to affect
policy and to be reflected in welfare.
PARAMETER VALUES The parameter values are reported in table 1. We
use the same values for parameters that are common to both models. The
time period is one quarter, with $t = 1$ taken to be 2015Q1. The natural
rate $\rho^n_t$ is the sum of deterministic and random
components. We assume the deterministic component rises linearly between
$t = 1$ and $t = T > 1$, after which it remains constant at
$\bar{\rho} = 1.75$ percent, which corresponds to the median long-run
funds rate in the March 2015 FOMC Summary of Economic Projections, less
the FOMC's inflation target $\pi^* = 2$. The random component is
AR(1) with autocorrelation coefficient $\rho_\epsilon$ and innovation
standard deviation $\sigma_\epsilon$. We also assume there is an i.i.d.
cost-push shock with standard deviation $\sigma_u$. There is no
uncertainty for $t > T$.
The degree of uncertainty we assume is central to our findings. The
particular values of $\rho_\epsilon$ and $\sigma_\epsilon$
are not as important to our results as the unconditional volatility they
imply. There is wide variation in estimates of the volatility of the
natural rate, corresponding to differences in the theoretical concepts,
models, and empirical methods used. Our calibration implies that the
unconditional standard deviation of the natural rate is 2.5 percent at
an annual rate. This lies within the range of estimates in Robert
Barsky, Alejandro Justiniano, and Leonardo Melosi (2014), Vasco Curdia
and others (2015), and Laubach and Williams (2003). The autocorrelation
coefficient is set midway between the values in Adam and Billi (2007)
and Curdia and others (2015). We set the standard deviation of the
cost-push shock $\sigma_u$ close to the value used in Adam and Billi
(2007). Assuming serial correlation or a moderately different
unconditional standard deviation of the cost-push shock is not very
important for our results. Finally, by assuming that the economy is not
subject to shocks for $t > T$ and that the long-run natural rate
$\bar{\rho}$ is a known constant, we have been conservative in our
specification of uncertainty.
The Phillips curve slope, elasticity of intertemporal substitution,
and discount factor are all set to values common in the New Keynesian
literature. For the backward-looking model we set the coefficient on
lagged inflation in equation 10 to $\xi = 0.95$, reflecting the fact
that inflation has been very persistent in recent years. (24) The
coefficient on lagged output in equation 11 is $\delta = 0.75$, in
order to generate significant persistence in the output gap. For the
backward-looking model we assume an initial inflation rate of 1.3
percent, a recent reading for core PCE inflation, and an initial output
gap $x_0 = -1.5$ percent, based on a simple calculation using the
2014Q4 unemployment rate (5.7 percent), an estimate of the natural rate
of unemployment (5.0 percent), and Okun's law. As indicated by
Proposition 3, adding uncertainty about the initial output gap would
only strengthen our results. (25)
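As a compact reference, the calibration as described in the text,
with the remaining parameters filled in by assumption (the paper's
table 1 has the exact values):

```python
# Values labeled "assumed" are illustrative stand-ins for parameters
# the text only says are "common in the New Keynesian literature."
calibration = {
    "beta": 0.99,     # discount factor (assumed)
    "sigma": 1.0,     # inverse intertemporal elasticity (assumed)
    "kappa": 0.10,    # Phillips curve slope (assumed)
    "xi": 0.95,       # lagged-inflation coefficient (from the text)
    "delta": 0.75,    # lagged-output coefficient (from the text)
    "rho_bar": 1.75,  # long-run natural rate, annualized percent
    "pi_star": 2.0,   # inflation target, percent
    "sd_rho_n": 2.5,  # unconditional s.d. of the natural rate, percent
}

# The initial output gap via Okun's law with an assumed coefficient of 2:
u, u_star = 5.7, 5.0        # 2014Q4 unemployment rate and natural rate
x0 = -2.0 * (u - u_star)    # = -1.4, close to the -1.5 percent used
pi0 = 1.3                   # initial core PCE inflation, percent
```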
We measure the quantitative effect of uncertainty on policy by
comparing equilibrium outcomes under optimal discretion to a scenario in
which we solve for optimal discretion when the central bank observes the
current natural rate and cost-push shocks but acts as if there will be
no more shocks. Private agents understand this policy but take into
account the true nature of uncertainty. Actual outcomes will be
inconsistent with the central bank's assumptions, so we call this
the "naive" policy. We also compare equilibrium outcomes under
optimal discretion to those obtained assuming the central bank follows a
reaction function with weights on inflation and the output gap as in
John Taylor (1993), and a constant term equal to 3.75 percent,
corresponding to $\bar{\rho} + \pi^*$.
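Concretely, this reaction function can be written as follows; the
1.5 and 0.5 weights are Taylor's (1993) original coefficients, and
imposing the ZLB with a max operator is our addition:

```python
def taylor_rule(pi, x, pi_star=2.0, intercept=3.75):
    """Taylor (1993) rule with the paper's constant term, ZLB imposed."""
    return max(0.0, intercept + 1.5 * (pi - pi_star) + 0.5 * x)

# At the backward-looking model's initial conditions the rule already
# prescribes liftoff: 3.75 - 1.05 - 0.75 = 1.95 percent.
print(taylor_rule(pi=1.3, x=-1.5))
```

This is consistent with the simulations below, in which the Taylor rule
prescribes rates above both the optimal and naive paths early on.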
RESULTS FOR THE FORWARD-LOOKING MODEL Figure 1 displays
representative paths of the nominal interest rate, inflation, and the
output gap under optimal discretion, the naive policy, and the Taylor
rule, calculated by setting the ex post realized shocks to zero, the
modal outcome. Under the modal outcome, the interest rate under the
naive policy follows the natural rate exactly. The difference between
the interest rate paths indicates the substantial impact uncertainty has
on optimal policy; the naive policy is between 50 and 150 basis points
above the optimal policy for 2 years. This difference in policy has
little impact on the output gap, but under optimal policy the inflation
gap is closed much faster. The inflation gap is more negative under the
naive policy because the interest rate is higher both initially and in
the future since it does not take into account uncertainty about the
ZLB. (26) The Taylor rule prescribes rates above both the optimal and
naive policies for most of the simulation period, and because agents are
forward looking this feeds backward to cause much more negative gaps.
(27)
Table 2 summarizes the distribution of outcomes under the three
different policies based on simulating 50,000 paths drawn from the
calibrated distributions of the shocks. Optimal discretion implies
one-third the loss expected under the naive policy and one-eighth the
loss expected under the Taylor rule. (28) One way to interpret these
losses is to calculate the per-period reduction in the output gaps and
inflation gaps that would make the central banker indifferent between
the outcomes under the optimal policy and those under the alternatives.
Both gaps would have to be 43 percent and 65 percent smaller under the
naive policy and the Taylor rule, respectively, to achieve this
indifference. Under optimal discretion, the median liftoff (defined as
the nominal interest rate exceeding 25 basis points) is delayed by 2
quarters compared to the other policies; the mean liftoff is delayed by
more than 3 quarters, reflecting skewness in the outcomes. At the time
of liftoff, inflation and output are much closer to the target under
optimal discretion compared to the two alternative policies.
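One way to see the 43 and 65 percent figures: with a quadratic loss,
scaling both gaps by a factor $1 - m$ scales the loss by $(1 - m)^2$,
so matching the optimal policy's loss requires
$$(1 - m)^2 = \tfrac{1}{3} \Rightarrow m \approx 0.42 \quad\text{and}\quad (1 - m)^2 = \tfrac{1}{8} \Rightarrow m \approx 0.65$$
for the naive policy and the Taylor rule, respectively.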
[FIGURE 1 OMITTED]
When comparing policies it is also important to assess how well
each balances the risks of bad outcomes. We do this by comparing the
75th percentile across simulations of the maximum inflation gap and the
25th percentile of the lowest output gap over the first 6 years. Under
optimal policy, the bad output outcomes are much lower than under either
alternative policy. The bad inflation outcomes do not seem particularly
high under any of the policies.
The statistic in the bottom row is the median standard deviation of
changes in the nominal interest rate. By comparing interest rate
volatility under the Taylor rule in our model with that implied by the
same Taylor rule in the data, we can determine whether the uncertainty
underlying our results is reasonable. If the volatility were much higher
in our simulations we would conclude that it is unreasonably large. In
fact, the 0.97 standard deviation in our Taylor rule simulations is only
a little larger than the 0.88 standard deviation we find in our data.
(29) Interest rates are more volatile under both the optimal and naive
policies because they respond to all fundamental shocks rather than to
inflation and output alone. (30)
RESULTS FOR THE BACKWARD-LOOKING MODEL Figure 2 is the analog of
figure 1 for the backward-looking model. The dynamics of return to
target are quite different from those in the forward-looking model, but
the key qualitative results are the same. As in the forward-looking
model, optimal policy is substantially looser than both the naive policy
and the Taylor rule. Here the optimal policy prescribes much more delay
in lifting off from the ZLB. Delay now occurs under the naive policy
because it is optimal to stimulate output strongly in order to return
inflation to target, but this delay is shorter than under the optimal
policy. The optimal policy also has a sharper liftoff than the naive
policy. However, the increases under optimal policy are equivalent to
just 25 basis points at each FOMC meeting, the same as the
"measured pace" followed during the Fed tightening over
2004-06. Qualitatively, the differences in the output and inflation
outcomes across the three policies are similar to those in the
forward-looking model as well. Taking into account uncertainty about the
ZLB leads the optimal policy to return inflation to target faster than
the naive policy, and it achieves this by allowing the output gap to
overshoot more in order to build a buffer against the possibility of bad
shocks in the future.
[FIGURE 2 OMITTED]
Table 3 is constructed analogously to table 2. It shows that
optimal policy provides only a marginal improvement over the naive
policy in terms of expected losses, due to the offsetting effects of the
inflation and output gaps. The median gaps are roughly closed at liftoff
under both the optimal and naive policies, but they are quite large
under the Taylor rule. The bad outcomes are similar across the three
scenarios. Finally, note that the volatility of the interest rate under
the Taylor rule is lower here compared to the data and the
forward-looking model, so the underlying uncertainty is not excessive.
We conclude by illustrating one of the risks the optimal policy is
able to address, namely the possibility that a shock will drive up
inflation before the baseline liftoff. Figure 3 depicts a particular
simulation where there is a large positive cost-push shock before the
liftoff under the optimal policy shown in figure 2. The shock triggers
earlier liftoff under the optimal policy so that the inflation response
is mild. The implication is that staying at zero longer under the
optimal policy does not impair the central bank's ability to respond to
future contingencies; the bank does, however, have to be prepared to
raise rates promptly. We obtain similar results with the forward-looking
model.
II. Historical Precedents for Risk Management
The previous section demonstrates that the ZLB justifies a risk
management approach to monetary policy. One may question whether
following such an approach would be a departure from past FOMC behavior.
Clearly, concerns about the ZLB are a relatively recent phenomenon.
Nevertheless there are many reasons why a risk management approach can
be justified when away from the ZLB, and we begin this section by
reviewing them. We then demonstrate that the Federal Reserve has used
risk management to justify its policy decisions over the period
1987-2008.
[FIGURE 3 OMITTED]
The FOMC minutes and other Federal Reserve communications reveal a
number of episodes in which uncertainty or insurance motives were used
to justify the Fed's policy decisions. Sometimes the FOMC indicated
that it
had a wait-and-see approach to taking further actions or muted a funds
rate move due to its uncertainty over the course of the economy or the
extent to which early policy moves had yet shown through to economic
activity and inflation. At other times the FOMC said its policy stance
was taken in part as insurance against undesirable outcomes; during
these times, the FOMC often noted that the potential costs of a policy
overreaction likely were modest compared to the scenario it was insuring
against.
Two episodes are particularly revealing. The first is the hesitancy
of the FOMC to raise rates in 1997 and 1998 to counter inflationary
threats because of uncertainty generated by the Asian financial crisis
and the subsequent rate cuts following the Russian default. The second
is the loosening of policy over 2000 and 2001, when uncertainty over the
degree to which growth was slowing and the desire to insure against
downside risks appeared to influence policy. Furthermore, in late 2001
the FOMC's aggressive actions also seemed to be influenced by
attention to the risks associated with the ZLB on interest rates.
While the historical record is replete with references suggesting
that the policy stance was influenced by uncertainty or insurance
motives, this does not establish that risk management actually had a
material impact on policy. Therefore, we conclude this section by
quantifying these references into variables that we then use in section
III to assess the importance of risk management for actual policy
decisions.
II.A. Rationales for Risk Management Away from the ZLB
Policymakers have long emphasized the importance of uncertainty in
their decision making. As Alan Greenspan (2004) put it: "The
Federal Reserve's experiences over the past two decades make it
clear that uncertainty is not just a pervasive feature of the monetary
policy landscape; it is the defining characteristic of that
landscape." (31) This sentiment seems at odds with linear-quadratic
models in which optimal policy involves adjusting the interest rate in
response to only the mean of the distribution of shocks away from the
ZLB. What kinds of factors cause departures from such conditions and
justify the risk management approach?
Relaxing the assumption of a quadratic loss function is perhaps the
simplest way to generate a rationale for risk management. The quadratic
loss function is justified by Woodford (2003) as being a local
approximation to consumer welfare. However, it might not be a good
approximation when large shocks drive the economy far from the
underlying trend; alternatively, it might simply be an inadequate
approximation of FOMC behavior. Examples of models with asymmetric loss
functions include those described by Paolo Surico (2007), Lutz Kilian
and Simone Manganelli (2008), and Juan J. Dolado, Ramón
María-Dolores, and Francisco Ruge-Murcia (2004). (32) The model studied
by the last authors implies that the optimal policy rule can involve
nonlinear output gap and inflation terms if policymakers are less averse
to allowing output to run above potential than below it. The relevance
of higher moments in the distribution of shocks for optimal policy is an
obvious by-product of these nonlinearities.
Nonlinearities in economic dynamics are another natural motivation.
For example, suppose recessions are episodes when self-reinforcing
dynamics amplify the effects of downside shocks. This could be modeled
as a dependence of current output on lagged output, as in our
backward-looking model, but with such dependence being concave rather
than linear. Intuitively, negative shocks have a more dramatic effect on
reducing future output than positive shocks have on increasing it, so
greater uncertainty leads to looser optimal policy to guard against the
more detrimental outcomes. Alternatively, suppose the Phillips curve is
convex, perhaps owing to downward nominal wage rigidities that become
more germane with low inflation. Here, a positive shock to the output
gap leads to a significant increase of inflation above target while a
negative shock leads to a much smaller decline in inflation. The larger
the spread of these shocks, the greater the odds of experiencing a bad
inflation outcome. Optimal policy guards against this, leading to a
tightening bias. (33) The risk management approach also appears in the
large literature on how optimal monetary policy should adjust for
uncertainty about the true model of the economy. Brainard (1967) derived
the important result that uncertainty over the effects of policy should
lead to caution and smaller policy responses to deviations from target.
In contrast, the robust control analysis of Lars Hansen and Thomas
Sargent (2008) has been interpreted to mean that uncertainty over model
mis-specification should generate aggressive policy actions. As
explained by Gadi Barlevy (2011), both the attenuation and
aggressiveness results depend on the specifics of the underlying
environment. Nonetheless, these analyses still often indicate that
higher moments of the distribution of shocks can influence the setting
of optimal policy.
II.B. 1997-98
The year 1997 was a good one for the U.S. economy: real GDP
increased 3 3/4 percent (the March 1998 third estimate), the
unemployment rate fell to 4.7 percent, and core CPI inflation was 2 1/4
percent. With solid growth and tight labor markets, the FOMC clearly was
concerned about a buildup in inflationary pressures. As noted in the
Federal Reserve's February 1998 Monetary Policy Report:
The circumstances that prevailed through most of 1997 required that
the Federal Reserve remain especially attentive to the risk of a
pickup in inflation. Labor markets were already tight when the year
began, and nominal wages had started to rise faster than
previously. Persistent strength in demand over the year led to
economic growth in excess of the expansion of the economy's
potential, intensifying the pressures on labor supplies. (34)
Indeed, over much of the period between early 1997 and mid-1998,
the FOMC directive maintained a bias indicating that it was more likely
to raise rates to battle inflationary pressures than it was to lower
them. Nonetheless, the FOMC left the funds rate unchanged at 5.5 percent
from March 1997 until September 1998. Why did it do so?
Certainly the inaction in large part reflected the forecast for
growth to moderate to a more sustainable pace as well as the fact that
actual inflation had remained contained despite tight labor market
conditions. Based on the funds rate remaining at 5.5 percent, the Board
of Governors' staff forecast in the August 1998 Greenbook projected
GDP growth to slow from 2.9 percent in 1998 to 1.7 percent in 1999. The
unemployment rate was projected to rise to 5.1 percent by the end of
1999 and core CPI inflation was projected to edge down to 2.1 percent.
Additionally, however, on several occasions heightened uncertainty over
the outlook for growth and inflation apparently reinforced the decision
to refrain from raising rates. The following quote from the July 1997
FOMC minutes is a revealing example:
While the members assessed risks surrounding such a forecast as
decidedly tilted to the upside, the slowing of the expansion should
keep resource utilization from rising substantially further, and
this outlook together with the absence of significant early signs
of rising inflationary pressures suggested the desirability of a
cautious "wait and see" policy stance at this point. In the current
uncertain environment, this would afford the FOMC an opportunity to
gauge the momentum of the expansion and the related degree of
pressure on resources and prices. (35)
Furthermore, the FOMC did not regard "waiting and seeing"
as having a high cost. They thought any increase in inflation would be
slow and that, if needed, a limited tightening would be sufficient to
rein in any emerging price pressures. This is seen in the following
quote from the same meeting:
The risks of waiting appeared to be limited, given that the
evidence at hand did not point to a step-up in inflation despite
low unemployment and that the current stance of monetary policy did
not seem to be overly accommodative.
... In these circumstances, any tendency for price pressures to
mount was likely to emerge only gradually and to be reversible
through a relatively limited policy adjustment.
Thus, it appears that uncertainty and associated risk management
considerations supported the FOMC's decision to leave policy on
hold.
Of course, the potential fallout for the U.S. economy of the Asian
financial crisis was a major factor underlying the uncertainty about the
outlook. The baseline scenario was that the associated weakening in
demand from abroad and a stronger dollar would be enough to keep
inflationary pressures in check but would not be strong enough to cause
inflation or employment to fall too low. As Chairman Greenspan noted in
his February 1998 Humphrey-Hawkins testimony to Congress, there were
substantial risks to this outlook, with the delicate balance dictating
unchanged policy:
However, we cannot rule out two other, more worrisome
possibilities. On the one hand, should the momentum to domestic
spending not be offset significantly by Asian or other
developments, the U.S. economy would be on a track along which
spending could press too strongly against available resources to be
consistent with contained inflation. On the other, we also need to
be alert to the possibility that the forces from Asia might damp
activity and prices by more than is desirable by exerting a
particularly forceful drag on the volume of net exports and the
prices of imports. When confronted at the beginning of this month
with these, for the moment, finely balanced, though powerful
forces, the members of the Federal Open Market Committee decided
that monetary policy should most appropriately be kept on hold.
(36)
By late in the summer of 1998, this balance had changed, as the
strains following the Russian default weakened the outlook for foreign
growth and tightened financial conditions in the United States. The FOMC
was concerned about the direct implications of these developments for
U.S. financial markets, already evident in the data, as well as their
implications for the real economy, which were still just a prediction.
The staff forecast prepared for the September FOMC meeting reduced the
projection for growth in 1999 by about 1/2 percentage point to 1 1/4
percent, predicated on a 75 basis-point reduction in the funds rate
spread out over three quarters. Such a forecast was not a
disaster--indeed, at 5.2 percent the unemployment rate projected for the
end of 1999 was still below the staff's estimate of its natural
rate. Nonetheless, the FOMC moved much faster than the staff assumed it
would, lowering rates 25 basis points at its September and November
meetings as well as making an inter-meeting rate cut in October.
According to the FOMC minutes, the rate cuts were made in part as
insurance against a worsening of financial conditions and weakening
activity. As they noted in September of that year:
Such an action was desirable to cushion the likely adverse
consequences on future domestic economic activity of the global
financial turmoil that had weakened foreign economies and of the
tighter conditions in financial markets in the United States that
had resulted in part from that turmoil. At a time of abnormally
high volatility and very substantial uncertainty, it was impossible
to predict how financial conditions in the United States would
evolve ... In any event, an easing policy action at this point
could provide added insurance against the risk of a further
worsening in financial conditions and a related curtailment in the
availability of credit to many borrowers. (37)
While the references to insurance are clear, a case also can be
made that these policy moves were undertaken largely to correct misses
in the expected paths for growth and inflation relative to the
FOMC's policy goals. At that time, the prescriptions to address the
risks to their policy goals were in conflict: risks to achieving the
inflation mandate called for higher interest rates while risks to
achieving the maximum employment mandate called for lower rates. As the
above quote from Chairman Greenspan's February 1998 testimony
indicated, in early 1998 the FOMC thought that a 5 1/2 percent funds
rate kept these risks in balance. Subsequently, as the odds of economic
weakness increased, the FOMC cut rates to bring the risks to the two
goals back into balance. As Chairman Greenspan said in his February 1999
Humphrey-Hawkins testimony:
To cushion the domestic economy from the impact of the increasing
weakness in foreign economies and the less accommodative conditions
in U.S. financial markets, the FOMC, beginning in late September,
undertook three policy easings.... These actions were taken to
rebalance the risks to the outlook, and, in the event, the markets
have recovered appreciably. (38)
Were the late 1998 rate moves a balancing of forecast
probabilities, insurance against a downside skew in possible outcomes,
or some combination of both? There is no easy answer. This motivates our
econometric work in section III, which seeks to disentangle the normal
response of policy to expected outcomes from uncertainty and other
related factors that may have influenced the policy decision.
II.C. 2000-01
In the end, the economy weathered the fallout from the Russian
default well. The strength of the U.S. economy and underlying
inflationary pressures led the FOMC to execute a series of rate hikes
that brought the funds rate up to 6.5 percent by May of 2000. At the
time of the June 2000 FOMC meeting, the unemployment rate stood at 4.1
percent and core PCE inflation, which the FOMC was now using as its main
measure of consumer price inflation, was running at about 1 3/4 percent,
up from 1 1/2 percent in 1999. The staff forecast that growth would
moderate to a rate near or a little below potential, the unemployment
rate would remain near its current level, and inflation would rise to
2.3 percent in 2001--and this forecast was predicated on another 75
basis points of tightening. Despite this outlook, the FOMC decided to
leave rates
unchanged. What drove this pause? It seems likely to us that risk
management was an important consideration.
In particular, the FOMC appeared to want to see how uncertainty
over the outlook would play out. First, the incoming data and anecdotal
reports from committee members' business contacts pointed to a
slowdown in growth, although how much it was slowing was unclear.
Second, with rates having risen substantially over the previous year,
and given the lags from policy changes to economic activity, it was
unlikely that the full effects of the hikes had yet been felt. Given the
relatively high level of the funds rate and the slowdown in growth that
appeared to be in train, the FOMC seemed wary of over-tightening. Third,
despite the staff forecast, the FOMC apparently considered the costs of
waiting, in terms of inflation risks, to be small. Accordingly, the FOMC
thought it better to put a rate increase on hold and see how the economy
evolved. The June 2000 FOMC minutes contain a good deal of commentary
supporting this interpretation: (39)
The increasing though still tentative indications of some slowing
in aggregate demand, together with the likelihood that the earlier
policy tightening actions had not yet exerted their full retarding
effects on spending, were key factors in this decision. The
uncertainties surrounding the outlook for the economy, notably the
extent and duration of the recent moderation in spending and the
effects of the appreciable tightening over the past year ...
reinforced the argument for leaving the stance of policy unchanged
at this meeting and weighting incoming data carefully.... Members
generally saw little risk in deferring any further policy
tightening move, particularly since the possibility that underlying
inflation would worsen appreciably seemed remote under prevailing
circumstances. (40)
In the second half of 2000 it became increasingly evident that
growth had slowed to a pace somewhat below trend and inflation was
moving up at a slower pace than the staff had projected in June. The
FOMC's response was to hold the funds rate at 6.5 percent through
the end of 2000. But the data around the turn of the year proved to be
weaker than anticipated. In a conference call on January 3, 2001, the
FOMC cut the funds rate to 6 percent, and then at its end-of-month
meeting it lowered the rate again, to 5 1/2 percent. (41)
In justifying the aggressive ease, the minutes stated:
Such a policy move in conjunction with the 50 basis point reduction
in early January would represent a relatively aggressive policy
adjustment in a short period of time, but the members agreed on its
desirability in light of the rapid weakening in the economic
expansion in recent months and associated deterioration in business
and consumer confidence. The extent and duration of the current
economic correction remained uncertain, but the stimulus ... would
help guard against cumulative weakness in economic activity and
would support the positive factors that seemed likely to promote
recovery later in the year ... In current circumstances, members
saw little inflation risk in such a "front-loaded" easing policy,
given the reduced pressures on resources stemming from the sluggish
performance of the economy and relatively subdued expectations of
inflation. (42)
According to this quote, not only was the actual weakening in
activity an important consideration in the policy decision, but
uncertainty over the extent of the downturn and the possibility that it
might turn into an outright recession seemed to spur the FOMC to make a
large move. The "help guard against cumulative weakness" and
"front-loaded" language could be read as the FOMC taking out
some additional insurance against the possibility that the weakening
activity would snowball into a recession. This could have reflected a
concern about the kinds of nonlinear output dynamics or perhaps
non-quadratic losses associated with a large recession that we discussed
in section II.A.
The FOMC steadily brought the funds rate down further over the
course of 2001, against a backdrop of weakening activity, and the
economy seemed to be skirting a recession. Then the tragic events of
September 11 occurred. There was, of course, huge uncertainty over how
international developments, logistics disruptions, and the sentiment of
households, businesses, and financial markets would affect spending and
production. By November the Board staff was forecasting a modest
recession: real GDP was projected to decline at a 1 1/2 percent annual
rate in the second half of 2001 and to rise at just a 1 1/4 percent
rate in the first half of 2002. By the end of 2002 the unemployment rate was
projected to rise to 6.1 percent and core PCE inflation was projected to
be 1 1/2 percent. These forecasts were predicated on the funds rate
remaining flat at 2 1/4 percent.
However, in the aftermath of the terrorist attacks the FOMC was
worried about something more serious than the shallow recession forecast
by the staff. Furthermore, a new risk came to light, namely the chance
that disinflationary pressures might emerge that, once established,
would be more difficult to fight with the funds rate already low. In
response, the FOMC again acted aggressively, cutting the funds rate 50
basis points in a conference call on September 17 and again at their
regular meetings in October and November. The November 2001 FOMC meeting
minutes note:
... members stressed the absence of evidence that the economy was
beginning to stabilize and some commented that indications of
economic weakness had in fact intensified. Moreover, it was likely
in the view of these members that core inflation, which was already
modest, would decelerate further. In these circumstances
insufficient monetary policy stimulus would risk a more extended
contraction of the economy and possibly even downward pressures on
prices that could be difficult to counter with the current federal
funds rate already quite low. Should the economy display
unanticipated strength in the near term, the emerging need for a
tightening action would be a highly welcome development that could
be readily accommodated in a timely manner to forestall any
potential pickup in inflation. (43)
This passage suggests that the large rate cuts were not only aimed
at preventing the economy from falling into a serious recession with
deflationary consequences, but that the FOMC was also concerned that
such an outcome "could be difficult to counter with the current
funds rate already quite low." Accordingly, the aggressive policy
moves could in part also have reflected insurance against the future
possibility of being constrained by the ZLB, precisely the policy
scenario and optimal policy prescription described in section I.
II.D. Quantifying References to Uncertainty and Insurance in FOMC
Minutes
We have shown that Federal Reserve communications contain many
references suggesting that uncertainty or insurance motives influenced
the stance of policy. But the question remains: Has risk management had
a material impact on policy? We now show how we quantified these
references into variables that can be used to assess the importance of
risk management for actual policy decisions.
In the spirit of the narrative approach pioneered by Christina
Romer and David Romer (1989), we built judgmental indicators based on
our reading of the FOMC minutes covering the period from the beginning
of Greenspan's chairmanship in 1987 to 2008. We concentrated on the
paragraphs that describe the FOMC's rationale for its policy
decision, reading these passages for references to when uncertainty or
insurance considerations appeared closely linked to the FOMC's
decision. Other portions of the minutes were excluded from our analysis
in order to better isolate arguments that directly influenced the policy
decision from more general discussions of unusual data or forecast
uncertainty.
We constructed two separate judgmental variables, one for
uncertainty (hUnc) and one for insurance (hIns), where "h"
stands for "human-coded." The uncertainty variable was coded
to plus (minus) one if we judged that the FOMC appealed to uncertainty
to position the funds rate higher (lower) than it otherwise would be
based on the staff forecast alone. If uncertainty did not appear to be
an important factor influencing the policy decision, we coded the
indicator as zero. We coded the insurance variable similarly by
identifying when the minutes cited insurance against some adverse
outcome as an important consideration in the stance of policy. (44)
As an example of our coding, consider the June 2000 meeting
discussed above when the FOMC decided to wait to assess future
developments before taking further policy action. The commentary below
highlights the role of uncertainty in this decision (our italics):
The increasing though still tentative indications of some slowing
in aggregate demand, together with the likelihood that the earlier
policy tightening actions had not yet exerted their full retarding
effects on spending, were key factors in this decision. The
uncertainties surrounding the outlook for the economy, notably the
extent and duration of the recent moderation in spending and the
effects of the appreciable tightening over the past year, including
the 1/2 percentage point increase in the intended federal funds
rate at the May meeting, reinforced the argument for leaving the
stance of policy unchanged at this meeting and weighting incoming
data carefully. (45)
We coded this meeting as a minus one for hUnc--rates were lower
because uncertainty over the economic outlook and the effects of past
policy moves appear to have been important factors in the FOMC's
decision not to raise rates. Similarly, the January and November 2001
quotes cited above led us to code hIns as a minus one for those
meetings, since, as we noted in the narrative, the FOMC appeared to be
making aggressive rate moves in part to insure against downside risks to
the baseline scenario.
We did not code all mentions of uncertainty or insurance as a plus
or minus one. For example, the March 1998 minutes referred to
uncertainties over the economic outlook and said that the FOMC could
wait for further developments before tightening to counter potential
inflation developments. However, at that time the FOMC was not obviously
in the midst of a tightening cycle; the baseline forecast seemed
consistent with the funds rate setting at the time; and the commentary
over the need to tighten was in reference to an indefinite point in the
future. So, in our judgment, uncertainty did not appear to be a very
important factor holding back a rate increase at that meeting, and we
coded it as a zero. (46)
Of course, this coding of the minutes is inherently subjective, and
there is no definitive way to judge the accuracy of the decisions we
made. Consequently we also constructed objective measures of how often
references to uncertainty or insurance appeared in the policy paragraphs
of the minutes. In particular, we constructed variables which measure
the percentage of sentences containing words related to uncertainty or
insurance in conjunction with references to economic activity,
inflation, or both. (47) The measures for uncertainty and insurance are
denoted mUnc and mIns, where "m" indicates these variables are
"machine-coded." Figures 4 and 5 show plots of our
minutes-based uncertainty and insurance variables.
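As a minimal sketch of this sentence-count construction, consider the
following Python fragment. The word lists below are toy stand-ins for
the actual terms referenced in footnote 47, and the function name and
inputs are illustrative:

    import re

    # Toy word lists; the paper's actual lists (footnote 47) differ.
    UNCERTAINTY_TERMS = ("uncertain", "uncertainty", "unclear")
    TOPIC_TERMS = ("activity", "growth", "inflation", "prices")

    def m_share(policy_text):
        """Percentage of sentences mentioning an uncertainty term
        together with a reference to activity or inflation (an
        mUnc-style measure)."""
        sentences = re.split(r"(?<=[.!?])\s+", policy_text)
        hits = sum(
            1 for s in sentences
            if any(t in s.lower() for t in UNCERTAINTY_TERMS)
            and any(t in s.lower() for t in TOPIC_TERMS)
        )
        return 100.0 * hits / max(len(sentences), 1)

The insurance counterpart, mIns, would swap in insurance-related terms
for the uncertainty list.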
Non-zero values of the human-coded variables are indicated by dots
and the bars indicate the machine-coded sentence counts. The uncertainty
indicator hUnc "turns on" in 31 out of the 128 meetings
between 1993 and 2008. Indications that insurance was a factor in
shading policy are not as common, but still show up 14 times in hIns.
Most of the time--24 meetings for uncertainty and 11 for insurance--we judged
that rates were set lower than they otherwise would have been to account
for these factors.
The hUnc and hIns codings are not always reflected in the sentence
counts. There are also meetings where the sentence counts are positive
but we did not judge them to indicate that rates were set differently
than they normally would have been. For example, in March 2007 hUnc is
coded zero whereas mUnc finds uncertainty referenced in
nearly one-third of the sentences in the policy section of the minutes.
Inspection of the minutes indicates that the FOMC was uncertain over
both the degree to which the economy was weakening and whether their
expectation of a decline in inflation, which was running uncomfortably
high at the time, actually would materialize. In the end, they did not
adjust current policy in response to these conflicting uncertainties.
Hence we coded hUnc to zero in this case.
[FIGURE 4 OMITTED]
[FIGURE 5 OMITTED]
Note that we did not attempt to measure a variable for risk
management per se. The minutes often contain discussions of policies
aimed at addressing risks to attaining the FOMC's goals. However,
many times this commentary appears to surround policy adjustments aimed
instead at balancing (possibly conflicting) risks to the outlook for
output and inflation, not unlike the response to changes in economic
conditions prescribed by the canonical framework for studying optimal
policy under discretion. Such risk balancing was discussed in our
narrative of the 1997-98 period. (48)
III. Econometric Evidence of Risk Management
So far we have uncovered clear evidence that risk management
considerations have been a pervasive feature of Federal Reserve
communications. But it is not clear at this stage whether risk
management has had a material impact on the FOMC's policy
decisions. If it has, then calling for a risk management approach in the
current policy environment would be consistent with a well-established
approach to monetary policy. In this section we describe econometric
evidence suggesting that risk management has had a material impact on
the FOMC's funds rate choices in the pre-ZLB era.
We estimate monetary policy reaction functions of the kind studied
by Clarida, Galí, and Gertler (2000) and many others. These have the
funds rate set as a linear function of output gap and inflation
forecasts; there is no role for risk management unless risk feeds
directly into the point forecasts. To quantify the role of risk beyond
such a direct influence we add variables that proxy for risk to the
reaction function. (49)
III.A. Empirical Strategy
Let $R^*_t$ denote the notional target for the funds rate in period
$t$. We assume the FOMC sets this target according to

(13) $R^*_t = R^* + \beta(E_t[\pi_{t,k}] - \pi^*) + \gamma E_t[x_{t,q}] + \mu s_t$,

where $\pi_{t,k}$ denotes the average annualized inflation rate from
$t$ to $t + k$, $\pi^*$ is the FOMC's target for inflation, $x_{t,q}$
is the average output gap from $t$ to $t + q$, $s_t$ is a risk
management proxy, and $E_t$ denotes expectations conditional on
information available to the FOMC at date $t$. The coefficients
$\beta$, $\gamma$, and $\mu$ are fixed over time. $R^*$ is the desired
nominal rate when inflation is at target, the output gap is closed, and
risk does not influence policy other than through the forecast ($\mu =
0$). If the average output and inflation gaps are both zero and the
FOMC acts as if the natural rate is constant and out of its control,
then $R^* = r^* + \pi^*$, where $r^*$ is the real natural rate of
interest. (50)
We make two more assumptions to arrive at our estimation equation.
First, the FOMC has a preference for interest rate smoothing and so
does not choose to hit its notional target instantaneously; as a
practical matter it is necessary to include lags of the funds rate to
fit the data. Second, the FOMC does not have perfect control over
interest rates, which gives rise to an error term, $v_t$. These
assumptions lead to the following specification for the actual funds
rate, $R_t$:

(14) $R_t = (1 - A(1)) R^*_t + A(L) R_{t-1} + v_t$,

where $A(L) = \sum_{j=0}^{N-1} a_{j+1} L^j$ is a polynomial in the lag
operator $L$, with $N$ denoting the number of funds rate lags. The
error term $v_t$ is assumed to be mean zero and serially independent.
Combining equations 13 and 14 yields our estimation equation:

(15) $R_t = b_0 + b_1 E_t[\pi_{t,k}] + b_2 E_t[x_{t,q}] + A(L) R_{t-1} + b_3 s_t + v_t$,

where the $b_i$, $i = 0, 1, 2, 3$, are simple functions of $A(1)$,
$\beta$, $\gamma$, $\mu$, $r^*$, and $\pi^*$. (51)
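For the reader's convenience, the mapping is obtained by substituting
equation 13 into equation 14 and collecting terms:

$R_t = (1 - A(1))[R^* + \beta(E_t[\pi_{t,k}] - \pi^*) + \gamma E_t[x_{t,q}] + \mu s_t] + A(L) R_{t-1} + v_t,$

so that $b_0 = (1 - A(1))(R^* - \beta\pi^*)$, $b_1 = (1 - A(1))\beta$,
$b_2 = (1 - A(1))\gamma$, and $b_3 = (1 - A(1))\mu$; with $R^* = r^* +
\pi^*$, the intercept becomes $b_0 = (1 - A(1))(r^* + (1 - \beta)\pi^*)$.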
We use the publicly available Federal Reserve Board staff forecasts
of core CPI inflation (in percentage points) and the output gap
(percentage point deviations of real GDP from its potential) to measure
$E_t[\pi_{t,k}]$ and $E_t[x_{t,q}]$ with $k = q = 3$. (52) These
forecasts are available for every FOMC meeting. We estimate equation 15
both meeting-by-meeting and quarter-by-quarter. (53) When we estimate
it at the quarterly frequency we use staff forecasts corresponding to
FOMC meetings closest to the middle of each quarter. (54) We measure
$R_t$ at the meeting frequency using the funds rate target announced
(or estimated) at the end of the day of a meeting, and we measure it at
the quarterly frequency using the average effective funds rate over the
30 trading days following the meeting closest to the middle of the
quarter. Provided the error term $v_t$ is serially uncorrelated and
orthogonal to the forecasts and the risk proxies, we can obtain
consistent estimates of $\beta$, $\gamma$, and $\mu$ by estimating
equation 15 by ordinary least squares. We keep $N$ sufficiently large
to ensure that $v_t$ is serially uncorrelated.
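As a minimal sketch of this estimation step, the following Python
fragment runs the OLS regression in equation 15 and backs out $\beta$,
$\gamma$, and $\mu$ from the reduced-form coefficients. The column
names (R, pi_f, x_f, s) are illustrative, not the paper's:

    import statsmodels.api as sm

    def estimate_reaction_function(df, n_lags=2):
        """OLS estimate of equation 15. df is a pandas DataFrame, one
        row per meeting (or quarter), with columns R (funds rate),
        pi_f and x_f (staff forecasts), and s (a risk proxy)."""
        d = df.copy()
        lag_cols = [f"R_lag{j}" for j in range(1, n_lags + 1)]
        for j in range(1, n_lags + 1):
            d[f"R_lag{j}"] = d["R"].shift(j)      # the A(L) terms
        d = d.dropna()
        X = sm.add_constant(d[["pi_f", "x_f"] + lag_cols + ["s"]])
        res = sm.OLS(d["R"], X).fit()
        A1 = res.params[lag_cols].sum()           # A(1)
        return res, {
            "beta": res.params["pi_f"] / (1 - A1),   # b1/(1 - A(1))
            "gamma": res.params["x_f"] / (1 - A1),   # b2/(1 - A(1))
            "mu": res.params["s"] / (1 - A1),        # b3/(1 - A(1))
        }

In practice one would increase n_lags until the residuals show no
serial correlation, mirroring the choice of $N$ described above.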
To quantify the role of risk we study the magnitude and statistical
significance of estimates of $\mu$ in equation 13. An insignificant
estimate of $\mu$ cannot be interpreted as evidence against a role for
risk management, because risk might operate by influencing point
forecasts, as in our forward-looking model. We also could find no
effect because risk might tilt policy in opposite directions depending
on the circumstances.
With the exception of our human-coded FOMC-based variables, none of our
risk proxies accounts for the fact that perceived risks to the forecast
might have different effects on policy depending on the nature of the
risk and the state of the economy. For example, an increase in
uncertainty about the inflation outlook should lead to tight policy if
this increase occurs during a period of heightened concerns about rising
inflation, but to looser policy if concerns are over unwanted
disinflation. As such, estimates of the effect of any given proxy will
at best reflect the kinds of risk, and the circumstances in which they
arose, that predominated over the sample period.
Finally, we do not allow for the coefficients on the forecasts to
depend on our risk proxies as is suggested by the work of Brainard
(1967) and others. However, we show in the online appendix that if these
forecast coefficients are linear functions of risk, then the null
hypothesis that a given proxy's coefficient is zero in our now
mis-specified model encompasses the null that the forecast coefficients
are invariant to risk as measured by that proxy.
III.B. Proxies for Risk Management
In addition to our human- and machine-coded FOMC-based variables we
consider several proxies for risk management that do not rely on
interpreting the FOMC minutes. Two of these variables are constructed
using the Federal Reserve Board staff's forecast, which is seen by
the FOMC at its regular meetings, and we study them using our
meeting-frequency reaction functions. The remaining variables are measured at
the quarterly frequency and can be divided into two groups based on
whether they primarily reflect variance or skewness in the forecast.
The two additional FOMC-based proxies involve revisions to the
Federal Reserve Board staff's forecasts for the output gap (frGap)
and core CPI inflation (frInf). The revisions correspond to changes
between meeting m and m - 1 in the forecasts over the same one-year
period that starts in the quarter of meeting m - 1. A big change in the
forecast is usually triggered by unusual events that may be difficult to
interpret and hence generate uncertainty about the forecast. If the FOMC
were only worried about these events in making its point forecast, then
the post-shock forecasts of the output gap or inflation would be
sufficient to describe the policy setting. However, if uncertainty has a
separate effect on policy the forecast revisions might enter
significantly.
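Because the alignment of forecast windows is the subtle step here, the
following Python sketch spells it out under assumptions of ours: a
hypothetical DataFrame gb indexed by (meeting date, target quarter)
with a gap column, and revisions computed as the change in the average
forecast over the window (the exact aggregation is not stated in the
text):

    import pandas as pd

    def forecast_revision(gb, meetings, col="gap"):
        """frGap-style revision: change between consecutive meetings
        in the forecast averaged over the same one-year window, which
        starts in the quarter of the earlier meeting."""
        revs = {}
        for m_prev, m in zip(meetings[:-1], meetings[1:]):
            window = pd.period_range(
                pd.Period(m_prev, freq="Q"), periods=4, freq="Q")
            new = gb.loc[m, col].reindex(window).mean()
            old = gb.loc[m_prev, col].reindex(window).mean()
            revs[m] = new - old
        return pd.Series(revs)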
Three of the quarterly proxies exploit financial market data: VXO,
SPD, and JLN. VXO is the Chicago Board Options Exchange's measure
of market participants' expectations of volatility in the S&P
100 stock index over the next 30 days. Since the S&P 100 reflects
earnings expectations, VXO should, at least in part, measure market
participants' uncertainty about the economic outlook. (55) SPD is
the difference between the quarterly average of daily yields on BAA
corporate bonds and 10-year Treasury bonds. Gilchrist and Zakrajsek
(2012) demonstrate that this variable measures private-sector default
risk plus other factors that may indicate downside risks to economic
growth. (56) JLN is Kyle Jurado, Sydney Ludvigson, and Serena Ng's
(2015) measure of the common variation in the one-year-ahead
unforecastable components of a large number of activity, inflation, and
financial indicators. Given its basis in measuring uncertainty about
macroeconomic forecasts, JLN is a natural risk proxy to consider. But,
unlike VXO and SPD, it does not measure real-time uncertainty, and
similar to these two measures it confounds macroeconomic and financial
uncertainty.
The remaining proxies are based on the Survey of Professional
Forecasters (SPF), which asks forecasters for their point forecasts
of GDP growth and GDP deflator inflation and their probability
distributions for these forecasts. We use both kinds of information to
construct measures of variance and skewness in the economic outlook one
year ahead. (57) Variance is measured using the median among forecasters
of the standard deviations calculated from each individual's
probability distribution (vGDP and vInf) and the interquartile range of
point forecasts across individuals (DvGDP and DvInf). (58) Skewness is
measured using the median of the individual forecasters' mean minus
mode (sGDP and sInf) and the difference between the mean and the mode of
the cross-forecaster distribution of point forecasts (DsGDP and DsInf).
Consequently, a positive (negative) value for one of these proxies
represents upside (downside) risk to the modal forecast.
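A minimal sketch of these calculations follows; the input layout is an
assumption of ours (bin midpoints plus a matrix of per-forecaster bin
probabilities), and the mode conventions, in particular approximating
the cross-sectional mode by a histogram peak, are illustrative rather
than the paper's:

    import numpy as np

    def hist_mean_std(midpoints, probs):
        """Mean and standard deviation implied by one forecaster's
        probability histogram (bins summarized by their midpoints)."""
        m = np.average(midpoints, weights=probs)
        v = np.average((np.asarray(midpoints) - m) ** 2, weights=probs)
        return m, np.sqrt(v)

    def spf_proxies(midpoints, prob_matrix, point_forecasts):
        midpoints = np.asarray(midpoints)
        stats = [hist_mean_std(midpoints, p) for p in prob_matrix]
        means = np.array([s[0] for s in stats])
        stds = np.array([s[1] for s in stats])
        # mode of each histogram: bin with the most probability mass
        modes = np.array([midpoints[np.argmax(p)] for p in prob_matrix])
        v = np.median(stds)                      # vGDP / vInf
        q75, q25 = np.percentile(point_forecasts, [75, 25])
        Dv = q75 - q25                           # DvGDP / DvInf
        s = np.median(means - modes)             # sGDP / sInf
        counts, edges = np.histogram(point_forecasts, bins=10)
        k = np.argmax(counts)
        mode_cs = 0.5 * (edges[k] + edges[k + 1])
        Ds = np.mean(point_forecasts) - mode_cs  # DsGDP / DsInf
        return v, Dv, s, Ds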
The principal advantage of these proxies is that they are real-time
measures of perceived risks in the forecast. The main drawback of the
measures based on survey respondents' forecast distributions is
that the bins they are asked to put probability mass on are relatively
wide, so statistics based on them may contain substantial measurement
error. The proxies based on the cross-section of forecasts are properly
thought of as measuring forecaster disagreement rather than variance or
skewness in the outlook per se. However, there is a large literature
that uses forecaster disagreement as a proxy for perceived risk. (59)
All estimates are based on samples that end in 2008 to avoid the
ZLB period but begin at different dates to address idiosyncratic
features of the data. The benchmark start date is determined by the
onset of Alan Greenspan's tenure as chairman of the FOMC in 1987,
but later dates are used in several cases. The sample for the FOMC-based
indicators starts in 1993 because inter-meeting changes in the target
funds rate were much more common prior to that year than afterwards; the
FOMC often voted on a bias to future policy moves and the chairman
subsequently acted at his discretion. We cannot use inter-meeting moves
because we lack contemporaneous staff forecasts. Furthermore, the change
in the frequency of inter-meeting moves raises the specter of instability
in the reaction function. (60) The pre-1993 inter-meeting moves are less
of a concern for our quarterly models, because in these specifications
the funds rate is not as closely tied to any particular meeting. So we
chose to include these data points to maximize the number of
observations, except when considering the proxies based on
individuals' forecast distributions from the SPF. In the latter
cases, the first observation is 1992Q1, to coincide with a discrete
change in SPF methodology. (61)
Tables 4 and 5 display summary statistics for Federal Reserve Board
staff forecasts of inflation and the output gap and the various proxies
for risk management at the meeting and quarterly frequencies. What is
most worth noting in these tables is that no risk proxy displays a
particularly large positive or negative correlation with either the
output gap or inflation forecast. This suggests that our proxies contain
information that is not already incorporated into these forecasts.
Nevertheless, some variables have moderately large correlations in
absolute value, so the forecasts do somewhat reflect underlying risks to
the outlook. Interestingly, skewness in forecasters' GDP forecasts
(sGDP) is negatively correlated with the outlook for activity.
Tables 6 and 7 display cross-correlations of the FOMC-based and
quarterly proxies, respectively. As suggested by figures 4 and 5, the
human- and machine-coded FOMC variables for uncertainty and insurance
are essentially uncorrelated. These variables also appear unrelated to
the forecast revision variables. However, several correlations among the
quarterly proxies are worth noting. Forecaster variance and disagreement
about the GDP growth outlook (vGDP and DvGDP) are both positively
correlated with VXO and SPD, suggesting that the financial variables do
reflect some uncertainty about the growth outlook. Also, the relatively
high correlation of SPD with sGDP suggests that the former to some
extent captures skewness in the growth outlook. The correlations of
vGDP with vInf and of DvGDP with DvInf are both fairly large, suggesting that
uncertainty about inflation and uncertainty about GDP often move
together. The correlations of the corresponding forecaster uncertainty
and disagreement variables (vGDP with DvGDP and vInf with DvInf) are
somewhat large too. Evidently, the amount of disagreement among
forecasters is similar to the median amount of uncertainty they see.
Finally, Jurado, Ludvigson, and Ng's (2015) measure of macroeconomic
uncertainty, JLN, is highly correlated with VXO and SPD and to some
extent with DvGDP, but much less so with any of the other risk proxies.