Central Bank Transparency and the Signal Value of Prices
Stephen Morris and Hyun Song Shin
A CENTRAL BANK must be accountable for its actions, and its
decision-making procedures should meet the highest standards of probity and technical competence. In light of the considerable discretion
enjoyed by independent central banks, the standards of accountability
that they must meet are perhaps even higher than for most other public
institutions. Transparency allows for democratic scrutiny of the central
bank and hence is an important precondition for central bank
accountability. Few would question the proposition that central banks
must be transparent in this broad sense.
A narrower debate over central bank transparency considers whether
a central bank should publish its forecasts and whether it should have a
publicly announced, numerical target for inflation. This narrower notion
of transparency also impinges on issues of accountability and
legitimacy, but the main focus in this debate has been on the
effectiveness of monetary policy.
Proponents of transparency in this narrower sense point to the
importance of the management of expectations in conducting monetary
policy. A central bank generally controls directly only the overnight
interest rate, "an interest rate that is relevant to virtually no
economically interesting transactions," as Alan Blinder has put it.
(1) The links from this direct lever of monetary policy to the prices
that matter, such as long-term interest rates, depend almost entirely
upon market expectations, and monetary policy is effective only to the
extent that the central bank can shape the beliefs of market
participants. Long-term interest rates are influenced in large part by
the market's expectation of the future course of short-term rates.
By charting a path for future short-term rates and communicating this
path clearly to the market, the central bank can influence market
expectations, thereby affecting mortgage rates, corporate lending rates,
and other prices that have a direct impact on the economy. Having thus
gained a lever of control over long-term rates, monetary policy achieves
its effects through the IS curve, through quantities such as consumption
and investment.
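A minimal sketch of this expectations channel (our numbers, purely illustrative): under the pure expectations hypothesis, an n-period rate is approximately the average of the expected one-period rates over the life of the instrument.

```python
# Hypothetical expected path of the overnight (one-period) rate.
expected_short_rates = [0.02, 0.025, 0.03, 0.035, 0.04]

# Pure expectations hypothesis: the n-period rate is (approximately) the
# average of the expected one-period rates, so a credible announced path
# for short rates pins down the long rate.
long_rate = sum(expected_short_rates) / len(expected_short_rates)
print(f"implied 5-period rate: {long_rate:.2%}")
```

If the central bank can shift the expected path, the whole right-hand side moves, which is the sense in which communication itself is the policy lever.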
Indeed, it would not be an exaggeration to say that many leading
monetary economists today see the management of expectations as the task
of monetary policy. For Lars Svensson, "monetary policy is to a
large extent the management of expectations"; Michael Woodford puts
it similarly: "not only do expectations about policy matter, but,
at least under current conditions, very little else matters." (2)
The reasons for this preeminent role of expectations in monetary
policy are explained particularly well for a general audience in a
policy speech by Ben Bernanke, titled "The Logic of Monetary
Policy." (3) Bernanke considers whether monetary policy's
steering of the economy is in some way analogous to driving a car.
Monetary policy actions are akin to stepping on the accelerator or the
brake, to stimulate or cool the economy as appropriate given its current
state. Bernanke notes that, although this analogy is superficially
attractive, it breaks down when one notes the importance of market
expectations of the central bank's future actions. If the economy
is like a car, then it is a car whose speed at a particular moment
depends not on the pressure on the accelerator at that moment, but
rather on the expected average pressure on the accelerator over the rest
of the trip. Woodford employs a similar transport metaphor:
"central banking is not like steering an oil tanker, or even
guiding a spacecraft, which follows a trajectory that depends on
constantly changing factors, but that does not depend on the
vehicle's own expectations about where it is heading."
Instead, optimal policy is history dependent, in that the central bank
commits itself to a rule that takes into account past conditions,
including even some that no longer matter for an evaluation of what is
possible to achieve from now on. This is so because it was the
anticipation of such a rule that determined the market's
expectations today. (4)
The pivotal role of market expectations puts the central bank's
communication policy at center stage, and this view has been adopted
to some extent by all central banks, but embraced most enthusiastically
by those that have adopted explicit inflation-targeting monetary
regimes. (5) However, one issue seems to have received less attention
than it deserves, namely, the consequences of central bank transparency
for the informativeness of prices. In order for the central bank to know
how it should manage expectations, it must obtain its cues from signals
emanating from the economy, which themselves are the product of market
expectations. On the face of it, there is an apparent tension between
managing market expectations and learning from market expectations: the
central bank cannot manipulate prices and, at the same time, hope that
prices yield informative signals. A recent speech by Federal Reserve
governor Don Kohn identifies limits to transparency for these reasons.
(6)
This tension between managing expectations and learning from them
reflects the dual role of a central bank in the conduct of monetary
policy. In addition to being an active shaper of events, the central
bank must act as a vigilant observer of events, in order to obtain its
cues for future actions. The roles are complementary, since only by
being a vigilant observer of events can it be effective as an active
shaper of outcomes. On the surface at least, there is a worry that an
emphasis on the latter role will detract from the former. The central
bank holds a mirror up to the economy for cues for its future actions,
but the more effective it has been in manipulating the beliefs of the
market, the more the central bank will see merely its own reflection.
The dilemma between managing market prices and learning from market
prices would disappear if the central bank were omniscient, not having
to rely on the information flowing from market prices. Then its only
task would be to convey its knowledge to the rest of the economy,
thereby aligning market expectations to its own (correct) view. Even if
the central bank is far from omniscient, one could argue that it is so
much better informed than any other individual agent in the economy that
it should convey what it knows forcefully, so as to educate the myriad
other actors in the economy. In this view the tension between managing
market expectations and learning from market expectations would be
resolved in favor of the former.
This way of resolving the tension is implicit in the following
argument in another speech by Bernanke, titled "Central Bank Talk
and Monetary Policy":

   ... when the monetary policy committee regularly provides information
   about its objectives, economic outlook, and policy plans, two benefits
   result. First, with more complete information available, markets will
   price financial assets more efficiently. Second, the policymakers will
   usually find that they have achieved a closer alignment between market
   participants' expectations about the course of future short-term rates
   and their own views. (7)

Here Bernanke makes two claims:
--When the central bank conveys its own views clearly, market
prices will be more informationally efficient.
--When the central bank conveys its own views clearly, the
market's expectations will be closer to the central bank's own
expectations.
We will argue that there are strong reasons for believing that the
second assertion holds true, but that the first assertion is more open
to question. In particular, the stronger are the reasons for believing
the second assertion, the more doubtful is the first. In short, the
first assertion may be false because the second assertion is true.
Informational Efficiency
Informational efficiency can be expected to have large favorable
welfare consequences via two broad channels: the allocational role of
financial market prices in guiding investment decisions, and the
information revealed by market outcomes as a guide to the central
bank's optimal control problem. We take each of these in turn.
Financial market prices have a large impact on the level and type
of investments undertaken. For central bankers, the allocational role of
the yield curve in determining the duration of investment projects is of
particular importance. Irving Fisher in his Theory of Interest gives the
example of three possible uses for a plot of land: for forestry,
farming, or strip mining. (8) The interest rate used to discount future
cash flows largely determines the ranking of the three projects.
Long-duration projects such as forestry, where the bulk of the payoffs
arrive in the distant future, are most attractive in an environment of
low interest rates. When interest rates are high, short-duration
projects like strip mining dominate. Since investment decisions are
often difficult to reverse, distortions to investment incentives can
have a lingering effect long after the initial misallocations.
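Fisher's ranking can be made concrete in a few lines. The cash-flow profiles below are our own illustrative numbers, not Fisher's; only the front-loaded versus back-loaded shape matters.

```python
def npv(cash_flows, r):
    """Net present value of cash flows arriving at t = 1, 2, ..."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical payoff profiles for Fisher's three uses of the land:
# strip mining pays early, forestry pays late.
projects = {
    "strip mining": [60, 40, 10, 0, 0, 0],    # front-loaded payoffs
    "farming":      [20, 20, 20, 20, 20, 20], # steady payoffs
    "forestry":     [0, 0, 0, 0, 0, 150],     # back-loaded payoffs
}

for r in (0.02, 0.15):
    best = max(projects, key=lambda p: npv(projects[p], r))
    print(f"r = {r:.0%}: best use is {best}")
```

At a 2 percent discount rate the back-loaded forestry project dominates; at 15 percent the ranking flips to strip mining, which is why a mispriced yield curve distorts the duration of the capital stock.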
The current debate in the United States on the booming residential
housing market and the appearance of "exotic" mortgage
products that backload repayments hinges on the correct pricing of
long-term cash flows. If long-term interest rates are low, wage income
in the distant future is given a large weight, and even exotic mortgage
products seem viable when backed by the large present values of lifetime
wage income. For central bankers who must also keep a vigilant eye on
overall financial stability, the allocational role of financial market
prices thus takes on great significance.
This allocational role is not limited to the yield curve as
revealed in the prices of fixed-income securities. Equities that promise
high payoffs in the distant future, as do, for example, many technology
stocks, are a prime example of investments in long-duration projects. In
the same way that high prices of long-duration fixed-income assets push
down long-term interest rates, high stock prices lower the implicit
discount rate on dividends paid by stocks. Christopher Polk and Paolo
Sapienza, as well as Qi Chen, Itay Goldstein, and Wei Jiang, document
evidence that investment is sensitive to the information conveyed by
stock prices. (9) Anecdotal stories of formerly staid power companies
venturing into telecommunications and other more fashionable business
areas and then coming up short of expectations have been a constant
theme since the bursting of the technology bubble a few years ago.
Central bankers have a large impact on financial markets. Indeed,
it could be argued that the central bank's impact can sometimes be
too large. By the nature of the problem, it is difficult to gauge
whether the reactions in the financial market are excessive or justified
by the fundamentals. However, the paper by George Perry in this volume
provides some rare, tantalizing evidence that market prices may be
distorted by anticipation of central bank actions. The evidence comes
from the contrasting market reactions to two sets of official data on
the labor market: the Current Population Survey of households and the
Current Employment Statistics survey of nonfarm business payrolls. Perry
shows that the two measures are roughly comparable in terms of their
ability to track broad labor market conditions, and a simple average of
the two does well in combining the informational content of both series.
However, the financial markets certainly do not accord equal weight
to the two series. The nonfarm payroll numbers are given far greater
weight and are much anticipated by market participants. Michael Fleming and Eli Remolona document the sometimes dramatic reactions in the bond
market to the nonfarm payroll numbers. (10) The fact that the Federal
Reserve is known to accord more weight to the nonfarm payroll survey
undoubtedly plays on the minds of market participants. Given the
importance of others' reactions to news, the fact that other market
participants take note of the announcement is reason enough for any one
market participant to take note. Thus the contrasting reactions to the
two labor market series may be attributable largely to the background
presence of the central bank. Keynes's famous "beauty
contest" analogy comes to mind, and we will return to this below.
Perry's findings echo a similar phenomenon from an earlier
era, namely, the exaggerated reactions to announcements of money stock
aggregates during the period after 1979, when the Federal Reserve began
to emphasize growth of the money stock as an indicator of the monetary
stance. (11) Although the Federal Reserve had published weekly estimates
of monetary aggregates for some time (and continues to do so today), the
announcements in the early 1980s became particularly significant with
the added importance that the Federal Reserve placed on these aggregates
at the time. These money stock announcements became one of the focal
events in financial markets, if for no other reason than that
significant movements in interest rates were associated with sizable
unanticipated changes in the money stock. The market's reactions to
such announcements were noticeably larger during the period following
the shift in the monetary regime in 1979 than in that preceding it.
Roley shows that, "over the 1 1/2-hour intervals spanning the
weekly announcements the variance of the change in the three-month
Treasury bill yield in the three years following October 1979 is more
than thirty times larger than that in the previous two-year
period." (12) In particular, the variance of the change in the
three-month Treasury bill rate from 3:30 p.m. to 5 p.m. on the
announcement day (the announcement always came at 4:10 p.m. on Friday)
was 0.0016 for the period from 1977 to September 1979, but then jumped
to 0.0536 between October 1979 and October 1982.
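As an arithmetic check on Roley's figures:

```python
# Variance of the 3:30-5:00 p.m. change in the three-month Treasury bill
# rate on announcement days, from Roley's reported figures.
var_before = 0.0016  # 1977 to September 1979
var_after = 0.0536   # October 1979 to October 1982

ratio = var_after / var_before
print(f"variance ratio: {ratio:.1f}")  # 33.5, i.e. "more than thirty times"
```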
When the market understands that the central bank itself is
watching the money stock for its policy stance, the strategic
interactions among market participants will take center stage. The
actual magnitudes will matter less than the fact that the announcements
make the numbers common knowledge. The forces at work are similar to the
forces that move markets after breaking news stories. The news itself
may not be a surprise to some market participants, but the fact that it
has become commonly known is news. It is this news that serves as the
lightning rod that moves markets.
We now turn to the second channel through which informational
efficiency may have an impact on economic welfare. In order for the
central bank to steer the economy, it must have good information on the
current state of the economy; in particular, it must know how close the
economy is to full capacity. If the signals reaching the central bank
are not informative, the control problem will be made more difficult.
The flattening of the Phillips curve in many countries in the 1990s
is perhaps one indication that this signal value of aggregate prices has
deteriorated. Flint Brayton, John Roberts, and John Williams note how
the main feature of the Phillips curve--that inflation rises when labor
markets tighten--was turned on its head during the economic expansion in
the 1990s, when the unemployment rate fell below its long-run average of
around 6 percent and then fell below 5 percent, even as inflation fell.
(13) (Figure 1 depicts the unemployment-inflation relation from 1967 to
2002 in both the United States and the United Kingdom.) Replacing
unemployment with a measure of capacity utilization does little better
to rescue the Phillips curve. Although a flattening of the Phillips
curve could be interpreted as a shift in the natural rate of
unemployment itself, the authors conclude that the explanation that best
fits the evidence is that firms' price-cost margins, rather than
prices themselves, have taken the brunt of the adjustment. To the extent
that costs are part of the fundamentals, and prices are expected to
signal cost conditions, the fact that price-cost margins bear the brunt
of the adjustment indicates that the signal value of aggregate prices
has changed in recent years for the worse. (Here the informational
efficiency of prices pertains to goods prices rather than asset prices.)
A telltale sign of the deterioration of the information value of
signals would be large revisions to past data. The larger are the
revisions to official data, the larger was (in retrospect) the
uncertainty about economic conditions at the time.
The Bank of England's experience with data revisions in 2005
is revealing in terms of the difficulties of trying to steer the economy
without good information about where the economy is at the moment.
Figure 2 shows two fan charts for GDP growth as published in two
consecutive issues of the Bank's Inflation Report. (14) It is
immediately evident that GDP growth in early 2005 was revised sharply
downward, so that the realized outcome in the first quarter of 2005 fell
in the outer tail of the projected distribution of outcomes as given by
the top panel. This led to some scrutiny and comment from the press.
(15)
[FIGURE 2 OMITTED]
Another evident difference is in the shape of the fan chart for the
August 2005 report: here the short-range forecasts are given much larger
dispersion--the shape is more like a hammer than a fan. The change was
introduced to emphasize the uncertainties surrounding current economic
conditions. The larger range of outcomes permissible in the short run
anticipates possible data revisions.
The August 2005 report describes the divergence between official
statistical estimates of GDP and those obtained through business
surveys, especially for the services sector. If the survey respondents
put excessive weight on the prevailing conventional wisdom about the
level of economic activity, it would make the surveys far less
informative about current conditions than otherwise. Christopher
Sims' notion that economic agents exercise "rational
inattention," economizing on their information gathering, would
exacerbate this deterioration of information value. (16)
Greater uncertainty about current conditions would be a telltale
sign of the deterioration of signal values, and the new convention of
drawing a fan chart with a "fatter" base could be seen as an
indication of greater awareness of such uncertainty. Strictly speaking,
the base of the fan ought to start in the distant past--after all, there
is great uncertainty even about the past. The broadening of the base at
the current period is a nod to the convention of drawing the fan chart
as if current conditions were known. The more the information value of
central bank signals deteriorates, the wider will be the base of the
fan. Thus one of the key questions we will address is how the precision
of the central bank's information changes as it shifts from one
disclosure regime to another.
Much evidence has accumulated recently that the publication by
central banks of numerical targets for inflation has been associated
with a more effective anchoring of inflation expectations. (17) By
squeezing out the corrosive and insidious effects of inflation
expectations from the economy, the anchoring of inflation expectations
brings large welfare gains. But, we would argue, the flipside of
"well-anchored" is "uninformative": price signals
that are well anchored are also price signals that have little signal
value. They reveal very little to observers looking to them for signs of
underlying shifts or trends in the economy. When informational
efficiency emerges as a concern, well-anchored expectations cease to be
unambiguously desirable.
Gregory Mankiw, Ricardo Reis, and Justin Wolfers examine the
dispersion of inflation expectations in survey data and find that
inflation expectations have become more concentrated around the mean.
(18) Andrew Levin, Fabio Natalucci, and Jeremy Piger investigate how
well the mean inflation expectations have been anchored, also from
survey data. (19) They examine data on inflation forecasts from
Consensus Forecasts, Inc., which polls market forecasters twice a year on
their forecasts of inflation one to ten years ahead. Levin, Natalucci,
and Piger conclude that long-term inflation expectations (looking six to
ten years into the future) for a group of inflation-targeting countries
(Australia, Canada, New Zealand, Sweden, and the United Kingdom) have
become delinked from actual inflation outcomes, but that there is
evidence that they still respond to actual outcomes in the United States
and the euro area. They reach their conclusions by running regressions
of the following form:
(1) $\Delta\pi^{q}_{it} = \lambda_{1} + \beta \Delta\bar{\pi}_{it} + \epsilon_{it}$,
where $\pi^{q}_{it}$ is the expectation (formed at date $t$) of
inflation $q$ years ahead in country $i$, and $\bar{\pi}_{it}$ is a three-year
moving average of inflation in country $i$ ending at date $t$. The
coefficient $\beta$ is the focus of the investigation, since it measures
the extent to which expectations of future inflation are influenced by
the experience of recent inflation. The authors' results indicate
that $\beta$ is small and insignificant in the formal inflation-targeting
countries, but positive, large, and significant for the
non-inflation-targeting countries.
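The anchoring regression amounts to a bivariate OLS slope. A minimal sketch on simulated data (all numbers hypothetical, illustrating only the mechanics of the estimation, not the authors' results):

```python
import random

random.seed(42)

# Simulated changes in the three-year moving average of inflation (x) and
# in long-horizon inflation expectations (y), with a small true slope, as
# one would expect in a well-anchored country. All numbers hypothetical.
true_beta = 0.05
n = 500
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [0.1 + true_beta * xi + random.gauss(0.0, 0.2) for xi in x]

# OLS slope: cov(x, y) / var(x)
mx, my = sum(x) / n, sum(y) / n
beta_hat = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
print(f"estimated beta: {beta_hat:.3f}")
```

An estimated slope near zero is read as "anchored": recent inflation experience leaves long-horizon expectations unmoved.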
A similar picture emerges from studies that examine the
expectations embedded in financial market prices. Refet Gurkaynak,
Levin, and Eric Swanson use daily data to examine the market's
compensation for expected future inflation as revealed in the difference
between forward rates on nominal government bonds and inflation-indexed
bonds. (20) The authors apply this measure of forward inflation
compensation to interest rates for U.S., Swedish, and U.K. government
bonds to extract estimates of long-run inflation expectations. (Sweden
and the United Kingdom are inflation-targeting countries, but the United
States is not.) For all three countries the authors find stable long-run
inflation expectations, but there are revealing differences. For the
United States long-term inflation expectations appear to be influenced
by recent experiences of inflation, whereas no such dependence is
observed for the United Kingdom or Sweden. These results echo those
obtained by Levin, Natalucci, and Piger. They imply that such
"excessive" dependence, relative to the baseline model, in the
forward inflation premium in the United States occurs because the
Federal Reserve does not have a numerical objective for inflation to
help tie down long-term inflation expectations. In addition, Gurkaynak,
Levin, and Swanson show that long-term forward yield differences in the
United States respond excessively to economic news, including surprises
in the federal funds rate, a result that the authors attribute to shifts
in market participants' views of the Federal Reserve's
long-term inflation objectives. (21) To contrast their results for the
United States with those for the inflation-targeting countries, the
authors show that such excess sensitivity in long-term inflation
expectations disappears in the United Kingdom after May 1997, when the
Bank of England gained operational independence and monetary policy
moved to a formal inflation-targeting regime.
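The forward inflation compensation measure itself is simple arithmetic: the gap between a far-forward nominal rate and the matching forward rate from inflation-indexed bonds. The rates below are hypothetical.

```python
# Hypothetical one-year forward rates at a distant horizon, in percent:
nominal_forward = 5.1  # from the nominal government bond yield curve
real_forward = 2.4     # from the inflation-indexed bond yield curve

# The gap compensates holders of nominal bonds for expected inflation
# (plus any inflation risk premium) over that forward interval.
inflation_compensation = nominal_forward - real_forward
print(f"forward inflation compensation: {inflation_compensation:.1f}%")
```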
The nature of the problem makes it difficult to bring empirical
evidence to bear on whether the central bank's information has,
indeed, deteriorated. Thus, at best, our argument is conjectural and
speculative. However, we show below that the precision of the central
bank's information will depend, in general, on the disclosure
regime. The more the central bank discloses, the less precise the
signals it receives will be. The fan charts of central bank forecasts
become fatter, sometimes considerably so, when the bank chooses to
disclose more. We also return, in closing, to the Bank of England's
experience in 2005 for some clues as to what kind of evidence one might
look for to help answer the question of informational precision for the
central bank.
The theme explored in this paper is the tension between managing
expectations and the impaired signal value of both financial market
prices and goods prices when expectations are managed. We argue that the
quality of the central bank's information is endogenous, and that
a central bank that attempts to steer the market's beliefs more
vigorously will suffer a greater deterioration in the information value
of its signals. We will begin by outlining some technical considerations
in developing our argument. But first we revisit a much older debate in
economics between Hayek and his socialist contemporaries on the
informational role of prices and the role of the market mechanism in
aggregating the distributed information of economic agents.
Hayek Revisited
Friedrich von Hayek's 1945 essay "The Use of Knowledge in
Society" has remarkable resonance for today's debate on the
informational role of prices. (22) As is well known, Hayek was arguing
against his socialist contemporaries and other advocates of Soviet
central planning.
However, his comments are equally relevant for today's debate on
central bank transparency. He poses the problem in the following
terms:

   The peculiar character of the problem of a rational economic order
   is determined precisely by the fact that the knowledge of the
   circumstances of which we must make use never exists in concentrated
   or integrated form, but solely as the dispersed bits of incomplete
   and frequently contradictory knowledge which all the separate
   individuals possess. The economic problem of society is thus not
   merely a problem of how to allocate "given" resources--if
   "given" is taken to mean given to a single mind which
   deliberately solves the problem set by these "data." It is
   rather a problem of how to secure the best use of resources known to
   any of the members of society, for ends whose relative importance
   only these individuals know. Or, to put it briefly, it is a problem
   of the utilization of knowledge not given to anyone in its totality.
   (23)

Hayek was directing his argument against his
contemporaries who argued, from Paretian optimality principles, for the
superiority of a centrally planned economy. Chief among this group was
Oskar Lange, who developed his arguments in his paper "On the
Economic Theory of Socialism," published in two parts in the
fledgling Review of Economic Studies in 1936 and 1937. (24) Lange was an
economist in the Paretian tradition who, together with contemporaries
such as Abba Lerner and John Hicks, provided the formal apparatus for
the development of modern welfare economics. Lange presented one of the
first formal arguments for what economists now know as the "two
fundamental theorems" of welfare economics. But rather than seeing
these results as buttressing the case for the market system, Lange saw
them as compelling arguments in favor of central planning and socialism.
For Lange, following Pareto's lead, prices are merely rates of
exchange of one good for another, and it is immaterial whether they are
set by the central planner or determined in the market by supply and
demand. The central planner, however, has the advantage that he or she
can act "as if" the Walrasian auctioneer were setting prices,
thereby overcoming the distortions of the market economy resulting from
imperfect competition, transactions costs, and externalities and
achieving a superior allocation of resources.
It was into this debate that Hayek stepped. Prices, he argued,
are not merely the rates of exchange between goods. They have a second
role, that of conveying information on the fundamentals of the economy
and the shocks that constantly buffet it. In more modern parlance, price
systems are mappings from states of the world to observed market prices,
and, as such, they convey information on the shifting fundamentals of
the economy that is not available to any one agent or subset of agents.
Hayek's argument for the superiority of the market mechanism rests
on the premise that the information revealed in prices is likely to be
far more illuminating and timely than that which any central planner
could possibly hope to amass, let alone maintain and update in a timely
manner in line with shifting fundamentals and continuing shocks to the
economy.
Hayek's emphasis on the informational role of prices
anticipates the modern microeconomic literature on rational expectations
equilibria. (25) His argument is also relevant for the issue of central
bank transparency and monetary policy. If the central bank aims to
manipulate market expectations in its own image, it cannot at the same
time expect the market outcome to serve as the aggregator of the
"dispersed bits of incomplete and frequently contradictory
knowledge which all the separate individuals possess." The more
important the informational role of prices, the greater the tension
between managing market expectations and learning from market
expectations.
Hayek was not disputing that the central planner may be relatively
better informed than any other particular agent in the economy. Indeed,
the central planner could have an absolute advantage in this respect
over any particular agent. But that is not the point. The point is that
prices reveal the collective wisdom of all agents in the economy, by
aggregating the diverse information they possess individually. Thus
Christina Romer and David Romer's finding that central bank
forecasters are better informed than their private sector counterparts
is not sufficient to conclude that the central bank does not face the
dilemma posed above (although, as we will show later, any formal
calculus of the effects must consider the relative informational prowess
of the central bank over the individual agents). (26) The corner
shopkeeper serving a small clientele would be hard pressed to match the
insights of the central bank forecasting department. However, the
shopkeeper is best placed to observe the economic fundamentals ruling in
his or her small sliver of the real world. These small slivers, across
geographical regions and sectors of the economy, when pieced together in
mosaic fashion, may reveal a far clearer picture than any central
planner can hope to achieve.
To be sure, the modern central bank has an awesome array of
expertise and technical resources at its disposal. Sims describes
vividly how central bank forecasts are the culmination of a major
logistical effort, drawing together the results of formal modeling from
an array of models as well as the unquantifiable insights of experts on
individual sectors. (27) In this respect, central bank forecasting
departments bear some resemblance to the economic planning ministries
that Hayek had in his sights. Hayek argues, "We cannot expect that
this problem will be solved by first communicating all this knowledge to
a central board which, after integrating all knowledge, issues its
orders." (28) The central bank's resources and expertise, as
formidable as they are, may fail to match the collective wisdom of the
economically active population as a whole.
The Double-Edged Nature of Public Information
The fundamental debate over the relative superiority of the market
mechanism versus central planning that raged in Hayek's time
reminds us that the stakes in the economic debate were once much higher
than they are today. However, some of the lessons from the Hayek-Lange
debate are applicable even in the more modest arena of central bank
transparency and monetary policy. Rather than presenting a stark choice
between socialism and the market economy, the issues arise in the role
of public information in an economy with distributed knowledge--an
economy where each agent has a "window" on the world, each
with a slightly different perspective, and each with a possible relative
advantage in ascertaining the truth about some smaller sliver of the
real world.
In general, an individual facing a choice under uncertainty will
benefit from gaining greater access to information that reduces the
uncertainty, since better information permits actions that are better
suited to the circumstances. This conclusion is unaffected by whether
the incremental information is public (in the sense of being shared by
everyone) or private (available only to that individual).
But there are reasons why improved public information, although
privately valuable to the decisionmaker receiving it, might not be
valuable to society. If the private information of some market
participants is to be revealed to others, that information must be
reflected in market outcomes. But if public information leads market
participants to ignore or downplay their own private information in
their actions, the market will reveal less new information about market
conditions. Thus public signals generate an informational externality.
This effect is further exacerbated if decisionmakers'
interests are intertwined in such a way that a decisionmaker is an
interested party in the actions taken by others. Public information in
such contexts has attributes that make it a double-edged instrument. On
the one hand, public information conveys information concerning the
underlying fundamentals, as before. However, it also serves as a focal
point for the beliefs of the group as a whole and thus serves a powerful
coordination role. The "sunspots" literature has emphasized
how even signals that are "extrinsic," having no direct
bearing on the underlying fundamentals, may nevertheless serve to
coordinate the actions of individual agents because of their very public
nature. To the extent that public information allows coordination on
good outcomes, greater precision of public information may be
beneficial. But, equally, that coordination could be coordination on
less desirable outcomes. With sunspots, some indeterminacy always
remains.
In most cases of interest, public information about monetary policy
is not merely a sunspot, however; it conveys important information
concerning the economic fundamentals. The question then is how the
coordination effect of public information will affect the inferences
drawn by individual economic agents, and how their intertwined interests
will affect their individual incentives and the collective outcome that
results from their acting on these incentives.
When there is the potential for a strong consensus to prevail or a
conventional wisdom to take hold, individual incentives may become
distorted in such a way as to reduce the informational value of economic
outcomes. Central bank pronouncements may then serve as a lightning rod,
reinforcing the conventional wisdom or consensus and suppressing dissent
from those individuals whose own private signals tell them that the
conventional wisdom is flawed. When individual incentives are thus
eroded, the signals that would otherwise emerge from dissenting voices
to undermine the flawed consensus may be muted, serving to perpetuate
the flawed consensus.
In an earlier paper, (29) we explored the trade-offs that result in
such a setting by examining the outcome of a collective decision problem
reminiscent of Keynes's celebrated metaphor of the beauty contest.
Keynes drew a parallel between financial markets and a form of newspaper
competition of his day that invited readers to pick the six prettiest
faces from 100 photographs. (30) Readers won by picking the set of faces
that "most nearly corresponds to the average preferences of the
competitors as a whole." Under these rules, Keynes noted, a reader
would win by anticipating which faces would become the popular choice,
rather than by choosing those that the reader found most attractive. If
individual readers voted on the basis of their own sincerely held
judgments, the aggregate outcome would reveal much about the true
collective judgment of the contestants as a whole. However, the more the
contestants vote on the basis of "anticipating what average opinion
expects average opinion to be," the more the aggregate vote will
reflect the outcome of this second-guessing game among the contestants.
Now imagine how much worse the distortion would be if a widely
watched authority figure were to weigh in, offering his or her public
judgment on the faces in the photographs. The authority's judgment
may or may not be sound. What counts is that his or her pronouncements
reach a wide audience, and that everyone knows that his or her
pronouncements reach a wide audience. For this reason alone, the
authority's public judgment would serve as a powerful rallying
point around which average opinion could coalesce. Once the public
pronouncement has been issued, it would be futile for any reader to
expend effort in scouring the faces in the photographs, to form an
independent opinion of their fundamental attributes. Knowing that others
would likewise regard this as futile and would not gainsay the authority
figure, such a reader would have little incentive to expend effort in
reaching an independent judgment. The aggregate outcome would thus
reveal little about the genuine collective judgments of the individual
contestants, but instead would be dominated by the public pronouncement.
The signal value of the aggregate vote would thus be severely impaired.
Arguably, central bank transparency raises similar issues. When the
central bank issues regular pronouncements on the economic outlook and
publishes its forecasts of the output gap and the path of its policy
rate, such pronouncements provide a powerful rallying point around which
market expectations can coalesce. The more market participants consider
the beliefs of other market participants, the greater will be the impact
of the central bank's pronouncements in determining the aggregate
market outcome.
The dilemma for monetary policy transparency is that such
pronouncements by the central bank will, invariably, also offer genuine
and valuable insight on the current and future state of the economy.
But, however sound as a guide to the underlying fundamentals, the
central bank's pronouncements are even better as guides to what
average opinion will be. As a result, traders give the opinions of
central bankers undue weight and place less weight on their own
independent assessments of the economy. Public pronouncements can thus
crowd out private opinions, and the market may cease to function as a
way of aggregating and revealing diverse, private judgments about the
world in the way that Hayek envisaged.
Most interestingly (and most disturbingly), consider how the
problem is altered when the central bank becomes even better informed.
Suppose that the central bank, in light of the disappointing performance
of its forecasts, decides to beef up its forecasting effort by
recruiting yet more experts and pouring in yet more resources.
Paradoxically, the problem may become worse, not better, when the
central bank's competence in reading the economy improves. There
are two countervailing effects. On the one hand, the improved ability of
the central bank to read the economy will provide better-quality
information to other economic agents. However, the better the central
bank becomes at reading the economy, the more authority it gains in the
eyes of those other agents. As the central bank's pronouncements
become more authoritative, its ability to serve as the rallying point
for coordinating market expectations becomes stronger, suppressing
further the channel through which dissenting agents can express their
views. The net effect of improved central bank transparency is thus
ambiguous. This is one aspect of what might be called the "paradox
of transparency."
Elements of a Theory
The simplest way to motivate the problem is in terms of a decision
problem akin to Keynes's beauty contest, although it will be
important to show how real-world economic decisions can be understood
within a similar framework. We will return to the economic applications
after seeing how the key effects enter in a simpler, abstract decision.
Suppose that there are many small agents, each of whom faces the
problem of tailoring his or her action to the underlying state [theta],
but also tries to second-guess the decisions of the other agents.
Suppose that each agent i follows the decision rule
(2) [a.sub.i] = (1 - r)[E.sub.i]([theta]) + r[E.sub.i]([bar.a]),
where [bar.a] is the average action in the population, and
[E.sub.i](*) is the expectations operator for player i. Each agent puts
a positive weight on the expected fundamental state E([theta]) and on
the expected actions of others and chooses a weighted average of the
two. The parameter r, where 0 < r < 1, indicates the extent to
which agent i is motivated by the concern to second-guess the actions of
others. If r is large (close to 1), decisions are influenced
predominantly by anticipation of what others do, rather than by
one's own perception of the fundamentals.
The Public Information Benchmark
In the simplest case, if [theta] is commonly known, equilibrium
entails [a.sub.i] = [theta] for all i. When instead agents face
uncertainty concerning [theta] but have access to a source of
information shared by all, their actions approximate [theta] most
closely when uncertainty is small. Suppose [theta] is drawn from an
(improper) uniform prior over real numbers, but agents observe the
single public signal
(3) y = [theta] + [eta],
where [eta] is normally distributed and independent of [theta],
with mean zero and variance [[sigma].sup.2.sub.[eta]]. The signal y is
"public" in the sense that the actual realization of y is
common knowledge among all agents. They choose their actions after
observing the realization of y. Conditional on y, all agents believe
that [theta] is distributed normally with mean y and variance
[[sigma].sup.2.sub.[eta]]. Hence, agent i's optimal action is
(4) [a.sub.i](y) = (1 - r)E([theta]|y) + rE([bar.a]|y),
where [a.sub.i](y) denotes the action taken by agent i as a
function of y. Since E([theta]|y) = y, and since everyone can condition
on y, we have E([a.sub.j]|y) = [a.sub.j](y), so that
(5) [a.sub.i](y) = y
for all i. The average action is then y, and the distance between
[a.sub.i] and [theta] is
E[[(y - [theta]).sup.2]|[theta]] = [[sigma].sup.2.sub.[eta]].
Thus the less noise in the public signal, the closer is the action
to the fundamentals. We will now contrast this with the Hayekian case in
which each agent has his or her own "window on the world."
The Hayekian Case
Consider now the case where, in addition to the public signal y,
agent i observes the realization of a private signal:
(6) [x.sub.i] = [theta] + [[epsilon].sub.i],
where noise terms [[epsilon].sub.i] are normally distributed with
zero mean and variance [[sigma].sup.2.sub.[epsilon]], independent of
[theta] and [eta], so that E([[epsilon].sub.i][[epsilon].sub.j]) = 0 for
i [not equal to] j. The private signal of one agent is not observable by
the others; each agent has a privileged view of his or her own small
sliver of the world.
As before, the agents' decisions are made after observing
their signals. Denote by [alpha] and [beta] the precision of the public
and the private information, respectively, where
(7) [alpha] = 1/[[sigma].sup.2.sub.[eta]] and [beta] = 1/[[sigma].sup.2.sub.[epsilon]].
Then, based on both private and public information, agent i's
expected value of [theta] is
(8) [E.sub.i]([theta]) = ([alpha]y + [beta][x.sub.i])/([alpha] + [beta]).
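Equation 8 is the standard precision-weighted combination of the two signals. As a quick numerical sanity check (a sketch with illustrative parameter values, not part of the original analysis), one can verify that the weight [alpha]/([alpha] + [beta]) on the public signal minimizes the error variance over all unbiased linear combinations of y and [x.sub.i], and that the minimized variance equals the posterior variance 1/([alpha] + [beta]):

```python
import numpy as np

alpha, beta = 2.0, 5.0             # illustrative precisions of y and x_i

# Error variance of the unbiased linear estimate w*y + (1 - w)*x_i
w = np.linspace(0.0, 1.0, 100001)
err_var = w**2 / alpha + (1.0 - w)**2 / beta

# The minimizing weight matches alpha/(alpha + beta)
w_star = w[np.argmin(err_var)]
print(round(w_star, 4), round(alpha / (alpha + beta), 4))  # both ≈ 0.2857
```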
One simple way to solve for the equilibrium is to posit that
actions are a linear function of signals. We will follow this with a
demonstration that this linear equilibrium is the unique equilibrium,
which also gives important insights into the double-edged nature of
public information. Thus, as the first step, suppose that each agent
follows a linear rule,
(9) [a.sub.j] = [kappa][x.sub.j] + (1 - [kappa])y
for some constant [kappa]. Then agent i's conditional estimate
of the average expected action across all agents is
[E.sub.i]([bar.a]) = [kappa]([alpha]y + [beta][x.sub.i])/([alpha] + [beta]) + (1 - [kappa])y
= [[kappa][beta]/([alpha] + [beta])][x.sub.i] + [1 - [kappa][beta]/([alpha] + [beta])]y.
Agent i's optimal action is
(10) [a.sub.i] = (1 - r)[E.sub.i]([theta]) + r[E.sub.i]([bar.a])
= (1 - r)([alpha]y + [beta][x.sub.i])/([alpha] + [beta]) + r{[[kappa][beta]/([alpha] + [beta])][x.sub.i] + [1 - [kappa][beta]/([alpha] + [beta])]y}
= [[beta](r[kappa] + 1 - r)/([alpha] + [beta])][x.sub.i] + [1 - [beta](r[kappa] + 1 - r)/([alpha] + [beta])]y.
Comparing coefficients in equations 9 and 10, we therefore have
[kappa] = [beta](r[kappa] + 1 - r)/([alpha] + [beta]),
from which we can solve for [kappa]. The equilibrium action
[a.sub.i] is given by
(11) [a.sub.i] = [[alpha]y + [beta](1 - r)[x.sub.i]]/[[alpha] + [beta](1 - r)],
and the average action is
(12) [bar.a] = [[alpha]y + [beta](1 - r)[theta]]/[[alpha] + [beta](1 - r)].
First observe that the larger is [alpha], the further away is the
average action from [theta]. This is true even if r = 0. However, if r
> 0, even less weight is put on the true state and even more weight
on the public signal.
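The fixed-point condition for [kappa] can be checked numerically. The sketch below (parameter values purely illustrative) iterates the best-response map for [kappa] and compares the limit with the closed-form equilibrium weight [beta](1 - r)/[[alpha] + [beta](1 - r)] on the private signal in equation 11:

```python
alpha, beta, r = 1.0, 1.0, 0.85   # illustrative parameter values

# Iterate the best-response map for the private-signal weight kappa:
# kappa <- beta*(r*kappa + 1 - r)/(alpha + beta)
kappa = 0.5                       # arbitrary starting point
for _ in range(200):
    kappa = beta * (r * kappa + 1.0 - r) / (alpha + beta)

# Closed-form weight on x_i in equation 11
closed_form = beta * (1.0 - r) / (alpha + beta * (1.0 - r))
print(round(kappa, 6), round(closed_form, 6))  # both ≈ 0.130435
```

The map is a contraction (slope r[beta]/([alpha] + [beta]) < 1), so the iteration converges to the unique fixed point regardless of the starting value.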
Together with Franklin Allen, we have developed this theme in an
asset pricing model where the price of an asset today is the average
expectation of tomorrow's price. (31) Average expectations fail to
satisfy the "law of iterated expectations." That is to say,
the average expectation today of the average expectation tomorrow of
future payoffs is not the same thing as the average expectation of
future payoffs. In a Hayekian environment, the failure of the law of
iterated expectations follows a systematic pattern that puts too much
weight on shared information--conventional wisdom or other public
signals, including past prices. Past prices, in particular, receive too
much weight relative to the statistically optimal weight in a
frictionless world. Given the importance of the failure of the law of
iterated expectations, it is illuminating to dwell briefly on how the
example sketched above can be shown to be an example of such a failure.
Higher-Order Beliefs
Recall that agent i's decision rule is
(13) [a.sub.i] = (1 - r)[E.sub.i]([theta]) + r[E.sub.i]([bar.a]).
Substituting and writing [bar.E]([theta]) for the average
expectation of [theta] across agents, we have
(14) [a.sub.i] = (1 - r)[E.sub.i]([theta]) + (1 -
r)r[E.sub.i]([bar.E]([theta])) + (1 -
r)[r.sup.2][E.sub.i]([[bar.E].sup.2]([theta]))+ ... = (1 -
r)[[infinity].summation over
k=0][r.sup.k][E.sub.i]([[bar.E].sup.k]([theta])).
To evaluate this expression, one must solve explicitly for
[E.sub.i]([[bar.E].sup.k]([theta])). Recall that agent i's expected
value of [theta] is
(15) [E.sub.i]([theta]) = ([alpha]y + [beta][x.sub.i])/([alpha] + [beta]).
Thus the average expectation of [theta] across agents is
[bar.E]([theta]) = ([alpha]y + [beta][theta])/([alpha] + [beta]).
31. Allen, Morris, and Shin (forthcoming).
Now agent i's expectation of the average expectation of
[theta] across agents is
[E.sub.i]([bar.E]([theta])) = [E.sub.i][([alpha]y + [beta][theta])/([alpha] + [beta])],
and the average expectation of the average expectation of [theta] is
[[bar.E].sup.2]([theta]) = [bar.E]([bar.E]([theta]))
= {[[([alpha] + [beta]).sup.2] - [[beta].sup.2]]y + [[beta].sup.2][theta]}/[([alpha] + [beta]).sup.2].
Higher-order expectations put more weight on the (noisy) public
information at the expense of the truth. Not only does the law of
iterated expectations fail; it fails in a systematic way where
higher-order expectations are less informative about [theta]. By
induction we have [[bar.E].sup.k]([theta]) = (1 - [[mu].sup.k])y +
[[mu].sup.k][theta], where [mu] = [beta]/([alpha] + [beta]), and
[a.sub.i] = (1 - r)[[infinity].summation over (k=0)][r.sup.k][(1 - [[mu].sup.k+1])y + [[mu].sup.k+1][x.sub.i]]
= [1 - [mu](1 - r)/(1 - r[mu])]y + [[mu](1 - r)/(1 - r[mu])][x.sub.i]
= [[alpha]y + [beta](1 - r)[x.sub.i]]/[[alpha] + [beta](1 - r)].
This is exactly the unique linear equilibrium we identified
earlier.
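A short numerical sketch (illustrative parameters) confirms both points: summing the geometric series of higher-order expectations reproduces the equilibrium weight on private information, and the weight that [[bar.E].sup.k]([theta]) places on the public signal, 1 - [[mu].sup.k], grows with the order k:

```python
import numpy as np

alpha, beta, r = 1.0, 1.0, 0.85   # illustrative parameter values
mu = beta / (alpha + beta)

# Weight on the private signal x_i implied by the series
# a_i = (1 - r) * sum_k r^k [(1 - mu^(k+1)) y + mu^(k+1) x_i]
ks = np.arange(500)
w_series = (1.0 - r) * np.sum(r**ks * mu**(ks + 1))

# Weight on x_i in the equilibrium action of equation 11
w_equilibrium = beta * (1.0 - r) / (alpha + beta * (1.0 - r))
print(round(w_series, 6), round(w_equilibrium, 6))  # both ≈ 0.130435

# The k-th order average expectation puts weight 1 - mu^k on y,
# so higher orders load ever more on the public signal:
print([round(1.0 - mu**k, 3) for k in (1, 2, 5, 10)])
```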
Economic Interpretations of the Decision Rule
The decision rule in equation 2, which was motivated by the beauty
contest analogy, can be given more familiar macroeconomic underpinnings
by appealing to the "island economy" model of Robert Lucas and
Edmund Phelps, (32) although, for reasons to be discussed below, we
favor another interpretation of the decision rule. Suppose that there
are a large number of small islands, which can be interpreted either as
distinct geographical regions or as different sectors of the economy.
There is a single good in this archipelago, and the supply of this good
on island i is denoted by [q.sup.s.sub.i]. The supply of the good is
increasing in the difference between the price on island i and the
perceived average price across all islands. In particular, we take the
linear supply function
(16) [q.sup.s.sub.i] = b[[p.sub.i] - [E.sub.i]([bar.p])],
where [p.sub.i] is the price on island i, [bar.p] is the average price
across all islands, and b > 0 is a supply parameter. The expectations
operator [E.sub.i](*) denotes the expectation with respect to the
information available to residents of island i.
The demand for the good on island i is a decreasing linear function
of the price on the island, but it also depends on the best estimate of
some underlying fundamental variable [theta]. In the original treatment
of this problem by Lucas and Phelps, [theta] is construed as being the
money supply and is under the central bank's control. Demand on
island i is thus given by
(17) [q.sup.d.sub.i] = [E.sub.i]([theta]) - [p.sub.i],
where [theta] is the money supply. Market clearing then implies
(18) [p.sub.i] = (1 - r)[E.sub.i]([theta]) + r[E.sub.i]([bar.p]),
where r = b/(b + 1). This is the pricing rule obtained by Phelps,
(33) which extends the standard Lucas-Phelps island economy model by
incorporating a role for the expectations of prices set on other
islands. Thus we retrieve the beauty contest decision rule.
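The market-clearing algebra can be verified mechanically. In the sketch below (with hypothetical values for the conditional expectations), equating the supply and demand schedules of equations 16 and 17 reproduces the beauty contest pricing rule of equation 18 with r = b/(b + 1):

```python
b = 3.0                            # illustrative supply parameter
E_theta, E_pbar = 1.2, 0.8         # hypothetical conditional expectations

# Market clearing: b*(p - E_pbar) = E_theta - p, solved for p
p_clearing = (E_theta + b * E_pbar) / (b + 1.0)

# Beauty contest form of equation 18 with r = b/(b + 1)
r = b / (b + 1.0)
p_contest = (1.0 - r) * E_theta + r * E_pbar
print(round(p_clearing, 6), round(p_contest, 6))  # both 0.9
```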
Although the outward form of equation 18 conforms to the beauty
contest decision rule, the fundamental variable [theta] in the original
version of the island economy model is something that the central bank
has full control over. Thus learning about [theta] is not an issue, and
so the informational role of prices has no part to play in the analysis.
A more appropriate interpretation of the pricing rule in equation
18 is in terms of the price-setting decisions of firms that have some
market power due to imperfect substitutability between goods. (34) In
this context [theta] represents the underlying marginal cost conditions
for the firms, which also correspond to the output gap under some
simplifications. (35) Firms care about the prices set by other firms
because there is price competition across firms. Woodford considers
pricing rules for firms of the form
(19) [p.sub.i] = [E.sub.i](p) + [xi][E.sub.i](x),
where [p.sub.i] is the (log) price set by firm i, p is the average
price across firms, x is marginal cost (in real terms), and [xi] is a
constant between 0 and 1. (36) The operator [E.sub.i] denotes the
conditional expectation with respect to firm i's information set.
The parameter [xi] is related to the elasticity of substitution between
goods, and it becomes small as the economy becomes more competitive.
(37) An active literature has developed exploring the Hayekian theme in
the context of an imperfectly competitive economy. (38)
Rewriting equation 19 in terms of nominal marginal cost, defined as
[theta] [equivalent to] x + p, we have
(20) [p.sub.i] = (1 - [xi])[E.sub.i](p) + [xi][E.sub.i]([theta]),
yielding another way to derive the beauty contest decision rule.
The examples above pertain to the pricing of goods, but many of the
properties of beauty contests arise also in the context of financial
market pricing. Financial market prices present the additional
complication that they are forward looking: the price today equals the
discounted expected payoff at some future date. Nevertheless, the
excessive impact of public information can be shown, at the cost of some
additional apparatus. Allen, Morris, and Shin derive asset pricing
formulas of the form
(21) [p.sub.t] = [[bar.E].sub.t][[bar.E].sub.t+1] ... [[bar.E].sub.t+h][[theta].sub.t+h] - [omega][s.sub.t],
where [p.sub.t] is the price of a financial asset at time t,
[[theta].sub.t+h] is the fundamental payoff at time t + h, [omega] is a
constant, and [s.sub.t] is the asset supply, with a mean of zero. (39) Thus the
price of an asset today is the average expectation today of the average
expectation tomorrow, and so on, of the eventual realized fundamentals.
The law of iterated expectations fails also in this context, and
the direction of the failure is toward excessive influence of public
information. Asset pricing applications present some technical
difficulties, such as the fact that past (and even current) prices
constitute public signals that enter into the information sets of
traders. Thus the innocuous-looking notation [[bar.E].sub.t] actually
conceals much subtlety.
However, the broad conclusions are the same as for the static
beauty contest. Public information has a disproportionate impact on
financial market prices. The precise sense in which this is true is that
the market price deviates systematically from the average expectation of
the fundamental value, and the bias is always toward commonly shared
information, including past prices. More formally,
(22) [p.sub.t] = [[bar.E].sub.t][[bar.E].sub.t+1] ... [[bar.E].sub.t+h][[theta].sub.t+h] - [omega][s.sub.t] [not equal to] [[bar.E].sub.t]([[theta].sub.t+h]) - [omega][s.sub.t],
and the distance between [p.sub.t] and the expectation of
[[theta].sub.t+h] based purely on public information is smaller than the
distance between [[bar.E].sub.t][[theta].sub.t+h] and the expectation of
[[theta].sub.t+h] based purely on public information. Thus the key
features of the overreaction to public information apply to financial
markets also.
The Precision of Endogenous Information
So far we have treated the precision [alpha] of public information
as given. But the information available to central banks derives from
outcomes in the economy itself and hence is the result of actions taken
by individual economic agents. To the extent that individuals'
decisions are affected by public information, we can expect the signal
values of the resulting outcomes to be sensitive to the disclosure
regime.
We show that this is indeed the case. The information precision of
a central bank that issues regular forecasts is lower than that of a
central bank that simply tracks the evolution of the fundamentals
through its signals. We postpone a discussion of the potential welfare
effects of such impaired signal precision until the next section, and
instead concentrate here on why the information value deteriorates when
a central bank discloses more.
Time is discrete and indexed by t [member of] {..., -2, -1, 0, 1,
2, ...}. The fundamentals {[[theta].sub.t]} evolve as a Gaussian random
walk:
(23) [[theta].sub.t] = [[theta].sub.t-1] + [[phi].sub.t],
where {[[phi].sub.t]} are independent standard normal innovations.
At each date a new generation of private sector actors observes a
noisy signal of the fundamentals as of that date, together with any
present and past disclosures by the central bank. Individual i's
noisy signal in generation t is given by
[x.sub.it] = [[theta].sub.t] + [[epsilon].sub.it],
where [[epsilon].sub.it], are independently and identically
distributed (i.i.d.) normal across individuals and across generations
with precision [beta].
We assume a sparse information set for the individual at date t
(for instance, there is no access to the private information of previous
generations). But we do this as a way of setting the basic level of
information to which the central bank can add by disclosing its own
estimates and forecasts, if it chooses to. Extending the model to
encompass richer settings would be worthwhile for specific applications.
The private sector agents play a beauty contest game, following the
decision rule
(24) [a.sub.it] = (1 - r)[E.sub.it]([[theta].sub.t]) +
r[E.sub.it]([[bar.a].sub.t]),
where [[bar.a].sub.t] is the average action across individuals
at date t.
The central bank observes a noisy signal about what the private
sector individuals did in the previous period. At date t that signal is
(25) [z.sub.t] = [[bar.a].sub.t-1] + [[psi].sub.t],
where {[[psi].sub.t]} are i.i.d. Gaussian noise terms independent
of all other random variables, and with variance 1/[gamma].
The central bank's information set at date t is the collection
of all past signals {... , [z.sub.t-2], [z.sub.t-1], [z.sub.t]}. We are
interested in the central bank's information precision at date t as
measured by
1/Var([[theta].sub.t]|[z.sub.t], [z.sub.t-1], ...).
Compare two possible regimes. In the first the central bank makes
no disclosures but simply tracks the fundamentals through its signals.
In the second the central bank discloses its best estimates of the
fundamentals. Since the fundamentals follow a random walk, the
disclosure of the central bank's estimate of [theta] is tantamount to
issuing forecasts of future values of [theta] at all horizons.
The Case without Disclosure
In the first case, since there is a continuum of private sector
agents, and they receive i.i.d. signals conditional on the fundamentals,
the average action [[bar.a].sub.t] fully reveals the true fundamental
state [[theta].sub.t]. Thus the central bank's signals are given
by
[z.sub.t] = [[theta].sub.t-1] + [[psi].sub.t].
We write [z.sub.t] as the linear estimate of [[theta].sub.t], based
on {[z.sub.t], [z.sub.t-1], [z.sub.t-2], ...}, and let [[alpha].sub.t],
be the precision of this estimate as measured by
1/Var([[theta].sub.t]|[z.sub.t], [z.sub.t-1], ...). Then, on observing
[z.sub.t+1] at date t + 1, the linear estimate of [[theta].sub.t] is
(26) ([[alpha].sub.t][z.sub.t] + [gamma][z.sub.t+1])/([[alpha].sub.t] + [gamma]),
with precision [[alpha].sub.t] + [gamma]. Since [[theta].sub.t+1] =
[[theta].sub.t] + [[phi].sub.t+1], we have a recursive formula for the
central bank's information precision over time, namely,
(27) 1/[[alpha].sub.t+1] = Var([[theta].sub.t+1]|[z.sub.t+1], [z.sub.t], ...)
= 1 + 1/([[alpha].sub.t] + [gamma]).
The steady-state information precision in the nondisclosure case is
thus the value of [alpha] that solves
(28) [alpha][1 + 1/([alpha] + [gamma])] = 1.
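Reading the steady-state condition as 1/[alpha] = 1 + 1/([alpha] + [gamma]), equation 28 is a quadratic, [[alpha].sup.2] + [gamma][alpha] - [gamma] = 0, whose positive root gives the steady state in closed form. A minimal sketch (with an illustrative value of [gamma]) confirms that iterating the recursion in equation 27 converges to this root:

```python
import math

gamma = 1.0                       # illustrative precision of the bank's signal noise

# Iterate the recursion of equation 27 to its steady state
alpha = 0.5                       # arbitrary starting precision
for _ in range(300):
    alpha = 1.0 / (1.0 + 1.0 / (alpha + gamma))

# Positive root of alpha^2 + gamma*alpha - gamma = 0
root = (-gamma + math.sqrt(gamma**2 + 4.0 * gamma)) / 2.0
print(round(alpha, 6), round(root, 6))  # both ≈ 0.618034
```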
The Case with Disclosure
In the second case, where the central bank discloses its signals to
the individual agents, the information set of agent i in generation t is
(29) {[x.sub.it], [z.sub.t], [z.sub.t-1], ... }.
Let [z.sub.t] be the linear estimate of [[theta].sub.t] based on
the central bank's disclosures only, and let [[alpha].sub.t], be
the precision of this estimate. Then this individual will take action as
follows:
(30) [a.sub.it] = [[[alpha].sub.t][z.sub.t] + (1 - r)[beta][x.sub.it]]/[[[alpha].sub.t] + (1 - r)[beta]].
Taking the average across individuals in equation 30, the average
action is
[[bar.a].sub.t] = [[[alpha].sub.t][z.sub.t] + (1 - r)[beta][[theta].sub.t]]/[[[alpha].sub.t] + (1 - r)[beta]],
so that [z.sub.t+1] = [[bar.a].sub.t] + [[psi].sub.t+1]. Solving for
[[theta].sub.t] as a function of [z.sub.t] and [z.sub.t+1],
[[theta].sub.t] = [1 + [[alpha].sub.t]/[(1 - r)[beta]]][z.sub.t+1] - [[[alpha].sub.t]/[(1 - r)[beta]]][z.sub.t] - [1 + [[alpha].sub.t]/[(1 - r)[beta]]][[psi].sub.t+1].
Thus the incremental information value to the central bank from
observing [z.sub.t+1] comes from the signal
(31) [s.sub.t+1] [equivalent to] [1 + [[alpha].sub.t]/[(1 - r)[beta]]][z.sub.t+1] - [[[alpha].sub.t]/[(1 - r)[beta]]][z.sub.t],
which is orthogonal to {[z.sub.[tau]]: [tau] [less than or equal
to] t}. The precision of [s.sub.t+1] is
[gamma][[1 + [[alpha].sub.t]/[(1 - r)[beta]]].sup.-2],
which we denote by [[gamma].sub.t].
Since [[gamma].sub.t] [less than or equal to] [gamma], we can
conclude that the incremental information value to the central bank of
observing its signal [z.sub.t+1] is lower in the disclosure case.
Moreover, this incremental information value is lower, the higher was
the central bank's overall precision [[alpha].sub.t] in the
previous period. In other words, raising the central bank's overall
information precision has the effect of lowering the value of its
subsequent signal. The intuition is that increased precision at t
intensifies the beauty contest and reduces the information value of the
average action. This then lowers the information precision of the signal
arriving at t + 1.
At date t + 1 the central bank's linear estimate of
[[theta].sub.t] is
(32) ([[alpha].sub.t][z.sub.t] + [[gamma].sub.t][s.sub.t+1])/([[alpha].sub.t] + [[gamma].sub.t]),
with precision [[alpha].sub.t] + [[gamma].sub.t]. Since
[[theta].sub.t+1] = [[theta].sub.t] + [[phi].sub.t+1], we have a
recursive formula for the central bank's information precision in
the disclosure case:
(33) 1/[[alpha].sub.t+1] = 1 + 1/([[alpha].sub.t] + [[gamma].sub.t])
= 1 + 1/{[[alpha].sub.t] + [gamma][[(1 - r)[beta]/([[alpha].sub.t] + (1 - r)[beta])].sup.2]}.
The steady-state information precision in the disclosure case is
the value of [alpha] that solves
(34) [alpha]{1 + 1/([alpha] + [gamma][[(1 - r)[beta]/([alpha] + (1 - r)[beta])].sup.2])} = 1.
Comparing the Regimes
Comparing the steady-state information precision levels in the two
regimes given by equations 28 and 34, we can see that steady-state
precision is lower under the disclosure regime, since the value of the
expression in curly brackets in equation 34 is greater than
1 + 1/([alpha] + [gamma]). For parameter values r = 0.85 and [beta] = 1,
(40) we can plot in figure 3 the steady-state information precision
[alpha] as a function of the variance 1/[gamma] of the noise term [psi].
The central bank has higher information precision in the nondisclosure
case. In both cases this precision can be raised to the upper bound of 1
by reducing the variance of the noise, but the disclosure case requires
very little noise to get close to the upper bound.
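The comparison can be reproduced numerically. The sketch below (using the text's r = 0.85 and [beta] = 1, with [gamma] = 1 as an illustrative noise precision) iterates the recursions of equations 27 and 33 to their steady states; the disclosure regime ends with strictly lower precision:

```python
r, beta, gamma = 0.85, 1.0, 1.0   # r, beta from the text; gamma illustrative

a_nd = a_d = 0.5                  # arbitrary starting precisions
for _ in range(500):
    # Nondisclosure (equation 27): the signal arrives with full precision gamma
    a_nd = 1.0 / (1.0 + 1.0 / (a_nd + gamma))
    # Disclosure (equation 33): signal precision is deflated to
    # gamma_t = gamma * [(1-r)*beta / (alpha_t + (1-r)*beta)]^2
    g_t = gamma * ((1.0 - r) * beta / (a_d + (1.0 - r) * beta)) ** 2
    a_d = 1.0 / (1.0 + 1.0 / (a_d + g_t))

print(round(a_nd, 3), round(a_d, 3))
print(a_d < a_nd)   # disclosure lowers the bank's steady-state precision
```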
It is worth noting that the deterioration of the central
bank's signal value under the disclosure regime holds even when r =
0, so that there are no coordination elements, although coordination
elements exacerbate the effect. We return to this issue below.
It is also apparent from the implicit formula in equation 34 for
steady-state [alpha] that the central bank's information precision
is a function of the private sector agents' information precision.
This is intuitively clear, since the central bank learns by observing
what the individual agents do. The reason [beta] enters in this relation
is that the aggregate actions [[bar.a].sub.t] are revealing only
to the extent that private agents put weight on their own private
signals. The more informative their private signals, the greater the
information value of the aggregate action. In this sense the central
bank's information value is dependent on (and derivative of) the
private sector agents' information precision.
Figure 4 plots [alpha] as a function of [beta] for varying values
of the noise [[psi].sub.t] in the central bank's signal (holding
other parameter values the same as in figure 3). The central bank's
information precision is increasing in the private sector's
information precision, but [alpha] can lie below [beta], especially when
[beta] is large. One reason is that, whereas private sector agents have
contemporaneous information about [[theta].sub.t] from their signals
(say, by observing a signal about their current marginal cost), the
central bank's signal comes with some delay, and the innovation to
[[theta].sub.t] increases the central bank's uncertainty.
Finally, it is worth noting that the forecasting rule for the
central bank will change if it moves from a nondisclosure regime to a
disclosure regime. Since [theta] follows a random walk, the linear estimates
given by equations 26 and 32 are also forecasts of all future values of
[theta]. Needless to say, if the central bank continues to use the old
(nondisclosure) forecasting rule under the new (disclosure) regime, its
forecast will be off the mark. One could view this as a variation of the
Lucas critique as applied to central bank transparency. The time-series
properties of aggregate actions will change as the disclosure regime
changes.
Welfare Effects of Transparency
The impaired signal value for the transparent central bank will
impinge on any control problem that it faces, posing greater
challenges for decision-making under uncertainty. Thus the central bank
will face a trade-off between the welfare gains that result from being
able to steer the future beliefs of economic agents, and the impaired
signal value that results from disclosure of its forecasts. Evaluating
the terms of such a trade-off is an important topic for future
investigation. Furthermore, the degree of transparency itself emerges as
one dimension of the optimal control problem: one can expect to see a
future debate on "optimal transparency." We believe these
issues to be the key to resolving the welfare effects of transparency in
the spirit of the Hayek-Lange debate.
The debate to date, however, has concentrated on the one-shot model
of beauty contests sketched earlier, where [alpha] is taken as given.
Although the current debate sheds no light on the endogenous nature of
central bank information, it is nevertheless illuminating in outlining
some of the other dimensions of the debate.
Welfare Effects in the Static Analysis
Recall that, under the equilibrium decision rule (equation 12), the
average action of individuals was equal to

\frac{\alpha y + \beta(1-r)\theta}{\alpha + \beta(1-r)}.

Expressed in terms of the basic random variables \theta and \eta, we have

(35) \bar{a} = \theta + \frac{\alpha \eta}{\alpha + \beta(1-r)}.
Thus increases in [alpha] and increases in r unambiguously reduce
the informativeness of the average action as a signal of [theta]. In
particular, public information always reduces the informativeness of
average actions, even if r = 0, but strategic complementarities
exacerbate the effect. Although we have not formally modeled the social
value of information about [theta], we have focused on the importance of
the information aggregation role of prices in private investment
decisions and in the forecasting of the central bank. The welfare losses
come not from coordination itself, but rather from the externalities
imposed by the agents playing the beauty contest on those other agents
who rely on prices to be informationally efficient. There is a market
failure in which agents fail to internalize the externalities flowing
from uninformative prices.
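A quick numerical check of this mechanism (a Python sketch; the parameter values are illustrative, not calibrated to anything in the paper): the noise component of the average action in equation 35 has variance [alpha]/[[alpha] + [beta](1 - r)]^2, which rises with the strength of the coordination motive r and falls with private precision [beta].

```python
# Noise variance of the average action in equation 35:
# abar = theta + alpha * eta / (alpha + beta * (1 - r)), with Var(eta) = 1/alpha,
# so Var(abar - theta) = alpha / (alpha + beta * (1 - r))**2.
# Parameter values are illustrative only.

def noise_variance(alpha: float, beta: float, r: float) -> float:
    return alpha / (alpha + beta * (1 - r)) ** 2

alpha, beta = 1.0, 5.0

# A stronger coordination motive r makes the average action noisier ...
variances_in_r = [noise_variance(alpha, beta, r) for r in (0.0, 0.25, 0.5, 0.75, 0.95)]
assert all(later > earlier for earlier, later in zip(variances_in_r, variances_in_r[1:]))

# ... while more precise private signals (higher beta) make it cleaner.
variances_in_beta = [noise_variance(alpha, b, 0.5) for b in (1.0, 2.0, 5.0, 10.0)]
assert all(later < earlier for earlier, later in zip(variances_in_beta, variances_in_beta[1:]))
```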
Some recent debates on the social value of information have focused
on the welfare of the coordinating players themselves (rather than their
external effect on others learning from their actions). In a previous
paper we assumed that the beauty contest element of the
individuals' decision is socially wasteful, entering only as a
zero-sum component in payoffs, so that social welfare depends only on
the variation of individual actions around [theta]. (41) In this case
social welfare is enhanced only to the extent that individuals'
actions approximate the fundamental state [theta]. In such a
formulation, increased precision of public information is not guaranteed
to raise welfare. Expected welfare in that paper is given by
(36) E(W|\theta) = -\frac{\alpha^2 E(\eta^2) + \beta^2 (1-r)^2 E(\epsilon_i^2)}{[\alpha + \beta(1-r)]^2}
= -\frac{\alpha + \beta(1-r)^2}{[\alpha + \beta(1-r)]^2},

since Var(\eta) = E(\eta^2) = 1/\alpha and Var(\epsilon_i) = E(\epsilon_i^2) = 1/\beta.
Welfare is increasing in the precision of the private signals, as
we can see by differentiating equation 36 with respect to [beta]:
(37) \frac{\partial E(W|\theta)}{\partial \beta} = \frac{(1-r)\left[(1+r)\alpha + (1-r)^2 \beta\right]}{[\alpha + \beta(1-r)]^3}.
However, the derivative of equation 36 with respect to [alpha] is

(38) \frac{\partial E(W|\theta)}{\partial \alpha} = \frac{\alpha - (2r-1)(1-r)\beta}{[\alpha + \beta(1-r)]^3},

so that

(39) \frac{\partial E(W|\theta)}{\partial \alpha} \leq 0 \quad \text{if and only if} \quad r \geq \frac{1}{2} \ \text{and}\ \frac{\beta}{\alpha} \geq \frac{1}{(2r-1)(1-r)}.

When r > 1/2, there are ranges of the parameters where increased
precision of public information is detrimental to welfare.
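The sign condition in equation 39 is easy to verify numerically. The sketch below (in Python, with illustrative parameter values) evaluates the closed-form derivative in equation 38 and cross-checks it against a finite difference of the welfare function in equation 36.

```python
# Numerical check of equations 36, 38, and 39 (a sketch; parameter
# values are illustrative, not calibrated).

def welfare(alpha: float, beta: float, r: float) -> float:
    # Equation 36: E(W|theta) = -(alpha + beta*(1-r)**2) / (alpha + beta*(1-r))**2
    return -(alpha + beta * (1 - r) ** 2) / (alpha + beta * (1 - r)) ** 2

def dW_dalpha(alpha: float, beta: float, r: float) -> float:
    # Equation 38: derivative of expected welfare with respect to alpha
    return (alpha - (2 * r - 1) * (1 - r) * beta) / (alpha + beta * (1 - r)) ** 3

beta, r = 10.0, 0.85                         # strong coordination motive, r > 1/2
hurdle_ratio = 1 / ((2 * r - 1) * (1 - r))   # equation 39: hurdle for beta/alpha

alpha_low = beta / (2 * hurdle_ratio)        # beta/alpha above the hurdle
alpha_high = beta / (0.5 * hurdle_ratio)     # beta/alpha below the hurdle
assert dW_dalpha(alpha_low, beta, r) < 0     # more public precision hurts here...
assert dW_dalpha(alpha_high, beta, r) > 0    # ...but helps here

# Cross-check the closed-form derivative against a finite difference.
h = 1e-6
fd = (welfare(alpha_low + h, beta, r) - welfare(alpha_low - h, beta, r)) / (2 * h)
assert abs(fd - dW_dalpha(alpha_low, beta, r)) < 1e-6
```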
If [alpha] is restricted to some interval [0, [bar.[alpha]]] for
technical feasibility reasons, we can expect a corner solution to the
optimization of [alpha], in which the social optimum entails either
providing no public information at all (that is, setting [alpha] = 0)
or providing the maximum feasible amount of public information (setting
[alpha] = [bar.[alpha]]). The better informed is the private sector, the
higher the hurdle rate of the precision of public information that would
make it welfare enhancing.
However, the zero-sum nature of the coordination element in payoffs
is crucial to our 2002 result. If instead the coordination itself has
some social value, the ambiguous effect of [alpha] disappears. Woodford
describes utility functions that give rise to the same linear decision
rule (equation 2) but allow for a social value of coordinated action;
(42) in this case welfare is no longer given by equation 36 but rather
by
(40) -\frac{\alpha + \beta(1 - r^2)}{[\alpha + \beta(1-r)]^2}.
Woodford points out that this function is globally increasing in
[alpha], and so greater precision of public information cannot be
harmful.
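Woodford's monotonicity claim can likewise be confirmed on a grid (a Python sketch; the values of [beta] and r are illustrative).

```python
# Check that Woodford's welfare expression (equation 40) is increasing in
# alpha over a grid (a sketch; beta and r are illustrative values).

def welfare_woodford(alpha: float, beta: float, r: float) -> float:
    # Equation 40: -(alpha + beta*(1 - r**2)) / (alpha + beta*(1 - r))**2
    return -(alpha + beta * (1 - r ** 2)) / (alpha + beta * (1 - r)) ** 2

beta, r = 10.0, 0.85
alphas = [0.01 * i for i in range(1, 2001)]   # alpha from 0.01 to 20
values = [welfare_woodford(a, beta, r) for a in alphas]
# Welfare is monotonically increasing in alpha along the whole grid:
assert all(later > earlier for earlier, later in zip(values, values[1:]))
```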
In the same spirit, George-Marios Angeletos and Alessandro Pavan propose a microfounded model that incorporates coordination elements in
the welfare function. (43) The coordination element comes from an
investment problem with positive spillover effects, and so an explicit
coordination premium is built into the problem. In particular, the
welfare effects are closely tied to the fact that better public
information allows the agents to eliminate the inefficiencies associated
with coordination failure.
The coordination element in Hellwig's analysis is more subtle.
(44) Hellwig presents a macroeconomic model with monopolistic
competition based on the Dixit-Stiglitz aggregators for consumption and
price. In particular, the average price for the economy as a whole is
given by the index
(41) P_t = \left[\int_0^1 \left(p_t^i\right)^{1-\theta} di\right]^{1/(1-\theta)},
where [theta] > 1 is the elasticity of substitution between
goods, and [p.sup.i.sub.t] is the price charged by firm i at date t. In
effect, the average price is a generalized harmonic mean of prices. (45)
This has important consequences. Although profit is increasing in
[P.sub.t], it is decreasing in price dispersion, reflecting the fact
that the harmonic mean always lies below the arithmetic mean. Thus a
firm's expected profit increases when price dispersion is reduced.
In turn, the form of the price index reflects the preference for variety
implicit in the Dixit-Stiglitz utility function. For other
specifications of preferences, an alternative perspective is to note
that a consumer's indirect utility (utility as a function of prices
at the optimum) is a convex function of prices, reflecting the ability
of consumers to switch away from expensive goods in favor of cheaper
ones. Price dispersion then has a beneficial effect. It would be
desirable to understand more generally how welfare effects in
Hellwig's model depend on attitudes to price dispersion.
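A discrete analogue of the index in equation 41 makes the harmonic-mean property concrete (a Python sketch; the elasticity and price vectors are made up for illustration).

```python
# Discrete analogue of the Dixit-Stiglitz price index in equation 41:
# P = ( (1/n) * sum(p_i ** (1 - theta)) ) ** (1 / (1 - theta)).
# With theta > 1 this is a power mean with a negative exponent, which
# lies below the arithmetic mean when prices are dispersed.
# Values below are invented for illustration.

def ds_price_index(prices: list[float], theta: float) -> float:
    n = len(prices)
    return (sum(p ** (1 - theta) for p in prices) / n) ** (1 / (1 - theta))

theta = 4.0
uniform = [1.0, 1.0, 1.0, 1.0]
dispersed = [0.7, 0.9, 1.1, 1.3]   # same arithmetic mean of 1.0

# With no dispersion, the index equals the common price:
assert abs(ds_price_index(uniform, theta) - 1.0) < 1e-12
# Dispersion pulls the index below the arithmetic mean:
assert ds_price_index(dispersed, theta) < 1.0
```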
Our original results are thus sensitive to the microfoundation
given to the coordination motive. We highlighted this sensitivity in the
online appendix to our 2002 paper. (46) A recent paper by Angeletos and
Pavan provides a unified analysis of the value of public information in
strategic problems with quadratic payoffs. (47) Their analysis makes use
of a comparison of equilibrium behavior with an "efficient"
outcome, corresponding to a constrained planner's problem, where
the planner can mandate players' actions as a function of their
signals but cannot observe the signals. Their analysis highlights that
the results in our 2002 paper rely both on the lack of externalities in
payoffs and on the fact that the equilibrium involves an inefficiently
high level of coordination. Thus continuing work on the welfare analysis of public
information with a microfounded coordination motive is clearly valuable.
However, we have emphasized elsewhere in this paper that the dynamic
effects of information revelation may be important and that one should
not rely too heavily on the one-shot version of the beauty contest model
in drawing conclusions about the desirability of greater transparency.
Relative Precision
In a reply to our 2002 paper, Lars Svensson raises another issue.
(48) Taking our payoffs at face value, he makes two observations. First,
the result that welfare is locally decreasing in the precision of public
information holds only with restrictions on the information parameters
that are empirically very restrictive ([alpha] has to be small relative
to [beta]). Second, even on a global analysis, when the precision of the
public signal is no lower than that of the private signal, welfare is
higher with the public signal than without it.
Svensson's point can be explained by referring back to welfare
as given by equation 36, but expressed as a function of [alpha]. Let us
denote this by V([alpha]). Thus,
(42) V(\alpha) = -\frac{\alpha + \beta(1-r)^2}{[\alpha + \beta(1-r)]^2}.
On the assumption that withholding the public signal is equivalent
to setting [alpha] = 0, the ex ante welfare in the absence of the public
signal is thus
(43) V(0) = - 1/[beta].
There is a hurdle rate [bar.[alpha]] for the precision of the
public signal such that welfare with the public signal is lower than
welfare without it if and only if [alpha] < [bar.[alpha]]. The
hurdle rate is the value of [alpha] that solves V([alpha]) = V(0) and is
given by

(44) \bar{\alpha} = \beta(2r - 1).
Since 0 < r < 1, the hurdle rate is lower than the precision
[beta] of the private information. Thus, for the benchmark case where
the precision of public information is no lower than that of private
information (that is, where [alpha] [greater than or equal to] [beta]),
welfare is higher with the public signal than without it.
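The hurdle-rate algebra in equations 42 through 44 can be verified directly (a Python sketch with illustrative parameter values).

```python
# Verify the hurdle rate in equation 44 against equations 42 and 43
# (a sketch; beta and r are illustrative values).

def V(alpha: float, beta: float, r: float) -> float:
    # Equation 42: V(alpha) = -(alpha + beta*(1-r)**2) / (alpha + beta*(1-r))**2
    return -(alpha + beta * (1 - r) ** 2) / (alpha + beta * (1 - r)) ** 2

beta, r = 5.0, 0.8
hurdle = beta * (2 * r - 1)   # equation 44

# At the hurdle, welfare with the public signal equals welfare without it,
# which by equation 43 is V(0) = -1/beta:
assert abs(V(hurdle, beta, r) - (-1 / beta)) < 1e-12
# The hurdle lies below beta, so alpha >= beta guarantees the signal helps:
assert hurdle < beta
assert V(beta, beta, r) > -1 / beta
```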
Without taking fully into account the endogenous nature of [alpha],
it would be difficult to come to a firm conclusion on the relative sizes
of [alpha] and [beta]. We can see from figure 3 above that, when good
public information depends on a high precision of private information,
choosing one has implications for the other.
[FIGURE 3 OMITTED]
The evidence from Romer and Romer that the Federal Reserve's
Greenbook forecasts outperform the forecasts of the Federal
Reserve's private sector counterparts is often cited as evidence
that [alpha] is greater than [beta]. (49) However, private sector
forecasters are not the typical "economic agent" studied in
most economic models. Rather they are special types of agents who try to
mimic the central bank's decision problem, but with fewer
resources. Thus [beta] should be understood to refer to the information
precision of genuine economic agents who learn about the current state
of the economy from their own transactions. For instance, in the price
setting game version of the beauty contest rule, each economic agent is
a firm trying to balance the competitive effects of price changes with
the need to keep price above marginal cost. Here [beta] is the precision
of the firm's estimate of its own marginal cost. The stylized model
in our 2002 paper also suffers from the fact that it imposes
independence of the signals conditional on [theta]. The online appendix
to that paper dealt with a more general case that allows for correlated
signals, and Morris, Shin, and Hui Tong present an example, in a reply
to Svensson, (50) showing the variety of welfare effects that may arise
away from the simple benchmark case, and in particular that public
information may be damaging even at high levels of public precision.
The debates formulated in the static model have extended our
understanding along several dimensions, but the limited nature of the
static framework and the sensitivity of the results to the assumed
payoffs suggest that we have come close to the limits of useful debate
within the confines of such a restrictive framework. Much more important
would be the endogenous nature of public information precision itself.
It is this issue that lies at the heart of the debate between Hayek and
his socialist contemporaries, and the largest stakes in the monetary
policy transparency debate lie here.
Implications for Monetary Policy
One of the pitfalls of engaging in any debate is that of
overselling one's case and making possibly untenable claims. The
dangers are large, especially if the issue is something as basic and
desirable as the transparency of a prominent and influential public
institution. Thus it is worth taking a deep breath and a larger
perspective. The arguments presented in this paper do not address the
question of whether any particular forecast or other information should
or should not be disclosed. Rather, the objective has been to review
arguments about the trade-offs involved in the choice of framework for
communicating with diverse economic actors. Nor do we claim that
transparency (or inflation targeting, or publication of forecasts) is
undesirable. Our aim is the much more modest one of drawing attention to
the two-sided nature of the debate.
Transparency affords considerable leverage to central bankers in
influencing the beliefs of economic agents. But this in turn may reduce
the signal value of private sector actions. The Bank of England's
recent experience provides a glimpse into the difficulties faced by a
central bank when it has poor information on the current state of the
economy. At its August 2005 meeting the Monetary Policy Committee voted
by a majority of five to four to lower the policy rate. The minutes of
the meeting represent the views of the members who voted against the cut
as follows:

   For these members, there appeared to be little risk in waiting for
   further data. Given the difficulty in explaining a reversal of a
   decision soon after a turning point, should that prove necessary in
   the light of future data, it was advisable to accumulate a little
   more evidence than usual before changing interest rates. (51)

The uncertainty about the current state of the
economy clearly played on the minds of all members. Uncertainty as to
where the economy actually was at the time was also a prominent theme at
the press conference following the publication of the August 2005
Inflation Report. (52)
To the extent that uncertainty about current conditions makes
forecasting more difficult, another telltale sign of a drop in signal
values would be a deterioration in forecasting performance. Of course,
such a deterioration cannot be seen as a clinching argument for a drop
in signal values (there may be other culprits), but we have seen that
changes in the disclosure regime are associated with changes in the
time-series properties of aggregate outcomes, as well as changes in the
signal value of those outcomes. When poor signal values conspire with
structural change in the economy, forecasting will be extremely
difficult. Thus forecasting failures would certainly be consistent with
a drop in signal values. There is some recent evidence of a
deterioration in the forecasting accuracy of central banks and their
private sector counterparts. (53)
At the Federal Reserve, the staff of the Board of Governors
prepares a detailed forecast before each scheduled meeting of the
Federal Open Market Committee (FOMC). The purpose of this forecast,
known as the Greenbook, is to serve as background to the deliberations
in the FOMC, and the views expressed are those of the staff rather than
individual committee members. Sims provides a detailed description of
the process by which the Greenbook forecasts are arrived at. (54) The
forecasts are posted on the website of the Federal Reserve Bank of
Philadelphia, except for the most recent five-year window, which remains
confidential. Peter Tulip's study therefore uses the publicly
disclosed data that include forecasts of outcomes up to the end of 2001.
(55)
Tulip finds that, even as output fluctuations have moderated
substantially in recent years, (56) forecast errors have not. The fact that
policy responses are endogenous and that output fluctuations have
moderated both make forecasting more difficult. Nevertheless, it is
notable from Tulip's study that the performance of longer-term
forecasts for output (up to eight quarters) has not been encouraging.
Since the late 1980s, mean squared prediction errors have been similar
to, and sometimes greater than, the variance of the outcome itself. In other words, the
simple sample mean (the most naive forecast) has proved a more accurate
guide to GDP growth than the actual forecasts, which is to say that the
forecasts have had negative predictive value.
One way to picture this is to consider a regression of the two-year
change in GDP on a constant and the corresponding forecast. This
regression has a negative coefficient when estimated on the last ten
years of the sample (1992 to 2001). C. Goodhart reports findings for the
Bank of England's forecast performance, which demonstrate a similar
lack of predictive power for longer-term forecasts, but Goodhart points
out that, when the central bank acts on the forecasts themselves, the
lack of correlation between the initial forecast and the final outcome
should be expected. (57) Sean Campbell reports results from private
sector forecasts and finds that short-term forecasts of the Survey of
Professional Forecasters have a negative R-squared over the period
1984-2003, echoing Tulip's result for the Federal Reserve. (58)
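The link between mean squared prediction errors that exceed the variance of the outcome and a negative R-squared can be seen in a toy calculation (a Python sketch; the data are invented for illustration and bear no relation to the Greenbook series).

```python
# Toy illustration of "negative predictive value": when a forecast's mean
# squared error exceeds the outcome's variance, the out-of-sample R-squared
# (1 - MSE/variance) is negative, and the naive sample-mean forecast wins.
# The series below are made up for illustration.

outcomes  = [2.1, 3.4, 1.8, 2.9, 3.7, 1.5, 2.6, 3.1]
forecasts = [3.5, 1.9, 3.2, 2.0, 1.6, 3.6, 2.2, 1.8]   # badly off the mark

n = len(outcomes)
mean = sum(outcomes) / n
variance = sum((y - mean) ** 2 for y in outcomes) / n   # MSE of the sample-mean forecast
mse = sum((y - f) ** 2 for y, f in zip(outcomes, forecasts)) / n

r_squared = 1 - mse / variance
assert mse > variance    # the forecast is worse than the sample mean...
assert r_squared < 0     # ...hence a negative R-squared
```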
If the central bank does not recognize that signal values are
impaired in a world of managed expectations, it may be lulled into a
false sense of security when in fact imbalances are building in the
economy. Even though consumer goods prices may be stable and the flows
in the IS curve are well behaved, asset prices may be buoyed by
excessively lax credit conditions, building up problems for the future
despite no obvious signs of trouble.
If inflation targeting is practiced flexibly, the output costs of
financial distress could figure in the overall calculations. Less easy
to overcome would be the political economy hurdles facing a central
bank's monetary policy committee whose mandate is interpreted
narrowly as inflation and output stabilization over a relatively short
horizon. The key issue is whether a monetary policy committee that
suspects that imbalances are building up under the radar feels that it
can justify departing from the inflation target over the targeting
horizon in order to forestall larger problems over a longer horizon.
Australia provides one recent instance where a central bank has
acted to lean against the wind, raising interest rates and then keeping
them high in the face of an overheating residential property market,
even though consumer prices and output were well behaved. The Reserve
Bank of Australia (RBA) came under considerable criticism for acting
beyond its mandate: critics claimed that it was looking "beyond its
horizon" of two years in targeting inflation. By taking such
actions, the RBA was undoubtedly risking its own reputation, since the
politically more expedient path would have been to stick to a more
narrow interpretation of its mandate. In the event, the RBA's
preemptive actions proved well advised, and its reputation has been
enhanced. Thus, in practice, central bankers have adapted well to the
new inflation targeting regime, and debates are frequently conducted in
broader terms.
It is beyond the scope of this paper to broach the larger topic of
central bank accountability. Transparency in this broader sense is
crucial for establishing and maintaining the political legitimacy of the
central bank as a public institution. (59) But there is a potential
paradox of transparency. One of the inevitable fruits of the success in
influencing beliefs is that the central bank has to rely on less
informative signals to guide its decisions. If policymakers are to
consolidate the successes achieved to date, they will have to turn their
attention to how monetary policy should be conducted in an era when
prices are less informative.
We thank Benjamin Friedman and Christopher Sims for their comments
and guidance. We also thank Marios Angeletos, Alan Blinder, Christian
Hellwig, Anil Kashyap, Don Kohn, Chris Kent, Phil Lowe, Ellen Meade,
Lars Svensson, and T. N. Srinivasan for comments at various stages of
the project.
(1.) Blinder (1998, p. 70).
(2.) Svensson (2004, p. 1); Woodford (2005, p. 3).
(3.) Bernanke (2004b).
(4.) Woodford (2005, p. 2); see also Blinder (1998); Woodford
(2003b, chapter 7); Svensson and Woodford (2005).
(5.) See Kuttner (2004) for an overview of the various ways in
which inflation targeting has been implemented. Early contributions
include Leiderman and Svensson (1995), Bernanke and Mishkin (1997), and
Bernanke and others (2001).
(6.) Kohn (2005).
(7.) Bernanke (2004a).
(8.) Fisher (1930).
(9.) Polk and Sapienza (2004); Chen, Goldstein, and Jiang (2005).
(10.) Fleming and Remolona (1999).
(11.) Roley (1983), Cornell (1983), and Roley and Walsh (1985) have
documented the heightened reaction to money stock announcements in the
early 1980s.
(12.) Roley (1983, p. 344).
(13.) Brayton, Roberts, and Williams (1999).
(14.) Bank of England (2005).
(15.) See Geoffrey Dicks, "Bank of England Needs to Re-examine
Its Forecasts," Financial Times, August 10, 2005.
(16.) Sims (2003). See also Mackowiak and Wiederholt (2005), who
show how Sims' (2003) framework can be used to explain the
simultaneous occurrence of persistent average prices and large price
shifts in some subsets of goods.
(17.) Kuttner (2004); Ball and Sheridan (2003).
(18.) Mankiw, Reis, and Wolfers (2004).
(19.) Levin, Natalucci, and Piger (2004).
(20.) Gurkaynak, Levin, and Swanson (2005).
(21.) Gurkaynak, Levin, and Swanson (2005); this result is
elaborated in Gurkaynak, Sack, and Swanson (2005).
(22.) Hayek (1945).
(23.) Hayek (1945, pp. 519-20).
(24.) Lange (1936, 1937).
(25.) Grossman (1976); Radner (1979).
(26.) Romer and Romer (2000).
(27.) Sims (2002).
(28.) Hayek (1945, p. 524).
(29.) Morris and Shin (2002).
(30.) Keynes (1936).
(32.) Lucas (1972, 1973); Phelps (1970).
(33.) Phelps (1983).
(34.) Woodford (2003a) has popularized this interpretation of the
beauty contest rule.
(35.) Gali and Gertler (1999).
(36.) Woodford (2003a).
(37.) Woodford (2003a). Townsend (1983) discusses similar linear
rules but in the context of investment.
(38.) See Adam (2003), Amato and Shin (forthcoming), Hellwig (2002,
2004), and Ui (2003). See also Kasa (2000) and Pearlman and Sargent
(2005). The latter show how the problem can sometimes be reduced to the
case with common knowledge. Similar issues arise in the context of asset
pricing: see Allen, Morris, and Shin (forthcoming) and Bacchella and Van
Wincoop (2003, 2004).
(39.) Allen, Morris, and Shin (forthcoming).
(40.) The value r = 0.85 is implied by [xi] = 0.15 in the imperfect
competition interpretation of the beauty contest rule. See Woodford
(2003b) for a discussion of the magnitude of [xi].
(41.) Morris and Shin (2002).
(42.) Woodford (2005, appendix A).
(43.) Angeletos and Pavan (2004).
(44.) Hellwig (2004).
(45.) More accurately, it is the power mean with a negative power.
See, for instance, mathworld.wolfram.com/HarmonicMean.html.
(46.) Morris and Shin (2002); the online appendix is at
www.e-aer.org/data/dec02_app_morris.pdf.
(47.) Angeletos and Pavan (2005).
(48.) Svensson (forthcoming).
(49.) Romer and Romer (2000).
(50.) Morris, Shin, and Tong (forthcoming).
(51.) www.bankofengland.co.uk/publications/minutes/mpc/pdf/2005/index.htm, p. 9.
(52.) A video of the press conference is available at the Bank of
England website, www.bankofengland.co.uk.
(53.) The evidence is documented in Schuh (2001) and Campbell
(2004) for private sector forecasters, and in Goodhart (2004) and Tulip
(2005) for the Bank of England and the Federal Reserve, respectively.
(54.) Sims (2002).
(55.) Tulip (2005).
(56.) The moderation in output fluctuations has been documented by
McConnell and Perez-Quiros (2000), Kim and Nelson (1999), and Blanchard
and Simon (2001).
(57.) Goodhart (2004).
(58.) Campbell (2004).
(59.) Geraats (2002) gives a taxonomy of the different types of
transparency.

Australia: quarterly change in the consumption deflator
(percent a year, annualized)

2002:1    3.68
2002:2    1.22
2002:3    2.43
2002:4    1.61
2003:1    4.79
2003:2   -1.19
2003:3    0.80
2003:4    1.59
2004:1    3.55
2004:2    1.17
2004:3    1.17
2004:4    2.33
2005:1    2.70

STEPHEN MORRIS
Princeton University

HYUN SONG SHIN
London School of Economics