Optimal monetary policy.
Blake, Andrew P.; Weale, Martin; Young, Garry
1. Introduction
The new monetary arrangements have created a system where the Bank of
England sets interest rates in order to deliver an inflation target. Mr
King, the Deputy Governor, is reported(1) to have said that eventually
he hopes markets will be surprised not by the Monetary Policy
Committee's actions but by the data. If their response to the data
is known then the outcome of their meetings will be more or less
predictable.
Before one can address whether a monetary policy of this sort could
work better or worse than 'judgement' it is necessary to
decide what type of structure it should have. In this article we set out
a possible structure for a predictable monetary policy and investigate
how it might be conducted in an uncertain environment. The National
Institute's Model of the domestic economy, NIDEM, provides the tool
for our study.
Our basic framework is to assume that monetary policy is set by means
of a simple rule. A wide range of studies suggests that rules of this
type can do a reasonably good job of controlling inflation. However, the
use of simple rules always raises the question of whether one might do
better in responding to any particular shock, or rather how much better
one might do by choosing an optimal response to an identified shock.
Thus we look at a policy structure which is represented by
optimal deviations from a simple rule.
Such a policy has obvious advantages on the one hand for the Monetary
Policy Committee and on the other hand for the Treasury Select Committee
which has the job of holding the Monetary Policy Committee to account.
The simple rule would define what normally happens and the Monetary
Policy Committee would explain its policy not as a justification for a
rise or fall in interest rates but as a deviation from the outcome
dictated by the simple rule. Even if the markets were not fully
apprised of the optimization exercises undertaken by the Committee,
policy would immediately become much more transparent.
The article is set out as follows. Firstly, we describe the modelling
of monetary policy in macromodels together with the corresponding
problem faced in the real world by a monetary authority. Secondly, we
briefly outline a simple model and how it fits into the policy
frameworks proposed by various authors. Thirdly, we describe our
approach and the advantages it offers both as a modelling solution and
as a guide to real world
policymakers. Fourthly, we describe the stochastic simulations and
control outcomes.
2. Modelling Monetary Policy
It is generally agreed that models need a well defined set of
financial policies in order to function as forecasting and analysis
tools. Using constant interest rates and tax rates makes no sense
because any well-specified model will be unstable with such policies
(Weale et al, 1989). However, it is often difficult to decide on the
appropriate form of policy rules. This is partly because the objectives
of policy are seldom disclosed in sufficient detail to enable modellers
to implement a representative policy in their models. With the UK's
entry into the exchange rate mechanism the policy framework was easy to
model. Its abandonment for an explicit inflation targeting regime
presented a new set of problems.
Whilst the target is well defined (the retail price index excluding
mortgage costs), as is the instrument used to control it (short-term
interest rates), how the instrument should be moved in response to
deviations from the target is not. Even how the policy framework should
be judged
to be successful used to be rather vague - although a commitment to
obtaining an inflation rate of 2 1/2 per cent or less by the end of the
1992-97 parliament was met. Since the election of a new government, a
number of changes have been implemented, not least the granting of
operational independence to the Bank of England. However, the target
rate is still set by central government. Additionally, a narrow range
of 1 per cent either side of the target is now judged to be acceptable.
Operationally it seems that an assessment of inflation prospects over
the next two years is used to decide whether interest rates should be
changed, with the magnitude and timing of changes left to the Monetary
Policy Committee at the Bank.
Macromodellers are faced with the task of reproducing such a policy
regime, firstly to make forecasts (including of the inflation rate
itself) and secondly to analyse how effective monetary policy can be. In
many ways this is a problem identical to that faced by the Monetary
Policy Committee. New information about the state of the economy has to
be acted upon in some, presumably systematic, way.
Modellers have approached this in two basic frameworks. Firstly, they
have advocated simple policy rules that have a given structure guided by
both economic and control theory. There is a huge variety of approaches
to this. The argument is typically that there should be a rule which
feeds back on deviations of the inflation rate from target, perhaps
with additional indicators or indeed targets of policy. Such policy
rules may
have several coefficients, which are sometimes chosen by reference to
historical experience (Taylor, 1996), or by optimisation where the
discounted value of expected inflation deviations is minimised by choice
of coefficients given the state of the economy (Westaway, 1986), or
heuristically by designing rules 'by hand' that seem to
perform well in a variety of circumstances (Blake, 1997). The second
framework is to use optimisation not just to choose the coefficients of
a rule but to set the entire trajectory for interest rates to minimise
the representative cost function. Church et al (1996) compare the various
approaches for recent vintages of the major macromodels.
This closely follows the rules versus discretion debate, with simple
rules associated with the former and optimal control the latter. The
modern twist on this is, of course, that an enforced rule can improve on
the discretionary outcome because time inconsistency renders the
discretionary outcome highly suboptimal. In what follows we rather
sidestep the time inconsistency debate, not because we feel it is an
unimportant one, but rather because we see it as an empirical issue, to
whose evaluation this article contributes. Indeed, we aim to have the
rule as a reference point and the discretion to
depart from it. We do need to assume that the policy regime is credible
for our analysis, and see this as an important extension for future
investigation.
3. Inflation Targeting Using Simple and Optimal Rules
It is easiest to outline the various approaches that might be used to
model inflation targeting using a simple model. In doing so it is best
to adopt a model where the optimal policy can be calculated
analytically. This is usually a function of both the model and the
objective function. One such combination has recently been extensively
analysed. Svensson (1997) suggested that the proper target of monetary
policy is expected rather than actual inflation. This is so whenever
there is a lag in the operation of monetary policy. But in this case
exact deterministic control of the inflation target is feasible after a
fixed horizon. The optimal control then reduces to per-period targeting
of the h-period-ahead inflation rate, where h is the response lag.
A simple two equation model that gives this property is
π_{t+1} = π_t + αz_{t+1} - βr_t
z_{t+1} = Kz_t
where π is the inflation rate, r the interest rate and z an
arbitrary state variable, perhaps aggregate demand. This is a simplified
version of Svensson's model.
The second part of the problem is to model the objectives of the
monetary authority. An objective function that might represent the
central bank's preferences is
E Σ_{t=1}^∞ δ^t (π_t - π*_t)².
The central bank has a target for the inflation rate in each period
π*_t (which may be a constant) and
penalises deviations from it quadratically. This is the only object of
policy.
As there are no 'instrument costs' and there is a transmission lag
from policy to the final target of policy, the problem reduces to a
series of static problems where the optimal policy is to achieve the
future inflation rate exactly. Substituting the model into the objective
function gives an unconstrained optimisation problem of the form
min_{r} Σ_{t=0}^∞ δ^{t+1} (π_t + αKz_t - βr_t - π*_{t+1})².
The first order conditions yield
π_t + αKz_t - βr_t = π*_{t+1}
for each value of t ≥ 0, which gives
r_t = (π_t + αKz_t - π*_{t+1})/β
as the optimal policy. π_0 cannot be affected by the
instrument so is irrelevant to the optimisation.
If z were to be stochastic we could modify the equation so that
z_{t+1} = Kz_t + ε_{t+1}
where ε is not observable when interest rates are set. This
means that the forecast inflation rate still appears in the optimal
rule, but is not realised. As before, controlling the forecast inflation
rate is not the same as feeding back on it, as indicated by the optimal
rule which includes only currently dated variables.(2)
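To fix ideas, here is a minimal numerical sketch of the stochastic
version of this model under the optimal rule. It assumes the
reconstructed equations above; the parameter values, target and shock
scale are illustrative choices of ours rather than anything from the
article. Expected inflation is always on target, while realised
inflation misses only by the unforecastable component αε.

import numpy as np

# Two-equation model: pi_{t+1} = pi_t + alpha*z_{t+1} - beta*r_t,
# z_{t+1} = K*z_t + eps_{t+1}. Parameters are illustrative.
rng = np.random.default_rng(0)
alpha, beta, K = 0.4, 0.5, 0.8
pi_star = 2.5                       # constant inflation target
pi, z = 4.0, 1.0                    # initial inflation and state

for t in range(12):
    # Optimal rule: set r_t so the one-step-ahead inflation
    # *forecast* equals the target exactly.
    r = (pi + alpha * K * z - pi_star) / beta
    eps = rng.normal(scale=0.2)     # unobservable when r_t is set
    z = K * z + eps
    pi = pi + alpha * z - beta * r  # realised inflation
    print(f"t={t+1:2d}  pi={pi:5.2f}  miss={pi - pi_star:+.3f}")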
Svensson is essentially saying that for this type of model with
transmission lags one should adopt a target that one can achieve and be
seen to be achieving.(3) This is in complete contrast to the alternative
of a fixed simple rule for interest rates, even a well designed one.
For, say, a complex model without a closed form for the optimal
interest rate rule, a given (suboptimal) rule may have the drawback
that there is considerable incentive to depart from it when another
policy is better. In essence Svensson is saying that
operationally policymakers do whatever is required to achieve the given
target.
The other side of the debate can be epitomised by Taylor (1996) who
proposes a specific rule. Such a rule in the context of the above model
might be
r_t = μ(π_{t-1} - π*_{t-1})
where μ is chosen to bring the inflation rate close to the desired
rate over time. The dating we have chosen for the feedback indicates
that the optimal policy is not available to the monetary authority. In
general the choice of parameter can be approached using optimisation by
minimising a loss function such as the one above, although Taylor
favours using historical experience, and others choose parameters
'which work'.
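As an illustration of the optimisation route, the following sketch
chooses μ for a rule of the above form by grid search over a discounted
quadratic loss, reusing the illustrative model and parameters from the
earlier sketch; none of the numbers come from the article.

import numpy as np

# Choose mu in r_t = mu*(pi_{t-1} - pi_star) to minimise a discounted
# quadratic inflation loss. Model and parameters are the illustrative
# ones used above, in a deterministic setting.
alpha, beta, K, delta, pi_star = 0.4, 0.5, 0.8, 0.97, 2.5

def loss(mu, T=80):
    pi, pi_lag, z = 4.0, 4.0, 1.0
    total = 0.0
    for t in range(T):
        r = mu * (pi_lag - pi_star)   # feedback on lagged inflation
        z = K * z
        pi_lag, pi = pi, pi + alpha * z - beta * r
        total += delta ** (t + 1) * (pi - pi_star) ** 2
    return total

grid = np.linspace(0.0, 3.0, 301)
mu_best = grid[np.argmin([loss(mu) for mu in grid])]
print(f"best mu on grid: {mu_best:.2f}, loss: {loss(mu_best):.3f}")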
The argument about which form of policy should be used seems clear
cut. Svensson argues that optimality is attainable and therefore can be
achieved by using the optimal policy. This places him firmly in the
discretion camp. The Taylor approach is in the rules camp: in complex
models a simple and transparent approach that yields a sub-optimal but
very good outcome is preferable, because following a mechanical rule
reduces uncertainty about likely actions by the monetary authority,
whereas using optimality criteria makes it much less clear what the
authorities will do in the future or in response to shocks.
Svensson's comeback is that there will come a time when it is
imperative to depart from the simple rule because the outcome otherwise
will be disastrous.
This suggests a synthesis. If we use a (perhaps Taylor-type) rule as
our basis for monetary policy we can then consider optimal divergences
from that policy, where shocks make it imperative to react in the short
run to disturbances. In what follows we outline a framework which allows
us to derive the optimal deviation from a rule in the face of an
identified shock.
4. Our Approach
The National Institute's model of the UK economy is usually
operated with feedback rules for several reasons, some of them merely
practical. We will cover why later in this section, but if the model
operates with simple rules we can first ask whether simple rules are
good enough and, if they are, whether it is worth doing anything else.
We will argue below that there are frequently times when
departing from an announced policy rule is required.
The additional step required to make our approach feasible is to
modify the way that we operate monetary policy by having two elements
to setting interest rates. We can apply optimal control to see whether
one could do better than a default policy rule, and it does the process
no harm to leave that rule in place - in fact, as we argue below, the
modeller is forced to use such a policy rule.(4)
Our modelling approach is the following. We 'mix up' simple
and optimal rules in the sense that we wish to look at the effect of
optimisation relative to the simple rule. The simple rule is one which
we use routinely in forecasting. The main additional innovation is that
we implement optimal control by considering optimal deviations from the
simple rule over the two years following a new shock. This has two
major benefits. Firstly, for a variety of reasons, it reduces the major
costs of optimal control. The next subsection explains how. Secondly, it
allows us to model more closely the actual policy process.
4.1 Reducing the Costs of Optimal Control, Guaranteeing Model
Solution and Modelling the Decision Process
We adopt the following split of the monetary instrument
r_t = r_t^R + r_t^O
where we will set the two parts of the 'new' instrument in separate
ways. Firstly, we will set the part r_t^R by a given policy rule of the
form
Γ(L)r_t^R = β(L)(π_t - π*_t)
where Γ(L) and β(L) are polynomials in the lag operator.
This simple policy rule will usually have proportional, integral and
derivative components and perhaps also indicator variables which
anticipate future inflation. The precise rule used in the stochastic
control exercise is given below. Secondly, r_t^O will be set by
explicit reference to an objective function, similar to the one above.
This split serves three purposes:
* A rule may be required to give determinacy to a model which
otherwise would not solve 'open loop'. In these circumstances
an optimal control policy can be implemented open loop using
derivatives that could not otherwise have been obtained.
* At the very least a sensible rule puts you closer to the final
optimum, so less work has to be done in optimisation. This is
particularly important in a stochastic exercise where repeated
re-optimisation is required.
* Given that the 'predetermined' part of policy does put
the model near its optimum, the rule can be used to provide a
'terminal condition' for the optimal policy that reduces the
dimensionality of the problem significantly.
We discuss each of these in turn.
4.1.1 Ensuring determinacy
Price level indeterminacy (and indeed expectational indeterminacy in
exchange rates) plagues models with fixed nominal interest rates. To use
the interest rate as the monetary instrument means a resolution of this
problem needs to be found. Splitting the instrument vector as described
will provide an effective means of so doing.
Even without additional optimal control the model will then have
properly articulated monetary policy which resolves the problems
described. This is a simple model solution problem rather than anything
else. It does not reduce the computational burden but rather makes
computation possible. The National Institute's model is solved with
a simple rule in place to enable us to use it properly.
As we are forced to use 'open loop' control methods it
follows that without a determinate model the required derivatives would
be simply unobtainable. This is because in effect we linearise the model
about a reference trajectory by simulation.
4.1.2 Reducing computational costs by a simple rule
If the policy problem can be solved by just the simple rule, the
optimisation procedure will show just that. If the rule puts the
instruments close to their final optimum without the need to move the
solution this is likely to speed computation. This requires the rule to
be well specified and to involve the optimal control in 'fine
tuning' the policy represented by the rule. For complex objectives
this is unlikely to be the case, but for simple objectives, such as
inflation targeting, a simple rule does a considerable amount of the
work required. Where objectives are complex, a simple rule to ensure
determinacy alone may be the best alternative.
The rule we use for interest rates is remarkably effective at
targeting inflation. Blake (1997) shows how effective a simple rule can
be in changing the inflation rate. Note that optimising a simple rule is
much harder than the optimal control approach we suggest as the problem
becomes highly nonlinear in the coefficients of the policy rule. It also
produces a rule which is a compromise between short and long run
considerations, not one that can be used to react quickly to identified
disturbances.
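A sketch of what such a rule might look like, with hypothetical
proportional, integral and derivative coefficients; the actual
coefficients of the rule used on the Institute's model are not
reproduced here.

# A simple rule with proportional, integral and derivative components,
# feeding back on inflation deviations from target. The coefficients
# and base rate are hypothetical.
def simple_rule(pi_history, pi_star, kp=0.8, ki=0.1, kd=0.3, r_base=6.0):
    dev = [p - pi_star for p in pi_history]
    proportional = dev[-1]
    integral = sum(dev)
    derivative = dev[-1] - dev[-2] if len(dev) > 1 else 0.0
    return r_base + kp * proportional + ki * integral + kd * derivative

# Example: inflation persistently above a 2.5 per cent target.
print(simple_rule([3.4, 3.2, 3.1], 2.5))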
4.1.3 Reducing the dimension of the control problem
We believe that a powerful advantage offered by this framework comes
from recognising that policy rules in conjunction with achievable
objectives can be used to take care of the long run of the problem,
meaning that
only the immediate first few periods need to have instruments set by the
control methods. This reduces the dimensionality of the problem
considerably without compromising the optimal control very much. In
fact, it may be an improvement on the standard procedure of attaching a
'tail' to the optimisation which fixes the instruments at the
end to some value, possibly chosen endogenously. Blake and Westaway
(1995) demonstrated that this was often unsatisfactory with rational
expectations models.
A simple rule that does most of the long run control can leave the
optimal control to deal with the short run, perhaps the first two or
three years. In our example, we use a two year horizon, in common with
the stated aim of the monetary authorities. The rest of the simulation
is dealt with by the simple rule alone. This is a huge reduction in
computing cost relative to a full optimal control problem where, say,
twenty years of interest rates are manipulated. Whilst such
problems are amenable to solution they increase the number of
derivatives required from eight to eighty. This tenfold increase is
pretty much reflected in the computing time required.
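A sketch of the truncated problem, assuming the illustrative model and
rule from the earlier sketches: only the first eight quarterly
deviations are free parameters, the rule alone governs the remaining
horizon, and a standard numerical optimiser supplies the derivatives.

import numpy as np
from scipy.optimize import minimize

# Truncated control: eight free quarterly deviations on top of the
# simple rule; the rule handles the rest of a twenty-year horizon.
# Model, rule and parameters are the illustrative ones used earlier.
alpha, beta, K, delta, lam = 0.4, 0.5, 0.8, 0.97, 0.1
pi_star, mu = 2.5, 1.0

def loss(active_devs, T=80):
    devs = np.concatenate([active_devs, np.zeros(T - len(active_devs))])
    pi, pi_lag, z = 4.0, 4.0, 1.0
    total = 0.0
    for t in range(T):
        r = mu * (pi_lag - pi_star) + devs[t]  # rule plus deviation
        z = K * z
        pi_lag, pi = pi, pi + alpha * z - beta * r
        total += delta ** (t + 1) * ((pi - pi_star) ** 2
                                     + lam * devs[t] ** 2)
    return total

res = minimize(loss, np.zeros(8))   # eight instruments, not eighty
print(f"rule only: {loss(np.zeros(8)):.3f}  with deviations: {res.fun:.3f}")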
4.2 The Role of Time Inconsistency
An important consideration with optimal control of the sort that we
necessarily use is that optimising policy with forward expectations is
usually time inconsistent. In a deterministic simulation this raises no
problems in so far as the policy is inconsistent (and therefore may not
be sustainable) but at least one can distinguish ex post whether one has
reneged or not on announced policies. In a stochastic context, this is
no longer true.
For a stochastic control problem solved open loop there is no policy
rule to adhere to and therefore no default. A policymaker can only
re-optimise, and change the path of the instruments. This is partly in
response to the incoming shock, and partly a result of time
inconsistency. This is not an issue that can be ducked, and it is
important to adopt policies which are 'not very' time
inconsistent.
The obvious way to do this is to implement time consistent policies.
This has two difficulties. Firstly, time consistent policies are not
uniquely defined. Hall (1986) and Fisher (1992) used one simple solution
concept. Blake and Westaway (1995) proposed two alternative algorithms
but the computational requirements are very high and neither has yet
been implemented. Note that our proposed policy regime deviates from the
principle of optimality both by violating the time consistency
constraint and by eschewing the full dynamic control solution by
truncating the time horizon artificially.
This reinforces the point that the other way to ensure time
consistency is to solve a problem that cannot be bettered by reneging
and re-optimising. Targeting future inflation - and achieving it - is
precisely one such mechanism. Although this runs the risk of relocating
the source of time inconsistency to the institution that sets the target
it is certainly in keeping with the UK's current policy regime.
5. Optimal Stochastic Control on NIDEM
The control exercise we consider is intended to show the benefits of
departing from the simple rule in a variety of circumstances. Rather
than demonstrate a single, deterministic, optimal control problem which
would involve reporting a welfare loss improvement for a very artificial
exercise, the stochastic aspects seem to us to be the most important
part of the problem faced by the monetary authorities. A full stochastic
optimal control exercise, even with the computational gains that we have
made by simplifying the problem in the way outlined above, is still an
extraordinarily demanding exercise for a model such as ours and in many
ways not as illuminating as we might hope.
Our approach is to break up the simulation exercise into three parts
where we identify different sources of shocks. These are quantity, price
and foreign shocks separately. This is partly to mimic the problem
facing the monetary authorities, part of whose task is to decide
whether there has been a change in conditions and, if so, what is the
major source of any new disturbance. For our model we might
reclassify the quantity shocks as demand shocks and the price shocks as
supply ones. This is due to the particular forms of adjustment in the
model, where price is adjusted by mark-up formulae as a supply response,
although obviously there are elements of demand and supply in both
quantity and price shocks. The foreign shocks are easier to classify:
they apply to variables that are exogenous to most of the rest of the
model and in theory determined abroad, although some behavioural
equations, for investment abroad and so on, must at least partly depend
on domestic decisions.(5)
This constitutes about a hundred equations of the model, roughly a
third for each set of shocks. They also account for two
thirds of the model's behavioural equations, with identities, pure
output variables and expectational variables numbering well over two
hundred of the total four hundred. Some behavioural equations have been
omitted because they are policy variables (we do, after all, assume
policymakers are acting rationally!), because they did not fit into any
of our chosen categories, or because they fit into too many. The chosen
sets do, however, contain the most important equations of the model in
the separate categories. For quantity shocks we include consumption,
for price shocks the major domestic price indices, and for foreign
shocks the exchange rate.
We then shock each of the equations for these variables by a vector
of residuals taken from historical experience(6) for each of the
variables in each of the three cases. The historical shocks were taken
from the relatively recent past, from 1984 onwards. This is partly to
address the view that the variance-covariance of shocks to the economy
has changed markedly, certainly since the 1970s. This should not impact
on our exercise if the model we use is a genuinely structural
representation of agents' decisions, but it may be that our model
better describes more recent behaviour as that accounts for the greater
part of the modelling effort.
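The mechanics of the bootstrap can be sketched as follows. The residual
matrix here is random stand-in data, since the actual NIDEM residuals
are not reproduced, but the draw preserves the cross-equation
covariance of the shocks by sampling whole quarters.

import numpy as np

# Bootstrap draws: pick a historical quarter at random and apply its
# complete cross-equation residual vector as the shock. The residual
# matrix below is random stand-in data, not NIDEM residuals.
rng = np.random.default_rng(1)
n_quarters, n_equations = 52, 30   # e.g. 1984Q1 onwards, one shock set
residuals = rng.normal(size=(n_quarters, n_equations))

def draw_shock(residuals, rng):
    # Sampling a whole row keeps the contemporaneous covariance of
    # the residuals across equations intact.
    return residuals[rng.integers(len(residuals))]

replications = np.array([draw_shock(residuals, rng) for _ in range(100)])
print(replications.shape)          # 100 first-period shock vectors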
5.1 Simulation Results
The stochastic exercise is set to mimic the problem faced by a
monetary authority by shocking the first period and setting policy to
try to improve on that outcome. We can demonstrate the impact of the
different shocks by considering the forecast error variance. With our
default monetary policy rule in place this is akin to a stochastic
forecast. The forecast base we have used was produced in October
1997.(7) It is important to stress that this is not the model prediction
with 'fixed' monetary policy in the sense that interest rates
are constant, but fixed in that it uses a feedback policy rule of
[Mathematical Expression Omitted].
This is the default forecasting rule and is similar to the rule used
by Blake (1997) in a stochastic simulation exercise.(8)
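The forecast error variances of the next paragraph can be thought of as
the across-replication variance of the inflation path under the rule. A
sketch of that calculation, using the illustrative model and rule from
the earlier sketches (not NIDEM) with a scalar first-period shock:

import numpy as np

# Forecast error variance of inflation under the simple rule: simulate
# many replications, each with one drawn first-period shock, and take
# the variance of deviations from the no-shock base at each horizon.
rng = np.random.default_rng(2)
alpha, beta, K, mu, pi_star = 0.4, 0.5, 0.8, 1.0, 2.5

def simulate(shock, T=12):
    pi, pi_lag, z = 4.0, 4.0, 1.0 + shock   # shock hits period one
    path = []
    for _ in range(T):
        r = mu * (pi_lag - pi_star)
        z = K * z
        pi_lag, pi = pi, pi + alpha * z - beta * r
        path.append(pi)
    return np.array(path)

base = simulate(0.0)
reps = np.array([simulate(rng.normal(scale=0.5)) for _ in range(200)])
print(np.var(reps - base, axis=0))  # variance at each forecast horizon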
In Chart 1 we plot the forecast variances of inflation for the first
three years. The timing of responses clearly depends on the source of
the shock. The most obvious feature is that domestic price shocks impact
immediately, which is to be expected given that they constitute
components of the final target. Foreign shocks, which include the
exchange rate and foreign prices, take longer, and peak after four
quarters. This is perhaps surprisingly slow given that the exchange rate
plays such a key role in determining the domestic price level. Quantity
shocks peak a quarter later than foreign shocks, and also take longer
to die away.
Table 1. Welfare Losses
                               Costs
             Simple     Optimal     % reduction
Base          11.34       10.06         11.3
Quantity      12.64       10.24         19.0
Price         22.78       17.19         25.6
Foreign       15.04       12.40         17.5
Total         17.19       13.50         21.5
Note that these are all contemporaneous shocks, observed by all
agents; it is merely their source that differs. Their varying time
profile reflects that each impacts on different parts of the model and
then afterwards affects the inflation rate through the rest of the
model.
Welfare losses are calculated on the basis of the simple loss
function
Σ_t δ^t [(π_t - π*_t)² + λ(r_t^O)²]
where the initial period is normalised to the fourth quarter of 1997.
The summation can run over all periods as the included instrument costs
represent a cost of 'using discretion' relative to the rule.
These are zero when there is no additional control and beyond two years
ahead in the case where we use eight periods of active control.
from Chart 1 we plot the welfare losses for each case in the first
column of Table 1. Note that there is no element of optimisation here,
the welfare losses are just the objective function evaluated for each
realisation. This includes in the first row the cost associated with the
forecast base itself.
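Evaluating the loss for one realisation is then mechanical. A sketch,
with illustrative λ, discount factor and stand-in paths, assuming the
loss function as reconstructed above; the instrument-cost term applies
only to the eight quarters of active control.

import numpy as np

# Welfare loss for one realisation: squared inflation deviations over
# the whole path plus instrument costs on the optimal deviations,
# which are zero beyond eight quarters. Numbers are illustrative.
delta, lam, pi_star = 0.97, 0.1, 2.5

def welfare_loss(pi_path, active_devs, horizon=8):
    devs = np.zeros(len(pi_path))
    devs[:horizon] = active_devs[:horizon]  # two years of discretion
    t = np.arange(1, len(pi_path) + 1)
    return np.sum(delta ** t * ((pi_path - pi_star) ** 2
                                + lam * devs ** 2))

pi_path = np.full(40, 3.0)                  # stand-in inflation path
print(welfare_loss(pi_path, np.full(8, 0.25)))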
The average difference from target of the inflation rate will, of
course, depend on how far it is away when the simulation starts. This is
reflected in the cost of the base case, a case itself which is open to
optimisation. Note that it is perfectly possible that a shock improves
the outcome relative to the case when there is none. Indeed, for the
quantity shocks alone, in about a quarter of the cases the unoptimised
cost is actually reduced by the shock.
The forecast with the simple rule in place necessarily depends on
that simple rule. The forecast variances, of course, depend on the rule,
but it is important to recognise that the deviation from target at a
fixed future date gives you relatively little information about how to
react with monetary policy. The price shocks die away quite quickly, but
as we shall see a considerable amount can be done in the short run to
reduce inflation.
The control exercise is then to take each replication and use two
years of additional active interest rate movements on top of the simple
rule to minimise the above loss function. As a preamble in Chart 2 we
show the simple deterministic control, essentially optimising the
forecast. This leads to a reasonable 11 per cent reduction in cost.(9)
Note that the base forecast is for inflation to be above target and then
to fall below it for some time. Part of the control problem for this
'realisation' is to get the inflation rate up over the
intermediate future.
We could, of course, do this by optimising the parameters of our
simple rule.(10) We could even do this in conjunction with our proposed
policy framework. This would clearly reduce the cost, in so far as the
outcome is not already the absolute minimum attainable by full optimal
control and the extra parameters provide additional degrees of freedom,
but probably not by much.
The initial reduction and subsequent increase in inflation required
in the base around which the stochastic control exercise is conducted
colour the optimisation exercises. The outcome will depend on the time
profile of inflation that the shocks induce, but very few of the shocks
actually change the basic requirement of the deviations from the simple
rule, even though the timing and pattern of the deviations are highly
dependent on the shock.
In Table 2 we present two illustrations of the real costs (or
otherwise) of the additional interest rate rises. The most obvious of
these is the average increase in the interest rate over the two years of
additional active control. This ranges from 0.74 per cent to over 0.9
per cent. The mark-up is not actually constant over the period, as we
show in the illustrative examples in the next subsection, but usually
the interest rate is markedly higher over this period, reflecting the
required initial deflation present in most of the simulations. A
perhaps surprising result concerns the average growth rates with and
without the optimal control. The averages are over five years, and give
an indication of the 'output forgone' in each policy regime. As
can be seen clearly, the benefit of the stabilised inflation rate is
marginally more output in the long term, although the estimates are not
significantly different. This, of course, provides further justification
for the additional policy activism.
Table 2. Interest-Rate Mark-Up and Cumulated Output Growth
(standard errors in parentheses)
                  Average growth             Average short-run
             Simple        Optimal           interest-rate mark-up
Quantity     2.22 (0.06)   2.23 (0.07)       0.740
Price        2.22 (0.03)   2.24 (0.02)       0.841
Foreign      2.15 (0.14)   2.16 (0.13)       0.911
Total        2.20 (0.09)   2.22 (0.08)       0.830
5.2 Illustrative Simulations
In Charts 3 to 5 we plot the forecast inflation rate for each of the
chosen sets of shocks before and after control together with the
corresponding interest rate profiles and growth forecasts for
representative shocks.
Chart 3 shows a quantity shock, and this is similar to the
deterministic case in many ways. The initial reduction in the inflation
rate is achieved with some loss of output growth with interest rates
peaking in the short run about 1/4 per cent over the simple rule.
Inflation gets back to base more quickly than with the simple rule and
stays on track. In Table 1 we show that an average 19 per cent reduction
in the welfare loss was achieved for all the replications. This
replication gives a 21 per cent reduction, about average.
In Chart 4, the illustrated price shock impacts much more quickly, as
expected. The impact on the inflation rate peaks nearly a year later at
almost 5 per cent and, although it declines quickly from then on, it
persists below base after that. An additional 3 per cent hike in
interest rates in the short run reduces the inflation rate by about 1
per cent from its peak, and the inflation rate is much closer to target
afterwards. The cost in the short run is a recession, but in the longer
run higher growth compensates. For price shocks in general an average 25
per cent reduction in the cost function was achieved although Chart 4
shows a replication with a 33 per cent gain.
Chart 5, the foreign shock, has quite an interesting profile. A
modest decrease in inflation is followed by too high an inflation rate
in the medium term. This must be a case which could be mitigated by a
longer period of active control past the two year horizon. This is
despite a considerable short run increase in interest rates, and
indicates how the persistent effects of the foreign shock are difficult
to eradicate. A small negative growth rate for one quarter is again
compensated for by later strong growth. Table 1 again shows a good
overall gain in welfare of 17 1/2 per cent, but less than for the other
shocks.(11) The illustrated shock is actually not completely
representative as it gives a 27 per cent gain, but the time profile is
similar to the others.
It is perhaps worthwhile considering the operation of the policy
framework. Firstly, the policymaker makes an inflation forecast based
on the monetary policy rule and consequent interest rate forecast, and
then
assesses how much the policy needs to be modified to take account of any
new information. Although the mechanism by which this is achieved in our
model is by having a split instrument, in terms of implementing the
policy it would clearly be preferable to announce the interest rate as
plotted in the Charts, as a modified policy relative to the no
optimisation case.
6. Conclusions
In this article we have tried to marry two approaches to policy
design in the very contemporary context of inflation targeting. This has
been for two reasons. Firstly, each on its own is rather inadequate. The
full optimal control approach gives too few clues as to the value of
revising policy in the light of new information and can appear much too
opaque to policymakers to be informative. Using simple rules can also be
seen as being unnecessarily restrictive, giving too little discretion
when obvious policy choices ought to be made. Secondly, we aim to
provide a framework that can feasibly be adopted by policymakers, that
focuses on the role of new information whilst emphasising the need for a
long run policy regime that delivers the target of policy properly.
That regime is one where a policymaker uses a simple rule, designed
to control the inflation rate using the base interest rate, and
supplements it by using two years of deviations from that rule, designed
using optimal control techniques. Using stochastic optimal control we
are able to show quite considerable gains to inflation control in this
way. As an additional point of interest we are also able to show that the
source of the shock makes a considerable difference to the time profile
of inflation responses.
Good policy advice cannot be model free although there is much to be
said for choosing policies which work well on a range of models. The
policy regime we have outlined is also time inconsistent in the way that
both optimal and simple rules are. An analysis of the value of
commitment needs to be added to this study to see how robust the
framework is. This is a promising avenue for future research, combining
as it does previous policy proposals with practical application to the
current regime of inflation targeting.
NOTES
(1) 'The Inflation Target Five Years On'. Lecture given by
Mr Mervyn King, Deputy Governor of the Bank of England at the London
School of Economics to mark the 10th Anniversary of the Financial
Markets Group at LSE.
(2) In macromodelling terms this is in effect a 'type 2 fix' over
time.
(3) We abstract from the rather tricky problem of targeting a
variable which is never actually hit. Bernanke and Woodford (1997) offer
an interesting critique of the approach.
(4) Use of a 'two part' policy rule originated in Weale et
al (1989) where it was used to stabilise a model for linearisation
purposes. There are similarities in that part of the necessity of the
simple rule in the first place is to solve the model satisfactorily.
Using a stabilising rule with optimal control was first done on the
Institute's model by Westaway (1995).
(5) Full details of the variables shocked are available on request.
(6) This is an approach to stochastic simulation known as
bootstrapping.
(7) As an aside it gives us a certain amount of information about how
uncertain our forecasts are based on current information.
(8) The exercise we describe is different to that for two reasons.
Firstly there was no objective function and therefore no optimisation.
Secondly, here we focus on the information content of the current state.
The forecast error variances are then a function only of current
information, not of the stochastic steady state as in that exercise.
previous work was to evaluate an inflation targeting 'band
width', to see how close one could keep inflation to target using a
simple rule.
(9) A full optimal control exercise does further reduce this by about
another 5 per cent, a significant amount. This opens the question as to
what is the appropriate active horizon to use, an important area for
further research.
(10) Computationally this is likely to involve much more work than
optimising the value of variables as the results depend much more
nonlinearly on the parameters of the simple rule and 'corner
solutions' close to instability become much more likely.
(11) This shock also proved to be the most troublesome to optimise,
clearly reflecting the need to do more work later as the effects of the
shock build up.
REFERENCES
Bernanke, Ben S. and Mark Woodford (1997), 'Inflation forecasts
and monetary policy', NBER Working Paper 6157.
Blake, Andrew P. (1996), 'Forecast error bounds by stochastic
simulation', National Institute Economic Review, no. 156, 72-79.
Blake, Andrew P. (1997), 'Evaluating policy rules by stochastic
simulation', mimeo, NIESR.
Blake, Andrew P. and Peter F. Westaway (1995), 'An analysis of
the impact of finite horizons on macroeconomic control', Oxford
Economic Papers, 47, 98-116.
Church, Keith B., Peter R. Mitchell, Peter N. Smith and Kenneth F.
Wallis (1996), 'Targeting inflation: Comparative control exercises
on models of the UK economy', Economic Modelling, 13, 169-184.
Fisher, P. (1992), Rational Expectations in Macroeconomic Models,
Dordrecht, Kluwer Academic Publishers.
Hall, S.G. (1986), 'Inconsistency and optimal policy formulation
in the presence of rational expectations', Journal of Economic
Dynamics and Control 10, 323-326.
Svensson, Lars E.O. (1997), 'Inflation forecast targeting:
Implementing and monitoring inflation targets', European Economic
Review, 41, 1111-1146.
Taylor, John B. (1996), 'How should monetary policy respond to
shocks while maintaining long-run price stability? - Conceptual
issues', in Achieving Price Stability, Federal Reserve Bank of
Kansas City.
Weale, Martin, Andrew Blake, Nicos Christodoulakis, James Meade and
David Vines (1989), Macroeconomic Policy: Inflation, Wealth and the
Exchange Rate, London: Unwin-Hyman.
Westaway, Peter F. (1986), 'Some experiments with simple feedback
rules on the Treasury model', GES Working Paper no. 87.
Westaway, Peter F. (1995), 'The role of macroeconomic models in the
policy design process', National Institute Economic Review, no.
151, 53-64.