Forecast error bounds by stochastic simulation.
Blake, Andrew P.
1. Introduction
What can the National Institute model tell us about the accuracy of
forecasting inflation and growth? We make 'point' forecasts
over the short to medium term, and assess the accuracy of those
forecasts by examining past forecast errors (see Poulizac, Weale and
Young, 1996). But the model itself can be used for the same purpose and
can inform us better than historical exercises if a new policy regime
has been adopted which is a major departure from past experience. In
that case, the behaviour of the economy would be expected to be
considerably different and so using a model which captures the
structural effects of the changes may give a more accurate view of the
likely behaviour of policy targets, policy instruments and other
variables.
We use stochastic simulation to analyse the forecast accuracy and
asymptotic variance of inflation and growth. We assume the monetary
policy regime is one of targeting inflation at 2 1/2 per cent using a
feedback rule for the interest rate. This is comparable with, although
not identical to, the current UK monetary policy framework, which was
reiterated by the Chancellor, Kenneth Clarke, in June 1995, as:
'Beyond this Parliament, I propose that our aim will be to
continue to achieve underlying inflation ... of 2 1/2 per cent or less.
Monetary policy should be set consistently to achieve this target. This
should ensure that inflation should remain in the range 1-4 per
cent.'(1)
By contrast, we use an explicit rule, outlined below, that we assume
is understood by agents and implemented exactly by the monetary
authorities. It pays no specific attention to upper and lower limits,
but rather concentrates on achieving the target.
The exercise is to subject the National Institute model of the UK
economy to representative shocks(2) with a policy rule which guarantees
an inflation rate in the long run in the middle of the current target
range. This paper is intended to be a largely non-technical assessment
of the issues involved in conducting such an exercise. However, we also
report new results on our empirically based model which provide a first
assessment of how well the UK monetary policy regime could be expected
to perform in practice. We hope to make a valuable contribution to
assessing the likely effectiveness of an inflation targeting regime in
the real world.
There are particular problems associated with using the National
Institute model for stochastic simulation because the presence of
expectations terms makes this a considerable computational task. There
are also issues such as how a suitable policy can be designed and
implemented. We have used a method similar to that used by Blake and
Westaway (1996) for a small linear model. This determines the form of
the policy rule by theory. The rule is calibrated by experimentation rather than by either estimation(3) (which would be wide open to the
Lucas critique, particularly in the context of a fairly new policy
regime) or optimal control (which is usually rather less than
transparent in its application).
In the next section we discuss the ideas behind stochastic simulation
and how we approached it in practice. In section 3 we discuss why the
policy rule we have used was adopted, and in section 4 describe the
statistics that may be calculated and what they tell us about forecast
accuracy. Section 5 gives the results of the stochastic simulation
exercise. The results do seem to be quite encouraging for the overall
effectiveness of an inflation targeting regime in the UK.
2. Stochastic simulation
What is stochastic simulation?(4) Our published forecast is
deterministic. The model is solved without unexpected shocks, and the
forecast is our best estimate about what would happen if there were no
unanticipated disturbances. In a single stochastic simulation
representative shocks are added into the solution. These shocks need to
be consistent with historical experience and share historical
variance-covariance properties. For a typical single replication target
variables will be driven away from their desired level. Even if the
model has effective policy rules in place it is only on average that
targets will be reached. Only after the shocks cease will target
variables be forced permanently back to their non-stochastic
equilibrium. By running a large number of replications with different
sets of shocks it is possible to evaluate the range over which the
target variable can be driven away from the deterministic forecast. We
can then calculate forecast standard errors and confidence limits for
variables of interest.
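To fix ideas, the following Python fragment sketches the replication loop, with the full model solution replaced by a toy first-order autoregressive process for the target variable's deviation from the deterministic forecast. The function names, the persistence parameter and the shock variance are illustrative assumptions of ours, not features of the Institute model.

import numpy as np

rng = np.random.default_rng(0)

def solve_model(shocks, persistence=0.7):
    # Toy stand-in for the model solution: the target variable's
    # deviation from the deterministic forecast follows an AR(1)
    # process driven by the drawn shocks. The real exercise solves
    # the full National Institute model instead.
    deviation = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        deviation[t] = persistence * deviation[t - 1] + shocks[t]
    return deviation

# Fifty replications of a five-year (twenty-quarter) simulation,
# each with a different drawing of representative shocks.
n_reps, horizon = 50, 20
paths = np.array([solve_model(rng.normal(0.0, 0.5, size=horizon))
                  for _ in range(n_reps)])

# Per-period forecast standard errors across the replications.
std_errors = paths.std(axis=0)

The spread of the simulated paths around zero then measures how far shocks can drive the target variable away from the deterministic forecast.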
In this paper we consider a scenario where the inflation rate is
driven away from the target level by exogenous shocks and interest rates
are used to return it to the target level. For our chosen policy rule we
use the model to evaluate the standard errors for the forecasts of
inflation, interest rates and GDP growth. These grow as the forecast
horizon extends until they settle down at some long-run (asymptotic)
level. They must grow because at any one time the uncertainty in the
next period is a function of next period's shocks. The period after
has an additional set of shocks to contend with. This continues until
the additional shocks in a future period do not contribute further to
the overall forecast uncertainty.
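The mechanics can be seen in a simple special case. Suppose, purely for illustration, that the deviation of a variable from its forecast followed a first-order autoregression with persistence $\rho$ ($|\rho| < 1$) and shock variance $\sigma^2$. The $h$-period-ahead forecast error variance would then be

$$\mathrm{Var}(e_h) = \sigma^2 \sum_{j=0}^{h-1} \rho^{2j} = \sigma^2\,\frac{1 - \rho^{2h}}{1 - \rho^2},$$

which rises with the horizon $h$ but converges to the asymptotic value $\sigma^2/(1 - \rho^2)$ as the contribution of ever more distant shocks dies away.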
To explain how shocks are applied, it is necessary to consider the
nature of the equations for the variables of our model. These are of
several types. Firstly, there are behavioural equations that represent
decisions made by agents, perhaps the factors which affect consumption
or investment behaviour, or the evolution of prices in response to
movements in relative costs or the changing demand for goods. A second
type are identities, which might be simple adding up constraints such as
the GDP identity, or reflect the stock behaviour of flow variables, such
as capital stocks being the sum of investments. A third group are policy
equations, where policy instruments are adjusted to keep policy targets
on track, most notably, for our purposes, interest rates being moved in
response to changes in inflation, designed to keep inflation at a
specified target level. These three types of equation describe the
movements of endogenous variables, whose values are determined by the
model itself. By contrast, exogenous variables are determined outside
the model. Such variables can be thought of as not having an equation.
It is simple to turn an endogenous variable into an exogenous one by
suppressing the equation for it, for example omitting the interest rate
reaction function.(5)
The classification of variables and their equations is very important
when considering the stochastic properties of the model. There are (or
at least should be) no shocks to the identities, and these equations and
data add up exactly so there are zero residuals.(6) Behavioural
equations have residuals that represent the unexplained part of the time
series over the estimation period. We treat the residuals of these
equations as unexplained shocks to those variables. A stochastic
simulation adds in shocks consistent with the residuals to replicate the
type of disturbances that hit the economy. However, noise must also be
added for the exogenous variables, which, although not modelled,
clearly have random components. For this exercise we have used simple
time-series equations for the exogenous variables and the residuals from
these as the shocks. This will somewhat overstate the amount of noise
associated with these equations as there are much more sophisticated
equations that could be used to explain the behaviour of these
variables.(7) However, the residuals of some behavioural equations are
smaller than they would have been if the equation had been freely
estimated, as in some cases shocks have been identified and removed in
estimation through the use of dummy variables. This implies that the
behavioural equations should be examined to check where genuine shocks
are suppressed or exaggerated by the final equation. The importance of
overstating external disturbances relative to endogenous ones is a
question deserving further research. For
now, we simply note that some shocks are clearly 'too big' and
some 'too small'.
Finally, the policy variables are treated as following deterministic
equations. This need not be the case, as sometimes a policy might be
implemented with error, perhaps because of difficulties in measurement.
We have so far ignored such considerations. There are about 250
behavioural equations and 30 exogenous variables to shock.
A single stochastic simulation is then achieved by applying a series
of shocks to the model. The shocks have to be consistent with the
residuals for the equations described above. In particular they should
have the same contemporaneous covariance structure. This is because a
shock to investment might be correlated with a shock to consumption or
stockbuilding. A variety of methods exist to generate pseudo-random
shocks consistent with the patterns found in the residuals, and we refer
to Ireland and Westaway (1990) for a description. We have relied on
something rather simpler but nonetheless effective. Instead of
generating new shocks, we have used the historical ones and randomly
picked the order that we use them in. Therefore all shocks for a
particular historical time period are applied across all the equations.
This, of course, maintains the historical variance-covariance properties
of the data across variables but not through time. This
'bootstrap' method requires that shocks are serially
uncorrelated. The method eliminates the need both to artificially
truncate shocks which are simply too big, and to consider their
variance-covariance properties. This approach also reduces the amount of
intervention required before the model begins to solve reliably. More
general methods can be applied at a later date.
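A minimal Python sketch of the bootstrap draw just described, assuming the residuals are stored as a matrix with one row per historical quarter and one column per shocked equation (the dimensions shown are illustrative):

import numpy as np

rng = np.random.default_rng(1)

def draw_bootstrap_shocks(residuals, horizon):
    # Resample whole historical periods: pick random rows (quarters)
    # of the residual matrix, keeping every equation's residual from
    # the same quarter together. This preserves the contemporaneous
    # variance-covariance structure of the shocks but, by construction,
    # discards any correlation through time.
    picks = rng.integers(0, residuals.shape[0], size=horizon)
    return residuals[picks, :]

# Illustrative dimensions: 100 historical quarters of residuals for
# the roughly 280 behavioural and exogenous-variable equations.
residuals = rng.normal(size=(100, 280))
shocks = draw_bootstrap_shocks(residuals, horizon=20)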
The model must then be solved. For rational expectations models there
are complications. If there were no forward expectations in the model, then
the procedure would be straightforward. The shocks are added in as
residuals and the model solved for the current period. In the next
period, a new set of shocks is added in and the model solved again and
so on. For a five year simulation a quarterly model is to be solved for
each of the twenty quarters and the computational burden is only
increased over a deterministic simulation (i.e. one where the shocks are
not added in) by the fact that in general the more noise the slower the
solution is found. This is likely to be much less than a doubling of
solution time for a single replication.
With rational expectations the procedure has to be different. At any
given time all future shocks are unknown, and only those shocks which
happen contemporaneously are observed. So a sequence of full rational
expectations solutions needs to be found for each shock, where the only
new information in a given period is the current shock. At any given
period the future matters and affects today's solution, but because
future shocks are unknown, the best guess for the expectation of future
disturbances is that they are zero. To find the solution
values for the current period the model has to be solved into the future
far enough so that the first period solution is unaffected by the
terminal date. To solve a five year stochastic simulation requires a
database covering the five years plus enough further periods for a full
rational expectations solution past the end. For the main exercise we solved the model over
twenty years, with the last nineteen years and three quarters discarded every time and the first quarter retained as the initial condition for
the next drawing of shocks. In this way it can be likened to an
econometric exercise of a 'rolling regression' where a twenty
year window is moved forward through a twenty five year data period. In
comparison with a deterministic simulation, where for a full rational
expectation solution for the first five years a single solution of
twenty five years would be expected to give reliable results, the
stochastic solution requires twenty twenty-year simulations. It would be
unsurprising if this were to be twenty times as expensive for a single
replication. In what follows we did fifty replications, a total of a
thousand model solutions.
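The rolling-window procedure can be written schematically as follows; solve_re and draw_period_shocks are hypothetical stand-ins for the full rational expectations solver and the bootstrap draw described above.

def rolling_re_replication(solve_re, draw_period_shocks,
                           horizon=20, window=80):
    # One stochastic replication under rational expectations. In each
    # quarter only the current shock is revealed (the expectation of
    # future shocks is zero), the model is solved over a long enough
    # window (here twenty years, i.e. eighty quarters) that the
    # terminal date does not affect the first period, and only that
    # first quarter is retained as the initial condition for the next
    # drawing of shocks.
    kept, state = [], None
    for t in range(horizon):
        path = solve_re(initial=state,
                        current_shock=draw_period_shocks(),
                        periods=window)
        kept.append(path[0])   # retain the first quarter only
        state = path[0]        # starting point for quarter t + 1
    return kept

With fifty replications of twenty such solutions each, the thousand model solutions mentioned above follow directly.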
3. Inflation Dynamics, Monetary Policy and Fiscal Policy
A substantial difference between the approach adopted here and an
historical exercise is that the monetary policy rule has been designed
to guarantee the target inflation rate in a non-stochastic equilibrium.
The behaviour of inflation is thus expected to be very different from
historical experience. It is often hard to reject the hypothesis that
the annual inflation rate in the UK is non-stationary (i.e. it is
integrated of order one and will only revert to a mean value as a
differenced series). Even if we accept stationarity, the mean inflation
rate over the past is somewhat higher than the current avowed target
range. Neither of these is a problem for our analysis, as we are
assuming that the behaviour of inflation over the future is determined
by a policy regime which is patently different from previous ones.
What determines a good policy rule? Considerable attention has been
devoted to studying such a question, and a complete account can be found
in Weale et al. (1989). Here we offer an intuitive explanation of how we
have decided on a policy rule. Firstly, it should be a feedback rule,
which relates the setting of the policy instrument (interest rates) to
the final target of policy (inflation as a deviation from its target
rate). There is scope for the additional use of indicator variables,
such as pressure of demand. This might indicate that a particular shock
is likely to cause the inflation rate to deviate from target in the
future, so that reacting now will suppress the deviation early. However, this
requires model-based analysis of what is a good indicator, and it turns
out that we can do rather well even without that.(8) For an analysis of
the design of rules which investigates such an approach more thoroughly
see Blake and Westaway (1996). The simplest rule is that if inflation is
above target the interest rate should be raised, and if it is below it
should be reduced. Experimentation is used to determine an appropriate
strength of response.
In practice it is better to make two modifications to such a simple
rule. Firstly, when one incorporates a model with a pure proportional
rule of this sort, it is easy to show that an equilibrium can be reached
where the nominal interest rate has been raised and inflation remains
above target, such that the equilibrium real interest rate is reached at a
higher than desired inflation rate. In these circumstances, it is
important to ensure that the interest rate continuously varies unless
the target is actually met. This means that the change in the interest
rate is then related to the difference from target. This is technically
known as an integral control rule, because it can be expressed as
relating the level of the interest rate to the integral of all past
errors in tracking the target. This, however, introduces a further
complication. It can mean that the nominal interest rate is excessively
and needlessly volatile. The simplest remedy for this is to use the ex
post real interest rate as the instrument. Practically this is done by
having a policy rule for the change in the nominal interest rate and
including the change in the inflation rate on the right hand side. This
turns out to be a perfectly adequate rule, and a little trial-and-error
determines that a real interest rate rule(9) with an integral
coefficient of 0.25 gives satisfactory deterministic control.
The target level for inflation we have adopted is 2 1/2 per cent.
This is consistent with the forecast base over which the exercise is
conducted, and although it represents the maximum of the stated policy
objective over the medium term we are taking it to be the level that the
inflation rate is desired to be 'on average' rather than on or
below. The rule then looks like:
$$\Delta RBASE_t = \Delta INF_t + 0.25\,(INF_t - 2.5)$$
where $RBASE$ is the base rate of interest and $INF$ is the annual
inflation rate, defined as $(RPIX_t - RPIX_{t-4})/RPIX_{t-4}$, where
$RPIX$ is the retail price index excluding the mortgage interest
component.
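The rule is simple enough to transcribe directly; in the Python sketch below only the function and argument names are our own.

def base_rate_step(rbase_prev, inf_now, inf_prev,
                   target=2.5, gain=0.25):
    # One quarter of the integral control rule: the change in the
    # nominal base rate equals the change in inflation (so that the
    # ex post real rate is the effective instrument) plus 0.25 times
    # the deviation of annual RPIX inflation from the 2.5 per cent
    # target.
    return rbase_prev + (inf_now - inf_prev) + gain * (inf_now - target)

Iterating this rule, the interest rate only stops moving when inflation settles exactly on target, which is precisely the integral control property described above.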
Note that our monetary policy rule does not have provisos for
intervening more heavily if a particular ceiling is breached, and has
the implication that if one really did want an inflation rate below 2
1/2 per cent then a target of much less than that would have to be aimed
for. This departure from the announced UK policy framework should be
borne in mind when assessing the results.
Although our focus is primarily on monetary policy and inflation
targeting it is important to specify fiscal policy rules for the
behaviour of policymakers to be properly articulated. A reasonable
approach is to use rules(10) intended to ensure 'fiscal
solvency' over the long run. Governments cannot run up debt as a
proportion of GDP indefinitely without there being some unfavourable
consequences, some of which will be inflationary. Rules for spending and
taxes which pay attention to the public sector's financial position
ensure fiscal solvency. The fiscal rules used in this exercise:
[Mathematical Expression Omitted]
where PAC is government current expenditure, GDP is gross domestic
product at factor cost, DFAPY is the net acquisition of financial assets by the public sector as a proportion of GDP, $DFAPY^{*}$ is the target
level of DFAPY and TRS is the standard rate of income tax. The target
level is phased in over the forecast base to be zero in the long run.
Set up this way, the spending-to-GDP ratio is used as an instrument with
both proportional and integral control, while there is simple
proportional control for income tax. As the target value is public
sector financial saving as a ratio to GDP, the coefficients are set to
reduce taxes and encourage spending when there is a surplus. Neither of the fiscal rules targets
inflation explicitly, although they may help mitigate long run
inflationary pressures. In stochastic simulation the interest rate rule
will do almost all the work in keeping the inflation rate on target.
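The published expression for these rules was lost in transcription. Purely as a schematic reconstruction from the verbal description, with $\alpha$, $\beta$ and $\gamma$ as unspecified positive coefficients, starred values as baseline settings and $e_t = DFAPY_t - DFAPY^{*}_t$ (all our notation), the rules might take the form

$$\frac{PAC_t}{GDP_t} = \left(\frac{PAC}{GDP}\right)^{*} + \alpha\,e_t + \beta \sum_{s \le t} e_s, \qquad TRS_t = TRS^{*} - \gamma\,e_t,$$

with signs chosen so that a financial surplus ($e_t > 0$) encourages spending and reduces taxes, as described in the text.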
4. Some Representative Simulations
Although the main interest in stochastic simulation is in the
statistical properties of forecasts the actual numbers generated can be
interesting in their own right. To obtain a clear visual impression of
the results we ran two ten-year simulations and plotted the resulting
inflation paths in Figure 1a and Figure 1b. The method used is exactly
the same as described above but the twenty-year window is rolled forward
an additional five years. Note that below we find that five years seems
amply long enough to determine the asymptotic standard error, so the
forecast standard error should be assumed to be the asymptotic one over
the additional five years.
The figures show the target measure of inflation over the past, the
deterministic forecast and a stochastic simulation. The target measure
is, of course, annual RPIX inflation with the historical value starting
in the first quarter of 1986. The stochastic simulation begins in the
first quarter of 1996. The deterministic forecast is that of our
November 1995 Review shown as a dotted line. Note first that the
stochastic simulations appear noisier than the historical series. This
is actually more a reflection of the fact that the very recent past has been much
smoother than usual, and it is much less noticeable if the historical
series is extended further back. It is also very noticeable that the two
realisations are markedly different, although they do both appear to
revert towards the non-stochastic mean.
Figure 1a shows fairly uniform variation around the target with a
maximum inflation rate over the future of 5.1 per cent, but in Figure 1b
there is quite a high peak, in excess of 6.7 per cent. There is nothing
to rule out such behaviour. It might be a reflection of an inadequate
control rule, but in deterministic simulation the rule delivers the
desired 2 1/2 per cent inflation rate within two years from a variety of
starting points, which seems quite rapid.
The point of doing a large number of replications is to see just how
unlikely that 6.7 per cent inflation rate is. In particular, we can
calculate standard error bounds around the forecasts or 'event
probabilities' (Fair, 1993). The latter are simple to do if a
stochastic simulation exercise is being carried out, but are nonetheless
very informative. We pick an event, count the number of times it occurs
and divide by the total number of replications. The event can be
anything, but favoured ones are the inflation rate exceeding a specified
level and recessions. Using standard error bounds as confidence limits
for the inflation forecast requires further assumptions, such as the
distribution of the deviations from the deterministic forecast being
normal. We turn to the statistical properties revealed by the
simulations next.
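A Python sketch of the event probability calculation, assuming the simulated outcomes are stored as a replications-by-quarters array (the names and array layout are our own):

import numpy as np

def event_probability(paths, event):
    # Fair (1993)-style event probability: the fraction of
    # replications in which the chosen event occurs. `event` maps
    # a single simulated path to True or False.
    return sum(bool(event(path)) for path in paths) / len(paths)

# Inflation exceeding 4 per cent at some point in the first two years:
# event_probability(inflation_paths, lambda p: (p[:8] > 4.0).any())

# A recession, defined as falling output in two successive quarters,
# with p a path of quarterly output growth rates:
# event_probability(growth_paths,
#                   lambda p: ((p[:-1] < 0) & (p[1:] < 0)).any())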
5. Standard Errors and Event Probabilities
The main stochastic simulation exercise, then, consists of fifty
replications of five year simulations. The per period standard errors
are easily calculated, in each of the twenty forecast periods, as the
square root of the sum of squared forecast errors divided by fifty.
For inflation these rise steadily from 0.4 in the first period
to 1.1 from about two years onwards. In Figure 2 we plot the
deterministic forecast and one and two standard errors either side. We
have smoothed the error bounds a little to provide clearer graphs.
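In symbols, with $\hat{y}_h$ the deterministic forecast at horizon $h$ and $y^{(i)}_h$ the outcome of the $i$-th replication, the per period standard error is

$$s_h = \sqrt{\frac{1}{50} \sum_{i=1}^{50} \left(y^{(i)}_h - \hat{y}_h\right)^2}.$$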
If the distribution around the mean is approximately normal, then it
should be expected that the actual RPIX inflation rate will be within two
standard errors of the forecast about 95 per cent of the time and within
one standard error about 68 per cent of the time. Given the structure of the
interest rate reaction function, where intervention follows a simple
linear rule, this does not seem to be too strong an assumption. It might
be if intervention were much stronger near a boundary than in the
centre, so that the inflation rate drifted towards a boundary faster
than it passed through it.
Given the standard errors and the present inflation rate there is
only a 15 per cent chance that the inflation rate will exceed 4 per cent
at the end of two years. This is not the same, of course, as assessing
the likelihood of exceeding 4 per cent inflation during the next two
years. This can simply be assessed by counting the number of times that
the inflation rate exceeds that mark in the first two years of the
stochastic simulations. For the fifty replications we carried out this
happened thirty-five times, indicating a 70 per cent chance of inflation
exceeding 4 per cent at some point during those two years. Higher rates of inflation
were achieved correspondingly fewer times. 5 per cent was exceeded 34
per cent of the time in the first two years, and 6 per cent only 4 per
cent of the time. This seems rather pessimistic on the 4 per cent
'upper limit', but optimistic about how inflation can be quite
easily contained.
Over the longer, five year horizon, all but one simulation exceeded 4
per cent at one time or another, and in 58 per cent of the simulations 5
per cent was breached. Only 12 per cent of simulations ever breached 6
per cent. This indicates that the simulation depicted in Figure 1b is
indeed something of an unlikely one, and shows why it is risky to base
too many conclusions on a single realisation.
As interest rates are continuously manipulated to maintain the
inflation rate within these bounds there is an associated standard error
for the policy instrument. In Table 1 we list the per period standard
errors for the first two years of the simulations and the average for
the last two years for each of inflation, interest rate and growth. We
discuss the last shortly.
Although further replications would improve the accuracy of our
estimated standard errors, there is a clear pattern in all of them. The
interest rate standard error is almost double that of the inflation
rate. This reflects the vigorous use of interest rates implied by our
policy rule to achieve the given target. The price to pay for a stable
inflation rate may be quite high interest rate variability. This high
degree of intervention is not something that a deterministic analysis
always suggests (Blake and Westaway, 1996).
Before examining the impact on growth, we draw two conclusions for
inflation targeting. Firstly, 4 1/2-5 per cent inflation seems likely
even if the inflation rate is targeted around 2 1/2 per cent. To ensure
average inflation less than 4 per cent more than about 95 per cent of
the time seems to require a much lower target rate, perhaps as low as 1
per cent. Secondly, whilst the forecast standard errors and eventual
asymptotic standard error (which we estimate to be about 1.1 per cent)
are perhaps a little wider than would be desired, overall control of
inflation in the face of a wide variety of fairly large shocks is quite
successful.
Table 1. Forecast Standard Errors (Variances)

                 Inflation      Interest rate   Growth
1996Q1           0.42 (0.18)    0.52 (0.27)     0.81 (0.66)
    Q2           0.58 (0.34)    0.77 (0.59)     0.83 (0.69)
    Q3           0.72 (0.52)    1.03 (1.06)     1.17 (1.37)
    Q4           0.76 (0.58)    1.21 (1.46)     1.30 (1.69)
1997Q1           0.96 (0.92)    1.52 (2.31)     1.54 (2.37)
    Q2           0.99 (0.98)    1.57 (2.46)     1.74 (3.03)
    Q3           0.85 (0.72)    1.41 (1.99)     1.45 (2.10)
    Q4           0.98 (0.96)    1.48 (2.19)     1.51 (2.28)
Average,
1999Q1-2000Q4    1.07 (1.14)    1.95 (3.80)     1.77 (3.13)
We turn now to the variance of output growth and the probability of a
recession over the forecast horizon. Table 1 gives annual GDP growth
standard errors. As we are not specifically targeting growth, the
behaviour of growth is not tied to any one particular value, and we
would expect it to cycle around the model equilibrium. As we do not know
the equilibrium with the same certainty as the inflation rate - that is
designed to be 2 1/2 per cent - we have to assume that the five year
growth forecast is somewhere near equilibrium. The standard errors are
quite large, and imply that the 95 per cent confidence interval is
between -1 and 6 per cent. This is much more in line with historical
experience than the inflation standard error and implies that there is
little point in forecasting anything other than the mean of any
stationary process over a distant enough forecast interval. Whilst a
forecast other than the mean can outperform a forecast equal to the mean in
the short term, in the long run the mean is always the best guess. More
generally this amounts to identifying the trend.
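As a check on the arithmetic, taking the average asymptotic growth standard error of 1.77 from Table 1 and reading the equilibrium growth rate as roughly 2.5 per cent (our inference from the interval quoted above), the approximate 95 per cent confidence interval is

$$2.5 \pm 2 \times 1.77 \approx [-1.0,\ 6.0]$$

per cent, matching the stated range.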
What about event probabilities? If we define recession as falling
output in two successive quarters, the number of times a recession
occurs within the first two years is 14, giving a 28 per cent chance in
the short term. Over five years, output falls in two successive quarters
33 times, a 66 per cent chance of a recession. This is of course
dependent on where the stochastic simulation starts from, but a roughly
one-in-three chance of a recession within two years and a two-in-three
chance over the next five
seems to accord very much with historical experience and general
expectations. Perhaps it is a little disappointing that an effective
inflation targeting regime cannot reduce the expected frequency of
recessions below the numbers we obtained.
6. Conclusions
We have shown how to use an empirical model of the economy to
evaluate the forecast and steady-state standard errors of both inflation
and growth when there is an entirely new policy regime in place. The
inflation standard error is smaller than that found by Poulizac, Weale
and Young (1996), who use historical data to assess the forecast
standard error. We think this is a very useful alternative to their
approach, with the model used to analyse the change in policy regime.
Given that the National Institute model has changed very little since
the November forecast, the standard errors in Table 1 can be used with
our present forecast to assess the approximate likelihood of breaching
the target ranges. Although the computational burden is quite high, as
reflected in the relatively few replications that we have been able to
carry out, stochastic simulation now seems computationally feasible even
for a large rational expectations model. The method used here should be added
to the forecasters' routine armoury.
NOTES
(1) Noted by Bowen (1996). He describes the UK inflation targeting
regime in detail, discussing aspects of appropriate targets, credibility
and operation.
(2) The Institute has frequently been in the vanguard of research
using stochastic simulation for diagnostic purposes. See Hall and Henry
(1987) and Ireland and Westaway (1990) for previous related work.
(3) This approach has been adopted by the proponents of so called
'Taylor' rules, see Taylor (1993).
(4) As noted above, this is intended to give a non-technical flavour of how to approach stochastic simulation of a nonlinear rational
expectations macroeconometric model. More complete accounts can be found
in Hall and Henry (1987) and Fisher (1992). Further discussion including
alternative policy regimes can be found in Bryant et al. (1993).
(5) In practice, some 'exogenous' variables do have
equations, but usually they only depend on their own past values. In the
case of something like world trade it would usually be a first-order
difference equation in the log of the variable; for example, the
Institute model has $\ln WT_t = 0.01 + \ln WT_{t-1}$.
(6) Residuals are sometimes known as 'add-factors'.
(7) For variables such as world trade we have an entire world model
which could be used to explain variations from the individual
behavioural equations.
(8) The rule described in the forecast chapter does use an indicator
by feeding back on output. Here we feed back only on the final target,
and have found this adequate for our purposes.
(9) Alternatively this can be viewed as approximately equivalent to a
proportional coefficient of unity.
(10) Weale et al. (1989) discuss various fiscal policy regimes and
assess both the requirement for fiscal activism and appropriate rules.
Barrell and In't Veld (1992) used a tax rule to control the budget
deficits. They provide an analysis of the fiscal solvency approach.
REFERENCES
Barrell, R. and In't Veld, J. (1992), 'Wealth effects and
fiscal policy in the National Institute Global Econometric Model',
National Institute Economic Review, no. 140.
Blake, A.P. and Westaway, P. (1996), 'Credibility and the
Effectiveness of Inflation Targeting', The Manchester School,
forthcoming.
Bowen, A. (1996), 'Targeting inflation: the British
experience', Centre Piece, no. 1, pp. 10-14.
Bryant, R.C., Hooper, P. and Mann, C.L. (eds.) (1993), Evaluating
Policy Regimes: New Research in Empirical Macroeconomics, Washington:
Brookings.
Fair, R.C. (1993), 'Estimating event probabilities from
macroeconometric models using stochastic simulation', in J.H. Stock
and M.W. Watson (eds.) Business Cycles, Indicators, and Forecasting,
NBER Studies in Business Cycles Volume 28, Chicago: University of
Chicago Press.
Fisher, P.G. (1992), Rational Expectations in Macroeconomic Models,
Dordrecht: Kluwer Academic Publishers.
Hall, S.G. and Henry, S.G.B. (1987), Macroeconomic Modelling,
Amsterdam: North-Holland.
Ireland, J. and Westaway, P.F. (1990), 'Stochastic simulation and
forecast uncertainty in a forward-looking model', National
Institute Discussion Paper no. 183.
Poulizac, D., Weale, M. and Young, G. (1996), 'The Performance of
National Institute Economic Forecasts', National Institute Economic
Review, no. 156.
Taylor, J.B. (1993), 'Discretion Versus Policy Rules in
Practice', Carnegie-Rochester Conference Series on Public Policy,
no. 39, pp. 195-214.
Weale, M., Blake, A., Christodoulakis, N., Meade, J. and Vines, D.
(1989), Macroeconomic Policy: Inflation, Wealth and the Exchange Rate,
London: Unwin Hyman.