
Article information

  • Title: A forward-looking approach to learning in macroeconomic models.
  • Author: Westaway, Peter
  • Journal: National Institute Economic Review
  • Print ISSN: 0027-9501
  • Year: 1992
  • Issue: May
  • Language: English
  • Publisher: National Institute of Economic and Social Research
  • Keywords: Macroeconomics; Rational expectations (Economics)

A forward-looking approach to learning in macroeconomic models.


Westaway, Peter


Introduction

This paper illustrates how learning can be incorporated into an existing forward-looking macroeconomic model as an alternative to the more conventional but arguably more extreme assumption of model consistent or rational expectations. The key characteristic of the model consistent learning approach to be adopted here is that agents are assumed to know the true structure of the model but that they need to learn about some parameters of that system, for example those defining the government's policy decision rule. Importantly, models solved under this assumption retain the property that the current behaviour of economic agents can be influenced by the expected future effects of policy changes. This type of learning may be contrasted with one where economic agents may also be uncertain about some structural parameters of the true model but in addition, they do not possess sufficient information to form future expectations consistent with their estimated model. As a consequence, expectations are formed using backward-looking reduced form equations with parameters which agents continuously learn about. This approach, known as boundedly rational learning, has been adopted in Hall and Garratt (1992) who apply these techniques to a full-scale non-linear macroeconometric model.

At the outset, it is useful to emphasise the similarities and differences between the model consistent learning approach described in this paper and the more usual approach to capturing learning adopted in the literature, as for example in the bounded rationality model mentioned above. The key difference is not in how agents learn about uncertain parameters; for example, Hall and Garratt (1992) use a Kalman filter technique to update parameters in the reduced form expectations equation, but this sophisticated method could equally well be applied to the uncertain parameters of a policy decision rule in the model consistent learning approach. Nor is it simply that bounded rationality models assume that agents are uncertain about more parameters. Rather, the crucial distinction is in how agents are assumed to take into account the fact that expected future realisations of variables affect current behaviour. In this paper, it is argued that while the bounded rationality assumption appears to suggest an intuitively attractive way of doing this, it suffers from several disadvantages compared to the more structural approach offered by the model consistent learning scheme adopted here. In particular, it appears to throw out the baby of forward-looking anticipatory behaviour with the bath water of perfect foresight.

The next section of this paper explains in more detail why the extreme rational expectations assumption may be unsatisfactory in a macromodelling context. The bounded rationality learning model, adopted in the London Business School model (by Hall and Garratt (1992)) is then outlined. A number of important practical drawbacks with this approach are identified. Next, a form of model consistent learning is described which overcomes many of these shortcomings. The distinction between closed-loop and open-loop learning is introduced. Empirical examples are given on the latest version of the National Institute UK macroeconometric model, first showing the implications for a standard simulation of a government spending shock, second illustrating the consequences of an exchange rate realignment when the private sector is uncertain about the government's future realignment intentions. The final section draws conclusions for macromodelling practice.

Motivation

Although it has long been recognised that forward-looking expectations were likely to be an important determinant of economic behaviour, macroeconomic modellers were slow to realise the consequences of this, mainly because of the technical difficulties involved. Early empirical attempts to incorporate expectations explicitly invariably adopted the adaptive expectations hypothesis (for example Cagan (1956)). Long after the seminal work on rational expectations by Muth (1961), the adoption of the rational expectations hypothesis was generally confined to small analytic models in the US (see Lucas and Sargent (1981) for a summary), although often the importance of the expectational assumption adopted was overshadowed by the rather strong policy implications of the New Classical models in which they were embodied (see Begg (1982) for a lucid discussion of this point).

Thus, the conventional wisdom amongst macromodellers was firmly rooted in the backward-looking approach where consequently the structural dynamics arising from adjustment costs or contract dynamics, and the dynamics arising from expectations were inextricably mixed. Furthermore, this approach to dynamic structure was later actively encouraged by the widespread adoption of 'general-to-specific' econometric methodology popularised by Hendry and others (see for example Hendry (1992)).

The Lucas critique (Lucas (1976)) shocked macromodellers into treating expectations more seriously. At its simplest level, this highly influential work emphasised that macroeconomic models which were estimated as characterisations of behaviour under a particular policy regime would not be valid if the policy regime changed. It is easy to illustrate this point even in a backward-looking model. Suppose the structural relationship for prices is given by the equation,

P = f(L) G^e

where G^e is expected government policy and f(L) denotes structural dynamics due, for example, to institutional price inertia.

If the decision rule for government policy, G, can be written as

G = g(L)Z

where Z represents all variables which affect government policy, then if price setters correctly perceive this, i.e.

G^e = G

the equation for the determination of prices conditional on Z will be given by

P = f(L) g(L) Z = h(L)Z.

In practice, the separate structural parameters of f(L) and g(L) will not be identified in the composite lag structure h(L). Now, if the government's policy rule changes, from g(L) to g'(L) say, then the original equation for prices will no longer be valid: if an equation is invariant to a change in policy regime, the relevant explanatory variable is said to be 'super exogenous' (see Ericsson (1991) for example).
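
As a minimal numeric sketch of this identification problem, the composite reduced form h(L) is simply the convolution of the f(L) and g(L) coefficients, and a change in the policy rule g(L) alters h(L) even though the structural dynamics f(L) are unchanged. The lag polynomial coefficients below are illustrative assumptions, since the text specifies none:

```python
import numpy as np

# A numeric sketch of the identification problem: the estimated reduced form
# h(L) = f(L)g(L) confounds the structural dynamics f(L) with the policy
# rule g(L). All coefficients below are illustrative assumptions.

def composite_lag(f, g):
    """h(L) = f(L)g(L): polynomial multiplication is coefficient convolution."""
    return np.convolve(f, g)

f = [0.5, 0.3, 0.2]       # f(L): structural price inertia (assumed)
g_old = [1.0, 0.4]        # g(L): original government decision rule (assumed)
g_new = [1.0, -0.2]       # g'(L): decision rule after a regime change (assumed)

print("h(L) under the old rule:", composite_lag(f, g_old))
print("h(L) under the new rule:", composite_lag(f, g_new))
# An equation for P estimated as h(L)Z under the old rule is invalidated by
# the regime change, even though f(L) itself has not changed.
```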

In fact, it had been well understood before the Lucas critique that estimated models were liable to mis-specification error in the face of regime changes. The main importance of the Lucas critique, however, was to highlight the weak microfoundations of traditional macroeconomics, in particular the absence of optimising behaviour on the part of economic agents. Importantly, this suggested that an approach to modelling which involved embodying particular decision rules into behavioural equations should be replaced by one which emphasised underlying structure, thus introducing a likely role for forward-looking expectations. This arises because most private sector structural decision rules will be derived from a forward-looking cost minimisation or utility maximisation exercise which in general will introduce lead terms (i.e. expected values of future realisations of variables) into the behavioural equations, (see Nickell (1985)).

In the UK, the incorporation of forward-looking behaviour based on these utility maximising microfoundations was pioneered by Minford in the University of Liverpool macroeconomic model (see Minford (1979)). Unfortunately, as with applied work in the US, the recognition of the importance of this work was diminished by its association with New Classical economics. Arguably it was only when these techniques were applied to more mainstream macroeconomic models, not only more 'Keynesian' but also more firmly based on econometric practice, that RE modelling became 'respectable'; Hall and Henry (1985) described the introduction of forward-looking expectations into the NIESR model while Keating (1985) described the incorporation of RE into the financial sector of the LBS model.

The properties of consistent expectations models

So far, the discussion has merely dealt with the fact that structural decision rules will often involve forward-looking behaviour. In practice modellers must determine how expectations are formed, first over the past so that the structural equations involving expectations can be estimated, second in the future and in the face of shocks, so that the model can be used for forecasting and policy analysis. The rational expectations (RE) assumption first suggested by Muth (1961), simply suggests that agents should use information as efficiently as possible. In policy analysis, this implies that expectations should be consistent with the prediction of the model in which they are embedded. In its strongest form, this imposes severe information requirements on the private sector; not only do they need to know the true underlying model but they also need to know how to solve the model, a non-trivial task as macromodellers understand.

In fact, the strong form RE assumption is not really relevant in an estimation context since most applied work relies on the approach of McCallum (1976) and Wickens (1982) which only requires that agents make no systematic errors in forming expectations, an assumption known as the weak form RE assumption. Once these equations are incorporated within a large macroeconomic model, however, the conventional assumption in using the model for simulations or in conducting policy analysis is that period-by-period model consistent expectations are assumed. It is important to emphasise that, while this would seem to require a considerable feat of calculation from the private sector, the RE solution represents the only solution technique available for directly allowing expected future variables to affect current behaviour (we shall see that this applies to learning models too since the learning approach of this paper relies on successive RE solutions while the boundedly rational learning one is completely backward-looking).

If the only objection to the consistent expectations solution was its implied information requirement, this may not be too serious; after all, many models in economics involve 'as if' assumptions which may not be wholly realistic. However, the particular aspect of these simulations which is intuitively implausible is the very sudden and sharp response of the forward-looking 'jump' variables to new information. QRE in Chart 1 shows the effect on the effective exchange rate of an announced 5 year 1 per cent increase in government spending(1); in reality it seems unlikely that the full implications of this announced policy would feed through into the exchange rate so quickly, so completely and so 'correctly'. An even more dramatic illustration of the implausibility of the RE assumption is shown in Chart 2 which shows the effect of an announced temporary (for 1 quarter) increase in interest rates occurring immediately, and for comparison the same shock due to occur in one year's time. The exchange rate immediately rises in both cases, but only slightly less when the interest rate change is still four quarters away. Theoretically, this is explained by the (uncontroversial) use of an exchange rate equation which is approximately equivalent to the open arbitrage condition; this implies that the exchange rate moves to equalise period-by-period returns on sterling and foreign currency assets (the jump is slightly less for the future interest rate increase because the system root associated with the exchange rate is slightly greater than unity). Since the theory itself is not in question, the implausibility of the large response arises from the total belief that the announced policy really would occur in 4 quarters' time; in reality, foreign exchange operators may not attribute the same degree of credibility to this announcement.
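
The arithmetic behind these two jumps can be sketched with a stylised arbitrage condition of the form e(t) = e(t+1)/ρ + d(t), where d(t) is the interest differential and ρ is a system root just above unity; the horizon, the differential and ρ = 1.005 below are illustrative assumptions rather than parameters of the National Institute model:

```python
import numpy as np

# A stylised sketch of the arbitrage arithmetic behind Chart 2. The terminal
# condition, horizon, rates and the root rho are illustrative assumptions,
# not parameters of the National Institute model.

T = 40            # quarters in the solution horizon (assumed)
rho = 1.005       # system root slightly greater than unity (assumed)

def exchange_rate_jump(diff, rho):
    """Solve e(t) = e(t+1)/rho + d(t) backward from a terminal value of zero,
    returning the period-1 jump implied by the expected differential path."""
    e = 0.0
    for d in reversed(diff):
        e = e / rho + d
    return e

immediate = np.zeros(T); immediate[0] = 1.0   # 1 point rise this quarter
delayed = np.zeros(T); delayed[4] = 1.0       # same rise four quarters ahead

print("jump on an immediate rise: %.3f" % exchange_rate_jump(immediate, rho))
print("jump on a rise in a year:  %.3f" % exchange_rate_jump(delayed, rho))
# With rho exactly 1 the two jumps would be identical; with rho just above
# unity the anticipated rise moves the rate today by only slightly less.
```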

Learning

In order to build in this concept of uncertainty about policy changes, we need to introduce learning. In general, this is a non-trivial extension to the usual hypothesis. It raises the question of how agents learn about the behaviour of the economy, in particular in the face of regime changes announced or otherwise. This issue has spawned a large literature which has a number of related strands summarised in Bullard (1991); Bray and Savin (1986) examine the circumstances under which sequential least squares estimation will allow agents to uncover the true model and so converge on the rational expectations equilibrium; Marcet and Sargent (1989) and Jordan (1992) show how the form of the learning mechanism can determine which, if any, of a number of multiple equilibria will be reached; Woodford (1990) shows how 'frivolous' (i.e. false) beliefs about fundamentals can generate 'sunspot' equilibria different from the rational expectations equilibrium in the presence of many agents learning about each others' forecasts (see Townsend (1983)).

In fact, empirical macromodellers have so far made relatively little effort to assimilate the implications of this literature. Hall and Garratt (1992), however, have attempted to address the problem facing macromodellers directly by adopting a bounded rationality approach which assumes that economic agents are intelligent but do not fully understand the environment in which they operate. In general, this involves modification of full model consistent expectations in two distinct ways.

(i) First, it is assumed that agents are uncertain about the parameters of particular equations. Agents are assumed to update their estimates of these parameters as new information becomes available. Various updating schemes can be adopted to do this. Hall and Garratt (1992) assume that agents use a form of Kalman filter, i.e. as they observe the outturns to be compared with their original estimates, so agents will update the uncertain parameters (see Marcet and Sargent (1989), Hall and Garratt (1992) for more details). If agents never discount past information, then this will amount to a rolling OLS regression with an increasing sample; on the other hand, if past information becomes less important because of a change in regime, then a 'forgetting factor' can be included which gives a rolling window, or more accurately a form of weighted least squares (a code sketch of this updating scheme is given after this list).

(ii) Second, bounded rationality involves assuming that agents do not have all the information required to enable them to compute the necessary path for any expectations variables which will be consistent with the predictions of the model itself. Consequently, agents are assumed to form expectations using backward-looking reduced form equations. As above, agents must learn about the parameters of this reduced form, which will obviously be uncertain if any of the structural parameters are uncertain. As a consequence, the rule that is used for generating expectations will be 'incorrect' while they are learning about the true structure, although in equilibrium, expectations will converge on the model consistent equilibrium.
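
To make point (i) concrete, the sketch below implements the expanding-sample and 'forgetting factor' updating described there as recursive least squares, the standard regression form of this kind of Kalman-filter update. The data, prior and forgetting factor are illustrative assumptions, not values from Hall and Garratt (1992):

```python
import numpy as np

# A minimal recursive least squares sketch of the updating in (i). With
# lam = 1 this is expanding-sample OLS; lam < 1 acts as the 'forgetting
# factor', down-weighting old observations after a regime change.

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step for the regression y = x'theta + e."""
    x = x.reshape(-1, 1)
    gain = P @ x / (lam + x.T @ P @ x)
    theta = theta + (gain * (y - x.T @ theta)).ravel()
    P = (P - gain @ x.T @ P) / lam
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([0.8, 0.2])   # parameters agents are learning about
theta = np.zeros(2)                 # initial belief (assumed)
P = 100.0 * np.eye(2)               # diffuse prior covariance (assumed)

for t in range(200):
    x = rng.normal(size=2)
    y = x @ true_theta + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, x, y, lam=0.98)

print("estimate after 200 observations:", theta.round(3))
```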

In principle, this scheme seems an intuitively attractive way to mimic learning behaviour. In practical applications, however, for example in the exchange rate model used by Hall and Garratt (1992), there are a number of significant drawbacks with this approach, in particular with the second assumption which abandons the role for model consistent expectations.

-- the Kalman filter updating scheme for the expectations equations, as with other ordinary least squares-based models of learning (as in Bray and Savin (1986)), may be inefficient at assimilating new information when regimes change. This is a considerable limitation for any variable which is affected by expectations which are very likely to 'jump' in the face of new information (albeit by less than the full RE solution would predict). This drawback can be overcome by introducing arbitrary 'announcement effects' as in the LBS approach, but this type of scheme is very similar to the type of adjustments made to backward-looking reduced form models before the problem of computing RE systems was solved (see Barber (1982) for example).

-- the genuine reduced form of the full model which the expectations equation is attempting to capture is likely to involve many more variables than the few that are permitted in practice (this contrasts with structural consistent expectations models where changes in any exogenous variables will be reflected directly in the jump variable)(2). This greatly restricts the usefulness of the reduced form equation for use in full model simulation exercises.

-- the time-varying Kalman filter model, which purports to capture gradual learning about structural or reduced form parameters, may in practice suffer from an inability to distinguish between models which genuinely have time-varying parameters, and those which are simply mis-specified.

-- even if boundedly rational agents are uncertain about the true structural model, it is unclear why they should not be able to take into account (albeit incorrectly) the influence of expected future events on current behaviour. Of course, this may require a considerable computational burden, but this does not seem any more extreme an assumption than one which allows agents to perform period-by-period time-varying Kalman filter estimation.

To summarise, it would seem that in attempting to capture a model of expectation formation which falls between the two undesirable extremes of reduced form adaptive expectations on the one hand and full model consistent expectations on the other, the bounded rationality approach described above falls far too close to the former, and retains too many of its disadvantages.

Model consistent learning

The version of model consistent learning to be described here differs from the above in one important respect: agents are assumed to understand the true structure of the model and how it reacts to shocks or changes in regime. Crucially, this implies that the model retains the advantage of being 'forward-looking'. As above, agents are not assumed to have perfect information about the nature of the shocks which impinge on the model, or about regime changes when they are announced.

Instead, agents' beliefs, which may be reflected in a subjective conditional probability distribution, are sequentially updated as new information becomes available(3). This approach to learning has been attempted relatively infrequently on large scale empirically based non-linear models (the first application of the technique is described in Cooper and Young (1987); see also Westaway (1991)).

It is useful to distinguish between two different types of learning: closed-loop learning, where agents learn about the parameters of the decision rule or of the time series process generating the shock, and open-loop learning where agents form an expectation of the path for a particular variable which they sequentially update.

Closed-loop learning will be virtually identical to the parameter updating scheme using Kalman filtering described in Hall and Garratt (1992). The approach adopted here is slightly more general since it allows subjective prior probabilities to influence the parameters immediately after a regime change; of course, this shift in probability distribution is what is proxied by the arbitrary imposition of 'announcement effects'. In applying any form of learning to a macromodel, it may be more straightforward to assume that agents simply form expectations of open-loop trajectories (indeed, this is consistent with the open-loop Nash assumption which is typically adopted in the standard utility maximisation exercises which underpin most forward-looking equations in macromodels).

To implement model consistent learning in practice requires two key components:

(i) A model solution technique which allows sequential solution of the model under the assumption of model consistent expectations, but where each period, the private sector receives new information which requires expectations to adjust and the model to be re-solved over the remaining periods.

(ii) The specification of the subjective probability distribution; this will take the form of a probability tree (in reality, an infinite dimensional one) which specifies for each period the probability distribution for the parameter or expected variable of interest conditional on past history.

These two aspects are illustrated in the two examples that follow.

Example 1: An announced increase in government spending

Suppose that the government announces a 1 per cent increase in government spending which will last five years. We have already seen in chart 1 the rather extreme exchange rate response which results if this policy announcement is believed completely immediately.

On announcement of this policy, let us assume that the private sector can observe the first period outturn for government spending but must form an expectation of its profile over the rest of the five year period. In this example, we assume that in the first period, the private sector does not believe that the spending increase will be sustained, and that the shock will follow the time series equation:

QPAC(t) = α QPAC(t-1)

where α = 0.05. In reality, of course, the true value of α is unity but only the government knows this. Hence, the private sector are learning about α. This is a simple form of closed-loop learning. At the beginning of period 2, the private sector again observes that the shock is truly sustained; as a consequence, their subjective estimate of α is upgraded to α = 0.1. Let us assume that the private sector increase their estimate of α (which only applies to the periods remaining) by 0.05 for every period that the announced policy is carried out. Adopting this relatively crude updating strategy, the government's announcement will only be fully credible after five years (at which time the shock is complete anyway).
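
A small sketch of the expected spending profiles this rule generates, using only the numbers quoted above (a 1 per cent shock, with α starting at 0.05 and upgraded by 0.05 each period); the 20-quarter horizon, assumed here to match the five-year shock on a quarterly model, is the point at which α reaches unity:

```python
# A sketch of the expected spending profiles implied by the crude updating
# rule. The 1 per cent shock and the 0.05 step are from the text; the
# 20-quarter horizon (five years on a quarterly model) is assumed.

def expected_path(alpha, start, horizon):
    """Expected profile from period `start`: the current 1 per cent outturn
    is observed, then spending is believed to decline geometrically at alpha."""
    path = [1.0]
    for _ in range(start + 1, horizon):
        path.append(path[-1] * alpha)
    return path

horizon = 20
for period in range(1, 5):
    alpha = 0.05 * period            # belief upgraded by 0.05 each period
    profile = expected_path(alpha, period - 1, horizon)
    print(f"period {period}, alpha = {alpha:.2f}:",
          [round(v, 3) for v in profile[:4]])
# alpha reaches unity, i.e. full credibility, only at the twentieth period,
# by which time the five-year shock is complete anyway.
```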

The actual implementation of this type of learning solution is more complicated than the usual consistent expectations solution procedure since expectations are continually being falsified. As a consequence, a sequential or rolling forecast procedure needs to be adopted. A fuller description of how this 'stacked solution' technique can be adopted in a stochastic simulations context is given in Ireland and Westaway (1990) and Fair and Taylor (1990). This technique is necessary whenever expectations are formed in a forward-looking manner and when the information set on which those expectations are conditioned alters unexpectedly within the forecast period; in this case the 'news' which is obtained during the forecast horizon is that the government spending shock has turned out differently from expectations and hence that the subsequent expected path must change. Thus, the actual simulation procedure adopted is as follows (a schematic code sketch is given after the list):

(i) Shock the model with the expected government spending increase at the beginning of period 1 (i.e. 1 per cent increase in the first period, declining with [alpha] = 0.05 thereafter). Solve model in consistent expectations mode over full simulation horizon 1 to T. Save the solution for period 1 only.

(ii) Shock the model with the expected government spending increase at the beginning of period 2 (i.e. 1 per cent increase in the second period declining with [alpha] = 0.1 thereafter). Solve model in consistent expectations mode over remaining simulation horizon 2 to T. Save the solution for period 2.

(iii) Shock the model with the expected government spending increase at the beginning of period 3 (i.e. 1 per cent increase in the third period declining with [alpha] = 0.15 thereafter). Solve model in consistent expectations mode over remaining simulation horizon 3 to T. Save the solution for period 3.

(iv) and so on until all the solutions that have been saved from periods 1 to T give the final ex post outcome of the model simulation under the assumption of model consistent learning.
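
A schematic sketch of this rolling procedure is given below. The function solve_consistent() is a purely hypothetical stand-in for a full model consistent expectations solve, so only the control flow of steps (i)-(iv), not the economics of the National Institute model, is represented:

```python
# A schematic sketch of the stacked solution loop in steps (i)-(iv). The
# function solve_consistent() is a purely hypothetical placeholder for a
# full model consistent expectations solve.

def solve_consistent(shock_path, start, T):
    """Placeholder for an RE solve over periods start..T-1; here it simply
    returns the shock itself as the 'solution' for each period."""
    return {t: shock_path[t - start] for t in range(start, T)}

T = 20                                # quarters (assumed)
outcome = {}
for start in range(T):                # one re-solve per period, steps (i)-(iv)
    alpha = 0.05 * (start + 1)        # belief updated after each outturn
    expected = [1.0]                  # current outturn observed exactly
    for _ in range(start + 1, T):
        expected.append(expected[-1] * alpha)
    solution = solve_consistent(expected, start, T)
    outcome[start] = solution[start]  # save the first period's solution only

print("first four ex post outcomes:",
      {t: round(v, 2) for t, v in list(outcome.items())[:4]})
```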

Importantly, although each run is itself solved under the assumption of consistent expectations, ex post expectations are continually falsified, but by a lesser amount each period as agents gradually learn about the true policy. Comparison of QRE and QL in chart 1 shows the consequent effects on the exchange rate path. Because the true scale of the spending increase was not anticipated, the initial effect on the exchange rate and hence on prices was much less, and is correspondingly more plausible.

Example 2: Evolving credibility of exchange rate realignment intentions

The last example postulated a very simple closed-loop learning rule. This example gives a more complicated illustration of a Bayesian probability distribution where now it is assumed that agents form expectations of an open-loop trajectory for the exchange rate. Let us assume that the government wishes to evaluate the costs and benefits of devaluing the exchange rate by 5 per cent (a more comprehensive examination of this question is carried out in Westaway (1992)). It is assumed that, to begin with, the government's commitment to a fixed exchange rate versus the D-Mark is complete(4)(5). As soon as any devaluation is carried out, however, the commitment of the government to the new parity will be in doubt even if, as we assume, the government announces that interest rates will be set to hold the exchange rate at its target level against the D-Mark. As a consequence, the uncertainty regarding the future exchange rate level will be reflected in an interest rate differential which embodies this uncertainty. This credibility effect on expectations will depend upon the probability distribution as perceived by the markets of all possible outcomes for the exchange rate, following the initial devaluation in the exchange rate of 5 per cent.

In order to illustrate how this expected path for the exchange rate may have been arrived at, it is useful to make a number of stylised assumptions about the underlying probability distribution(6). In doing this, we are necessarily taking as given many complex economic and political factors which actually determine this probability distribution (as discussed in Britton (1991) for example). One particular hypothetical distribution which might plausibly occur immediately after a devaluation is characterised by the probability tree (which gives details for annual exchange rate changes) shown in table 1. It embodies the following stylised assumptions:

-- in any year the probability distribution is bi-modal in the sense that the exchange rate can either stay fixed or be realigned.

-- if the authorities do choose to realign again in the first year, they will devalue by another 5 per cent with probability 0.4; this causes interest rates to rise by 2 per cent in the year following the original realignment.

-- for every year that the authorities are observed to hold the exchange rate fixed (which it is assumed is done by concerted intervention within the ERM), the expected value of any future realignments declines, to zero by the fourth year.

-- every time the authorities do realign, the expectation that they will do so again increases.

-- if the authorities do hold the exchange rate fixed, the expectation that they will not depreciate again (i.e. their credibility) builds up gradually.

The probability tree contains all the relevant information for calculating expected exchange rate movements. Each 'branch' of the tree represents a possible outcome for the exchange rate in a particular year. The two branches at the top represent the alternative possibilities in the first year. The sixteen branches at the bottom of the tree represent the proliferation of possible outcomes by the fourth year. From this we can calculate the ex ante expected devaluation in the exchange rate for the first four years (which will be equal to the required interest rate differential) as expected at the beginning of the first year; this information is summarised in table 2(a). For example, the expected depreciation in year 2 as perceived at the beginning of year 1 is given by the probability of a small realignment of 3 per cent, given that there was no realignment in the first year (0.6 × 0.3 × 3), plus the probability of a depreciation of 5 per cent in year 2 which would happen if there had been a realignment in year 1 (0.4 × 0.45 × 5). This sums to 1.6 per cent as shown.
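
As a mechanical illustration, the sketch below reproduces this expected-value calculation over the two year-2 branches quoted in the worked example. Table 1 itself is not reproduced in this article, so the complementary branch probabilities are hypothetical placeholders, and the printed total reflects only the quoted branches rather than the full table 2(a) figure:

```python
# A mechanical reproduction of the worked expectation. Only the two year-2
# branches quoted in the text are encoded; the complementary probabilities
# are hypothetical placeholders for the unreproduced table 1.

# Year-1 branches: (probability, depreciation in year 1).
year1 = [(0.6, 0.0),    # exchange rate held fixed
         (0.4, 5.0)]    # 5 per cent devaluation

# Year-2 branches conditional on the year-1 branch: (probability, depreciation).
year2 = [
    [(0.3, 3.0), (0.7, 0.0)],     # after holding fixed: 3 per cent with p=0.3
    [(0.45, 5.0), (0.55, 0.0)],   # after realigning: 5 per cent with p=0.45
]

def expected_year2_depreciation(year1, year2):
    """Ex ante expected year-2 depreciation, summed over all tree paths."""
    return sum(p1 * p2 * d2
               for branch, (p1, _) in enumerate(year1)
               for (p2, d2) in year2[branch])

print("expected year-2 depreciation: %.2f per cent"
      % expected_year2_depreciation(year1, year2))
```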

This type of analysis becomes crucially important in the context of policy evaluation. From the perspective of the private sector, either as an agent in the economy or as a macroeconomic forecaster, it is necessary to use the ex ante prediction for the depreciation of sterling (as given in table 2(a)); for example this is the procedure usually adopted in the National Institute forecast where it is assumed that the exchange rate will depreciate in line with market expectations.

As with the earlier example, the path implied by the central expectation will never occur, so after one year expectations will have been wrong ex post. The probability tree then informs us how to revise our expectations of future exchange rate movements depending on which branch of the probability tree we have 'travelled along'. To give a specific example of how expectations may evolve, suppose the authorities chose not to realign during the first four years. This happens to be the outcome which is most likely to occur, happening with probability 0.173. Table 2(b) illustrates how exchange rate expectations are sequentially revised over the future as the private sector effectively learns about the government's policy intentions; the ex post exchange rate expectations are given by reading down the diagonal, i.e. 2, 1.2, 0.4, 0.0.

Thus, at the beginning of the forecast period, the exchange rate is expected to decline by 1.2 per cent in the third year of the forecast. However, by the time the third year begins and the authorities have been observed to hold the exchange rate fixed, the expected depreciation for that year is then revised downwards to 0.4 per cent (see the third row of table 2(b)). This may be interpreted as an increase in credibility of the authorities' fixed exchange rate commitment which will consequently allow interest rates to be cut in the fourth year by more than was envisaged at the beginning of the forecast period.

In fact, this example is of more than academic interest. By computing this path where credibility evolves gradually, we may be putting ourselves in the position of the Treasury forecasters who may know with certainty (for the sake of argument) that the exchange rate will not be realigned. This is a specific example of the general case where forecasters hold different expectations to the markets. The interesting corollary to this is that even if the Treasury were using the Institute model and had the same information regarding all variables, they should be producing a different forecast from ourselves because of their superior information on the true intended policy stance.

Of course, we can equally well illustrate the implications of a strategy of continuing to devalue by 5 per cent every period (which occurs with a lower ex ante probability of 0.0495). Table 2(c) shows how the interest rate differential now rises relative to previous expectations as the strategy of continual devaluation is gradually anticipated correctly.

To undertake this exercise on the full model is slightly more complicated than the stylised example for a number of reasons:

-- the updating of expectations is carried out every quarter. Rather than computing a probability tree over 16 periods (which would have 65,536 branches by the end of the fourth year) we adopt an updating rule for the expected depreciation which approximately retains the same properties as that used in the stylised example (see chart 3(a)). As the expected depreciation falls, so interest rates decline in line (chart 3(b)).

-- since the exercise is undertaken on a model of the whole economy we need to consider the expectations of other parts of the private sector such as wage bargainers, as well as the foreign exchange markets (for a full description of the treatment of expectations in the National Institute model, see NIESR (1992)). We assume, as in the base forecast itself, that the expectations held by these different groups do not differ, that is, they perceive the same probability distribution of realignments (for the implications of making a different assumption see Miller and Sutherland (1990)).

We can now illustrate the implications of a 5 per cent devaluation of sterling under different assumptions regarding the credibility of the policy stance. Three forecast outcomes are compared with the base 'no devaluation' forecast. Case A assumes that the government is believed completely when it promises never to devalue again, so that interest rates are unchanged and the exchange rate stays 5 per cent below base in line with market expectations. Case B shows the ex ante forecast immediately after the devaluation when credibility is lost as described above, and expectations of further depreciation are reflected in private sector behaviour (note that, by assumption, case B will never actually be observed ex post). Case C gives the ex post outcome which occurs if the government succeeds in sticking to its announced strategy of no further devaluation, but where the private sector only learns about this true policy stance gradually.

Charts 4(a)-(d) compare these outcomes for the exchange rate, interest rates, inflation and output, all relative to base. All cases show a large initial increase in inflation, the greatest occurring in Case B due to the expectation of future depreciation. Inflation is slightly higher in the learning outcome, case C, than in the full credibility case A, despite the higher interest rates in case C; this occurs because of the initial increase in inflationary expectations which only later evolves away. Unsurprisingly, the boost to output is highest in case A where interest rates do not rise, but less predictably it is lower in the ex post learning case C than in case B, since the higher exchange rate outweighs the benefits of lower interest rates. In all outcomes, the real variables such as GDP and the real exchange rate return to their base levels, albeit slowly, while all nominal magnitudes, e.g. prices, rise in the long run by 5 per cent.

From the policymakers' perspective, the relevant comparison is between case A and the base if credibility could be expected to be maintained, or between case C and the base if some loss in credibility was assumed to be inevitable. Of course, it is possible to make informal policy choices on the basis of the charts just described but Westaway (1992) evaluates the case for realignment more formally.

Conclusions

The two examples given above have illustrated the advantages of using what has been termed here the model consistent learning approach. It has allowed more plausible model simulation properties to be obtained, as in the example of the government spending shock, but at the same time has preserved the crucial role of forward-looking behaviour in allowing policy analysis to be carried out, here in the context of the realignment question. Of course, the introduction of learning techniques into the macromodeller's tool-kit provides as many questions as it answers. How do we characterise the private sector's subjective probability distribution relating to policy? How do they use this distribution to forecast in the face of new information? How do policymakers react to the fact that the private sector is learning? Importantly, however, this approach to learning provides a structural framework within which these concepts can be quantified, albeit tentatively. By examining the robustness of particular conclusions under a range of alternative assumptions, we can improve our understanding of how the economy operates.

REFERENCES

Barber, J. (ed.) (1982), Supplement to HM Treasury Macroeconomic Model Technical Manual, Government Economic Service Working Paper, No. 71.

Begg, DKH (1982), The Rational Expectations Revolution In Macroeconomics: Theories and Evidence, Philip Allan.

Bray, MM and Savin, NE (1986), 'Rational expectations equilibria: Learning and model specification', Econometrica, vol. 54, pp.1129-1160.

Britton, A (1991), 'Exchange rate realignments in the European Monetary System', National Institute Discussion Paper, New Series No. 1.

Bullard, JB (1991), 'Learning rational expectations and policy: A summary of recent research', Bulletin of the Federal Reserve Bank of St. Louis, February.

Cagan, P (1956), 'The monetary dynamics of hyperinflation' in M Friedman (ed), Studies in the Quantity Theory of Money, University of Chicago Press.

Cooper, A and Young, G (1987), 'Uncertainty, credibility and learning in macroeconomic models', paper presented to the HM Treasury Academic Panel, AP(87)5.

Driffill, J and Miller, M (1992), 'Learning about a shift in exchange rate regime', forthcoming in RJ Barrell and J Whitley (eds) Macroeconomic Policy Coordination: The ERM and Monetary Union, Sage.

Ericsson, NR (1991), 'Cointegration, exogeneity and policy analysis: An overview', Federal Reserve System International Finance Discussion Paper, No.415, November.

Fair, RC and Taylor, JB (1990), 'Full information estimation and stochastic simulation of models with rational expectations', Journal of Applied Econometrics, vol. 5, pp.381-392.

Hall, SG and Garratt, A (1992), 'Model consistent learning: The Sterling-Deutschmark rate in the London Business School model', LBS CEF Discussion Paper, No. 02-92.

Hall, SG and Henry, SGB (1985), 'Rational expectations in an econometric model: NIESR model 8', National Institute Economic Review, No. 114, pp.58-69.

Hendry, DF (1992), Econometrics: Alchemy or science?, Blackwell.

Ireland, J and Westaway, PF (1990), 'Stochastic simulation and forecast uncertainty in a forward-looking model', National Institute Discussion Paper No. 183.

Jordan, J (1992), 'Convergence to rational expectations in a stationary linear game', Review of Economic Studies, vol. 59, pp.109-123.

Keating, G (1985), 'The financial sector of the London Business School model', in D Currie (ed), Advances in Monetary Economics, Croom Helm, London.

Lucas, RE (1976), 'Econometric policy evaluation: a critique', in K Brunner and AH Meltzer (eds), The Phillips Curve and Labour Market, North Holland, Amsterdam.

Lucas, RE and Sargent, TJ (1981), Rational expectations and econometric practice, Allen and Unwin.

Marcet, A and Sargent, T (1989), 'Convergence of least squares learning mechanisms in self-referential linear stochastic models', Journal of Economic Theory, vol. 48, pp.337-368.

McCallum, B (1976), 'Rational expectations and the natural rate hypothesis: some consistent estimates', Econometrica, vol. 44, pp.43-52.

Miller, M and Sutherland, A (1990), 'The 'Walters critique' of the EMS: A case of inconsistent expectations', CEPR Discussion Paper No. 480.

Minford, APL (1979), 'A rational expectations model of the United Kingdom under fixed and floating exchange rates', in K Brunner and AH Meltzer (eds), On the state of macroeconomics, supplement to Journal of Monetary Economics.

Muth, JF (1961), 'Rational expectations and the theory of price movements', Econometrica, vol. 29, pp.315-335.

Nickell, S (1985), 'Error correction, partial adjustment and all that: An expository note', Oxford Bulletin of Economics and Statistics, vol. 47, pp.119-129.

NIESR (1992), National Institute Macroeconomic model, February.

Townsend, RM (1983), 'Forecasting the forecasts of others', Journal of Political Economy, vol. 91, pp.546-588.

Westaway, PF (1991), 'Modelling the evolution of sterling's credibility in the ERM', Annex to Home Economy chapter in National Institute Economic Review, May.

Westaway, PF (1992), 'To devalue or not to devalue?: An analysis of UK exchange rate policy in the ERM', National Institute Discussion Paper, New Series No. 16.

Wickens, MR (1982), 'The efficient estimation of econometric models with rational expectations', Review of Economic Studies, vol. 49, pp.55-68.

Woodford, M (1990), 'Learning to believe in sunspots', Econometrica, vol. 58, pp.277-307.

NOTES

(1) All the empirical work described in this paper is carried out on the February 1992 version of the National Institute macroeconometric model of the UK economy, see NIESR (1992).

(2) The true reduced form equation of the forward-looking structural model is also likely to be non-linear.

(3) A Bayesian approach to the updating of expectations has commonly been adopted in the context of small analytic models, as in Driffill and Miller (1992), for example.

(4) The question of how sterling's credibility might evolve from a position of incomplete credibility was analysed in Westaway (1991) using the same methodology as is used here.

(5) Consequently, the sterling-D-Mark short-term interest-rate differential is zero. Thus, we are assuming that the uncovered arbitrage condition holds exactly, implying that, ex ante, investors will be indifferent between holding sterling and D-Mark assets. In practice, this is fairly close to the assumptions actually adopted in the base forecast, so this simplifying assumption does not distort the analysis.

(6) The simplifying approach taken in continuous time analytic models, for example Driffill and Miller (1992), is to assume that realignments follow a Poisson process.