Nowcasting is not just contemporaneous forecasting.
Castle, Jennifer L.; Fawcett, Nicholas W.P.; Hendry, David F.
We consider the reasons for nowcasting, the timing of information
and sources thereof, especially contemporaneous data, which introduce
different aspects compared to forecasting. We allow for the impact of
location shifts inducing nowcast failure and nowcasting during breaks,
probably with measurement errors. We also apply a variant of the
nowcasting strategy proposed in Castle and Hendry (2009) to nowcast Euro
Area GDP growth. Models of disaggregate monthly indicators are built by
automatic methods, forecasting all variables that are released with a
publication lag each period, then testing for shifts in available
measures including survey data, switching to robust forecasts of missing
series when breaks are detected.
Keywords: Nowcasting; contemporaneous information; Autometrics; location shifts; impulse-indicator saturation; robust nowcasts; Euro Area GDP growth. JEL Classifications: C51; C52
1. Introduction
On 24 July 2009 the Office for National Statistics (ONS) published
GDP growth figures for the second quarter of 2009. The 0.8 per cent
decline in GDP induced severe forecast failure--the official data were
worse than any forecast. (1) Statistical agencies such as the ONS must
base their flash estimates on incomplete and uncertain data, hence there
is a need for nowcasts that combine both actual data for components that
are known with forecasts of components that are unknown. Often surveys
are used to 'infill' the unknown components to arrive at the
flash estimate (see e.g., Ashley, Driver, Hayes and Jeffery, 2005), a
reasonable strategy when the surveys are good leading indicators, but
such a methodology can prove disastrous in times of structural change.
The Financial Times (25 July 2009) reported that 'the ONS said that its estimates were even less reliable than normal because the economy's unpredictability meant its models had "broken down"'. (2) A similar problem has beset the National Institute's
nowcasting approach (see Mitchell, 2009). Any nowcasting strategy must
be robust to structural breaks, requiring a methodology that can both
detect structural breaks and rapidly adapt when such breaks occur. It is
precisely at the point at which accurate nowcasts are needed (turning
points) that the current nowcasting methods may be breaking down. This
paper considers the reasons for nowcasting and the implications for how
to produce robust nowcasts.
There are four practical reasons why nowcasting--'forecasting' the current state--is needed for economic aggregates, which play a key role in economic policy decisions. Unfortunately, each of these reasons is matched by a corresponding problem, as we now explain.
First and foremost, and the central basis for
'nowcasting', is that not all disaggregated contemporaneous
data are available at the time their information is required to
construct the relevant aggregate. Statistical agencies decide when to
publish preliminary estimates based on their view of the optimal
trade-off between timeliness and accuracy. For preliminary estimates of GDP, for example, the UK figures only become available around 24 days after the end of the quarter for which a 'nowcast' is required (see Reed, 2000, 2002), whereas the Euro Area preliminary estimates are published 45 days after the end of the quarter. This
'missing data problem' is ever present, so that aggregates
cannot be constructed just by adding up the relevant observed
disaggregates.
Secondly, many economic time series are themselves only
preliminary, or 'flash', estimates, which are subject to
potentially substantial revisions in due course as more information
accrues, so are not necessarily a reliable and accurate guide to current
conditions. Figure 1 (from Walton, 2006) demonstrates that initial
estimates of GDP growth are a poor guide to actual GDP growth outturns
(defined as the latest available vintage in Autumn 2005) with revisions
of more than 2 per cent for some quarters and magnitudes, directions and
turning points differing substantially between the two measures. Faust,
Rogers and Wright (2007) report that revisions to GDP announcements are
quite large in all G7 countries, and are fairly predictable in the UK,
mainly because of reversion to the mean, which they interpret as due to
removing measurement 'noise'. Garratt and Vahey (2006)
consider UK macroeconomic indicators and find that most preliminary
measurements are biased predictors, with some revision series affected
by multiple structural breaks, and Mitchell (2004) finds that poor
construction of nowcasts can explain much of the revision to UK GDP
growth. Thus, 'measurement error problems' are also pervasive,
and consequently, the 'observed data' cannot be taken at face
value, and may be no more accurate than nowcast values.
[FIGURE 1 OMITTED]
Thirdly, different components of the data entering the aggregates
for which nowcasts are required are unavailable in different periods
(see e.g., Clements and Hendry, 2003), and so are missing on a
non-systematic basis. Consequently, a consistent subset of information
is rarely available, inducing a 'changing database problem'
which is bound to affect any system for nowcasting.
Fourthly, even when actual values of the relevant variables are
available, nowcasts should still be produced. These would now allow for
immediate evaluation, to act as an 'early warning signal' when
nowcasts departed too radically from the measured series. That could
enable rapid action to be taken following any deterioration in the
performance of a nowcasting model or method, or warn of measurement
problems in the 'actual' series. Thus, a 'break
problem' always threatens to disrupt the accuracy of any nowcasts.
The interaction of these four problems poses major uncertainties
for nowcasters. At first sight, the difficulties seem similar to those
facing forecasters. However, a key difference is the presence of
contemporaneous observations--which logically could never occur in a
forecasting context --and there are many possible sources of such
evidence, including rapidly and frequently observed data on
disaggregates such as retail sales; correlates like road traffic and air
passenger numbers or energy consumption; surveys; and more recent
innovations including Google (see Choi and Varian, 2009) and prediction
markets. Castle and Hendry (2009) propose a framework for nowcasting
that seeks to address three of these issues, namely the 'missing
data', 'changing database' and 'break'
problems. Here we also consider 'measurement errors', and the use of contemporaneous data to try to ascertain whether any anomalies are due to measurement errors or to breaks. We also apply our
approach, building on the example in Ferrara, Guegan and Rakotomarolahy
(2008).
Given the need for nowcasts, section 2 considers the range of
information available. Then section 3 discusses the potential role for
automatic model selection to handle all the available information while
allowing for multiple past breaks at unknown dates, as well as
contemporaneous location shifts, which together entail that nowcasting
models always involve more variables than observations. This aspect
builds on the approach in Castle, Doornik and Hendry (2009), based on
Autometrics with impulse-indicator saturation for removing breaks,
outliers and past data contamination: see Doornik (2009a), Hendry,
Johansen and Santos (2008), and Johansen and Nielsen (2009). Section 4
discusses problems affecting economic nowcasting due to the trade-off
between timeliness and data quality, with section 5 discussing the
conflict between breaks and measurement errors, leading to an
application of the proposed nowcasting strategy to Euro Area GDP growth
in section 6. Section 7 concludes.
2. Range of information available
There are many possible sources of contemporaneous data, including
up-to-date values for some of the relevant disaggregates, rapidly and
frequently observed outcomes for variables such as retail sales;
correlated variables like road traffic and air passenger numbers or
energy consumption; qualitative surveys of consumers and businesses
about their plans and expectations; and more recent innovations
including Google (see Choi and Varian, 2009) and prediction markets.
Models that exploit related series, possibly in combination with
univariate time-series models, can help improve the accuracy with which
any missing data are estimated, by adding the proxy as an explanatory
variable in a model for the variable to be forecast. The advantage of
automatic Gets is that all covariate information can be included in the
general model at the outset and the data characteristics will determine
whether the explanatory variables are relevant in explaining the
individual disaggregate series. The model selection for the individual
forecast models could be undertaken for every new release of data, which
is feasible for automatic model selection. Hence, if a break is
detected, the model will be updated as soon as information on the break
is available. The robustness of the model specification and parameter
constancy can be tested alongside the break detection tests, but the
distinction between internal and external breaks discussed in Castle, Fawcett and Hendry (2009) points to different approaches even in those two cases.
2.1 Disaggregates
Castle and Hendry (2009) consider the first of these, and propose
how to utilise data on the subset of available disaggregates to
construct the desired aggregate in a framework that addresses the
'missing data', 'changing database', and
'break' problems, using Autometrics with impulse-indicator
saturation to remove past breaks, outliers and data contamination, and
robust forecasting devices to offset current location shifts (see
Hendry, 2006). However, they do not include additional variables.
Impulse-indicator saturation (IIS) adds an impulse indicator for
every observation to the candidate regressor set; Hendry et al. (2008)
establish one feasible algorithm, and derive the null distribution for
an IID process, and Johansen and Nielsen (2009) generalise their
findings to general dynamic regression models (possibly with unit
roots), and show that there is a small efficiency loss under the null of no breaks at a nominal significance level $\alpha$, despite investigating the potential relevance of $T$ additional variables for a sample of size $T$, when $\alpha T$ is small. (3)
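As a rough illustration of the mechanics of IIS (the full Autometrics algorithm uses a more elaborate block search and multi-path selection), the sketch below adds an impulse dummy for every observation in each half of the sample in turn, keeps those significant at level alpha, and re-estimates with the union of retained dummies. The split-half scheme, the plain OLS estimation via statsmodels and the function name are assumptions made purely for illustration.

    import numpy as np
    import statsmodels.api as sm

    def iis_split_half(y, X, alpha=0.01):
        # Saturate each half of the sample with impulse dummies, keep those
        # significant at level alpha, then re-estimate with the union.
        T, k = len(y), X.shape[1]
        retained = []
        for block in (range(0, T // 2), range(T // 2, T)):
            D = np.zeros((T, len(block)))
            for j, t in enumerate(block):
                D[t, j] = 1.0
            res = sm.OLS(y, np.column_stack([X, D])).fit()
            retained += [t for j, t in enumerate(block) if res.pvalues[k + j] < alpha]
        D_keep = np.zeros((T, len(retained)))
        for j, t in enumerate(retained):
            D_keep[t, j] = 1.0
        Z = np.column_stack([X, D_keep]) if retained else X
        return sm.OLS(y, Z).fit(), sorted(retained)

A dummy retained at the final observation is the signal used later (sections 4 and 6) to flag a possible forecast-origin break or measurement error.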
2.2 Covariate information
Variables like retail sales are observed more frequently (monthly)
and rapidly than aggregates like GDP (quarterly), and while they include
some of the information needed for expenditure measures of GDP, they are
also potentially correlated with other variables that are only available
with greater latency. For example, retail sales are published
approximately 32 days after the end of the reference period, whereas
consumption data, for which retail sales is a reasonable proxy, is only
released approximately 64 days after the end of the reference period
along with the first full GDP release (excluding the flash estimate).
Other proxies, such as new passenger car registrations, construction
output and industrial new orders are also available more rapidly than
components of the expenditure or income based measures of GDP.
2.3 Google query data
Choi and Varian (2009) show that Google Trends data can help
improve their forecasts of the current level of activity for a number of
different US economic time series, including automobile, home, and
retail sales, as well as travel behaviour. The associated Google query
variable is added to simple linear seasonal-AR models to measure the
additional 'predictive' power, and although the resulting
models are used for forecasting, their focus is on 'predicting the
present'. Other correlated contemporaneous variables include road
traffic and air passenger numbers or energy consumption and surveys of
consumers and businesses about their plans and expectations. In an
Autometrics approach, such additional data series would all be added to
the information set for the relevant aggregate (Doornik, 2009b,
illustrates this approach), or could be used in the Castle and Hendry
(2009) framework to nowcast missing disaggregates.
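To give a concrete flavour of that kind of regression (the exact specification in Choi and Varian, 2009, differs), a simple seasonal autoregression for a monthly activity series can be augmented with the contemporaneous query index; the variable names and lag structure below are illustrative assumptions, not their model.

    import numpy as np
    import statsmodels.api as sm

    def seasonal_ar_with_query(y, query, s=12):
        # Fit y_t = b0 + b1*y_{t-1} + b2*y_{t-s} + b3*query_t + e_t, so the
        # incremental 'predictive' power of the contemporaneous query index
        # can be read off its t-ratio or the change in fit versus the pure AR.
        Y = y[s:]
        X = sm.add_constant(np.column_stack([y[s - 1:-1], y[:-s], query[s:]]))
        return sm.OLS(Y, X).fit()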
The advantage of an automatic Gets framework that allows for more variables than observations is that very general initial models can be specified, even given the magnitude of the databases available, with impulse-indicator saturation to detect outliers and location shifts. The
selected model specification could be maintained to update within
quarters, but the models can be re-selected intermittently to update if
breaks have occurred. Furthermore, the quantity of data available will
enable measures of variance via the realised volatility of inter-day
activity, and possibly even complete distributions via a non-parametric
approach. The optimal level of aggregation for such data is an empirical
question that again can be tested within the Gets framework. Further
developments to handle mixed frequencies of data could draw on a
MIDAS-type of approach (see, inter alia, Ghysels, Santa-Clara and
Valkanov, 2004, Ghysels, Sinko and Valkanov, 2006, and Clements and
Galvao, 2008).
2.4 Prediction market data
Prediction markets for the relevant measures could be another
valuable source of ancillary information: see Wolfers and Zitzewitz
(2004), Gil and Levitt (2006) and Croxson and Reade (2008) inter alia.
These information markets, like Iowa Electronic Markets for political
outcomes and Betfair for sporting competitions, are claimed to have more
accurate predictions than polls, surveys, or expert judgement, and to
have predicted high profile events well, such as the size of President
Obama's victory. Although it is unclear how well they forecast
relative to robust econometric equations, as location shifts occur
intermittently (see e.g., Hendry and Reade, 2008a), the probabilities do
adjust very rapidly to new information. As participants are risking real
money with their views, prediction market real-time measures could be a
valuable addition to measuring and nowcasting the present state of the
economy. For example, the July 2009 Iowa Electronic Market is for bets
on the Federal Reserve's Monetary Policy decisions, which thereby
could indicate implicit knowledge about the state of the US economy.
2.5 Surveys
Qualitative survey information could be used directly to modify
estimates of the forecast origin values, or as possible additional
regressors, or as part of a signal extraction approach to estimating
missing data on the disaggregates, or as one of the devices to be
pooled. We doubt their likely efficacy as leading indicators following
the critiques in Emerson and Hendry (1996) and Diebold and Rudebusch
(1991), since their ex post performance is usually superior to the ex
ante, reflected in regular revisions of the indicator components of
indexes. However, we do test whether surveys are able to provide timely
detection of structural breaks in the empirical example in section 6.
3. Selecting nowcasting models
Castle and Hendry (2009) outline the use of automatic model
selection for selecting nowcasting models, allowing for all the
available information, multiple past breaks and contemporaneous location
shifts, building on Castle, Doornik and Hendry (2009). The selection
algorithm is automated in Autometrics with impulse-indicator saturation
(see Doornik, 2009a). The Gets methodology commences from a general
model specification. In a nowcasting context with multiple vintages of
data, this enables inclusion of highly correlated vintages. Gets can
handle perfectly collinear variables, and so will have power to select
between preliminary releases and revised releases of data.
Nowcasting models tend to rely on bridge equations that can only
handle a limited set of covariates, but are useful in handling mixed
frequency data. The bridge equations are specified at a higher frequency
than the variable to be nowcast, enabling a bridge between, say, monthly
and quarterly data. Salazar and Weale (1999) use interpolated estimates
of GDP, correcting for the impact of measurement error, to obtain
monthly data and use a VAR approach on the higher frequency data. They
find that the monthly model is useful for nowcasting quarterly GDP once
one or two months' hard data become available. Methods proposed in
the literature to get around the bridging problem include combining
predictions from many small models (see Diron, 2006, and Kitchen and
Monaco, 2003), common factor models (see Giannone, Reichlin and Small,
2008), mixed frequency interpolation in a regression context or state
space form (see Chow and Lin, 1973, for the former and Harvey and
Pierse, 1984, for the latter). Mitchell, Smith, Weale, Wright and
Salazar (2005) use an interpolation approach to nowcast monthly GDP
using monthly indicators and then aggregate up to obtain early estimates
of quarterly GDP. This is the approach used by the National Institute
of Economic and Social Research for nowcasting monthly GDP growth. A
Gets approach allows for large sets of covariates to be included in the
initial model specification, circumventing the need for bridging models.
The proposed nowcasting approach relies on well-specified
econometric models both to forecast the disaggregate covariates and
nowcast the aggregate. This requires all in-sample breaks to be
modelled. We use two methods to ensure breaks are captured. First,
impulse-indicator saturation (IIS) is applied to all selected models.
Castle, Doornik and Hendry (2009a) examine the ability of IIS to detect
multiple breaks, and show it can find up to 20 breaks in 100
observations. Secondly, all models are selected recursively to enable
breaks to be rapidly updated in the model specification. This procedure
is feasible in a real-time context due to automation of the selection
algorithm, which requires that it is fast and easy to implement. Castle,
Fawcett and Hendry (2009) show that recursive updating is beneficial
when there is an external break (rolling windows could also be used, but
can be costly if the size of the rolling window is small relative to the
full sample).
4. Problems in economic nowcasting
Nowcasting is characterised by a trade-off between timeliness and
data quality. More timely data are subject to larger revisions due to
initial measurement errors, but allow breaks at the forecast origin to
be detected more rapidly, enabling a switch to robust forecasting.
Hence, the 'optimal' publication lag is an empirical question,
and the views of statistical agencies differ from country to country.
Furthermore, data are released with different frequencies (e.g. GDP at
the quarterly frequency and retail sales at the monthly frequency).
Again, there is a trade-off as to the 'optimal' frequency to
model the data at. Although time disaggregation is beneficial for
detecting breaks sooner, Castle and Hendry (2008) show that the impact
of location shifts is not mitigated by higher frequency data; when the
objective is to forecast (say) a quarterly outcome one quarter ahead,
quarterly and monthly models are equally affected by forecast-origin
breaks. However, in nowcasting, some contemporaneous data are already
available for the period to be nowcast, so higher-frequency observations
could help.
There is a signal extraction problem at T as the final observation
will contain measurement error, so an 'extreme observation'
could be an outlier that will be revised or a more permanent location
shift. The two explanations are observationally equivalent with one data
point. Measurement errors in dynamic models lead to autocorrelated
residuals, but after the data are revised, there are only a few
observations at the end of the sample to detect such measurement errors;
full sample tests for autocorrelation will have low power, and often
congruency is anyway imposed in the selection procedure to ensure no
full-sample autocorrelation. Hence, residual analysis at the end of
sample is all that is available to decipher whether an
'extreme' value for the last observation is a measurement
error or a break. Section 4.1 outlines the characteristics of the last
observation, $y_{i,T}$, when there is measurement error or a
break, including the use of IIS. Section 4.2 discusses the inconsistency
between nowcasting methods designed to mitigate measurement errors and
those designed to be adaptable to structural breaks, and section 5
considers whether pooling can mitigate forecast failure when there is a
break.
4.1 Measurement errors and breaks
Let $\delta < 1$ denote a high-frequency sub-period, and $\hat{y}_{i,T|T-\delta}$ a forecast of $y_{i,T}$ based on information available at time $T - \delta$. There are $N$ disaggregate series, of which $J$ are known at $T$ and $N - J$ are unknown at $T$. Then if $\hat{y}_{i,T|T-\delta}$ differs significantly from $y_{i,T}$ for $i = 1, \ldots, J$, there are three plausible interpretations: the discrepancy is due to a structural break, to a serious measurement error, or to a combination of both at $T$.
First, consider the case of measurement error. Following Clements and Hendry (1998, ch.8), assume the estimation period consists of $t = 1, \ldots, T^{e}$, during which data are measured without error, and $s = T^{e}, \ldots, T$, when data are measured with error. This captures the idea that near the forecast origin, data are measured in real time, whereas data further back in the sample have been revised as more information is accumulated. Assume a simple autoregressive DGP for each disaggregate:

$y_{i,t} = \psi_i y_{i,t-1} + v_{i,t}$ for $t = 1, \ldots, T$; $i = 1, \ldots, J$ (1)

where $v_{i,t} \sim IN[0, \sigma^2_{v_i}]$ and $|\psi_i| < 1$. For the period $1, \ldots, T^{e}$, we assume a correctly specified model with known parameters, so the model coincides with (1), and for $s = T^{e}, \ldots, T$ the 'true' data, $y^*_{i,s}$, are measured with error, so we observe:

$y_{i,s} = y^*_{i,s} + \eta_{i,s}$ (2)
For convenience, assume $\eta_{i,s} \sim IN[0, \sigma^2_{\eta_i}]$ and $E[y^*_{i,s}\eta_{i,t}] = 0$ for all $s$, $t$. In practice, these assumptions may not hold; revisions often have a systematic and predictable component, which implies that the measurement error is also systematic. For example, it is possible that $E[\eta_{i,s}] \neq 0$, in which case the mean measurement error could be calculated from historical revisions.
For the period $T^{e}, \ldots, T$, the model in (1) becomes:

$y_{i,s} = \psi_i y_{i,s-1} + (v_{i,s} + \eta_{i,s} - \psi_i \eta_{i,s-1}) = \psi_i y_{i,s-1} + e_{i,s}$

where $e_{i,s}$ is the orthogonal component of the residual relative to $y_{i,s-1}$. When $\psi_i = 1$, the models for the disaggregates can be expressed for simplicity as:

$\Delta y_{i,s} = v_{i,s} + \eta_{i,s} - \eta_{i,s-1} = e_{i,s}$

and the residuals should exhibit the following properties over the period $s$ (usually just the last few observations, depending on data frequency):

$E[e_{i,s}] = 0$, $V[e_{i,s}] = \sigma^2_{v_i} + 2\sigma^2_{\eta_i}$ and $\rho_i = corr(e_{i,s}, e_{i,s-1}) = -\sigma^2_{\eta_i}/(\sigma^2_{v_i} + 2\sigma^2_{\eta_i}) < 0$

where $\rho_i$ denotes late-onset error autocorrelation, assuming homoskedasticity such that $\sigma^2_{\eta_i}$ is constant as $s$ approaches $T$, again a stringent assumption. Hence, if the last few residuals exhibit an increase in variance and autocorrelation, more weight can be placed on the hypothesis that the observation at $T$ can be partly attributed to measurement error.
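A small simulation sketch of this diagnostic, using the setup in (1)-(2); the AR coefficient, error standard deviations and the length of the unrevised tail below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_disaggregate(T=200, T_e=188, psi=0.8, sd_v=1.0, sd_eta=0.7):
        # AR(1) 'true' data as in (1); measurement error as in (2) is added
        # only over the unrevised tail of the sample, s = T_e, ..., T-1.
        y_star = np.zeros(T)
        for t in range(1, T):
            y_star[t] = psi * y_star[t - 1] + rng.normal(0, sd_v)
        y = y_star.copy()
        y[T_e:] += rng.normal(0, sd_eta, T - T_e)
        return y

    y = simulate_disaggregate()
    e = y[1:] - 0.8 * y[:-1]          # residuals at the known parameter value
    tail, body = e[-12:], e[:-12]
    print("tail s.d. %.2f vs in-sample s.d. %.2f" % (tail.std(), body.std()))
    print("tail lag-1 autocorrelation %.2f" % np.corrcoef(tail[:-1], tail[1:])[0, 1])

The inflated residual variance and negative first-order autocorrelation at the end of the sample are exactly the symptoms described above.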
The alternative hypothesis is that a location shift has occurred.
In some cases, future changes in economic policy may be anticipated and
so the error at T could be justified ex ante. Examples include changes
in VAT rates that are announced in advance, or disaggregate specific
policies such as prescription charges. Such breaks can be accounted for
in the nowcast using an intercept correction. Other location shifts at T
may be unpredictable, but IIS enables such shifts to be detected after
they occur. By including an impulse indicator at T in the general model
and selecting a specific model, the indicator will be retained if the
final observation exhibits a location shift relative to the full sample
mean. It is essential that impulse indicators are included for all
observations to account for shifts that occur in sample, otherwise late
onset shifts will be harder to detect. The final observation is
evaluated against the in-sample mean, so if the mean for t = 1, ..., T -
1 is incorrectly estimated due to unmodelled breaks, selection will have
incorrect retention properties, and $\hat{y}_{i,T}$ will have the incorrect mean and variance.
If an impulse is detected at T in one or more of the disaggregate
models, the J + 1, ..., N unknown disaggregates that are nowcast may
need to be adjusted, depending on the correlation across disaggregate
series and the correlation of measurement errors, both of which are
almost certainly non-constant, particularly when a break occurs.
Unfortunately, the two explanations for an 'extreme
observation' at T require quite different forecasting models,
leading to a tension between alternative solutions. Exponentially
weighted moving average (EWMA) schemes work well when there are
'regular' noise-like measurement errors. They can be shown to
be optimal for a random-walk process subject to random measurement
errors, in contrast to the intercept correction and differencing
approaches discussed above which offset location shifts, but exacerbate
the impact of measurement errors.
4.2 EWMA and intercept correction
The EWMA forecasting formula for a scalar time series $\{y_{i,t}\}$ when $h > 0$ is:

$\hat{y}_{i,T+h|T} = (1 - \lambda_i) \sum_{j=0}^{T} \lambda_i^{j} y_{i,T-j}$, (3)

where $\lambda_i \in (0,1)$, so with start-up value $\hat{y}_{i,1} = y_{i,1}$ and $y_{i,0} = 0$:

$\hat{y}_{i,T+1|T} = y_{i,T} - \lambda_i (y_{i,T} - \hat{y}_{i,T|T-1})$ (4)

This is a 'random-walk' forecast modified by the proportion $\lambda_i$ of the previous forecast error $(y_{i,T} - \hat{y}_{i,T|T-1})$. EWMA approximates an ARIMA(0,1,1):

$\Delta y_{i,t} = \epsilon_{i,t} - \lambda_i \epsilon_{i,t-1}$ (5)
which arises from a random walk in the true variable $y^*_{i,t}$ that is measured with an error $u_{i,t}$, so $y_{i,t}$ is observed:

$y^*_{i,t} = y^*_{i,t-1} + e_{i,t}$, $y_{i,t} = y^*_{i,t} + u_{i,t}$ (6)

and hence is a special case of a 'structural time series' model (see Harvey, 1993, and Koopman, Harvey, Doornik and Shephard, 1999), where $\lambda_i$ is determined by the relative magnitudes of the innovation error variance, $\sigma^2_{e_i}$, and the measurement error variance, $\sigma^2_{u_i}$.
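A minimal sketch of the device in (3)-(4) shows how slowly a large $\lambda_i$ absorbs a location shift; the smoothing parameter and the simulated end-of-sample shift are chosen purely for illustration.

    import numpy as np

    def ewma_forecasts(y, lam):
        # One-step EWMA forecasts from (4): f[t] is the forecast of y[t+1]
        # made at time t, i.e. f[t] = y[t] - lam * (y[t] - f[t-1]).
        f = np.empty(len(y))
        f[0] = y[0]                               # start-up value
        for t in range(1, len(y)):
            f[t] = y[t] - lam * (y[t] - f[t - 1])
        return f

    rng = np.random.default_rng(1)
    y = rng.normal(0, 1, 100)
    y[80:] += 5.0                                 # location shift near the origin
    f = ewma_forecasts(y, lam=0.8)
    print(np.round(y[81:86] - f[80:85], 2))       # errors shrink only slowly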
A central problem of nowcasting is balancing measurement errors and location shifts, and these 'conflict' in EWMA: (4) dampens recent forecast errors, whereas rapid adjustment is essential for location shifts, so the closer $\lambda_i$ is to zero the more rapidly a location shift is assimilated. Adding an impulse indicator to (6) at the forecast origin:

$y^*_{i,t} = \delta_i 1_{\{t=T\}} + y^*_{i,t-1} + e_{i,t}$
$y_{i,t} = y^*_{i,t} + u_{i,t}$

leads to:

$y_{i,T} = y_{i,T-1} + \delta_i + \epsilon_{i,T} - \lambda_i \epsilon_{i,T-1}$
$y_{i,T+1} = y_{i,T} + \epsilon_{i,T+1} - \lambda_i \epsilon_{i,T}$

then, for simplicity letting $(y_{i,T-1} - \hat{y}_{i,T-1|T-2}) = \epsilon_{i,T-1}$ with known $\lambda_i$, the forecast error at $T$ is $(y_{i,T} - \hat{y}_{i,T|T-1}) = \delta_i + \epsilon_{i,T}$. Hence, the next forecast becomes:

$\hat{y}_{i,T+1|T} = y_{i,T} - \lambda_i (\delta_i + \epsilon_{i,T})$ (7)

with error:

$y_{i,T+1} - \hat{y}_{i,T+1|T} = \lambda_i \delta_i + \epsilon_{i,T+1}$

so large $\lambda_i$ will induce a sequence of systematic errors and an MSFE of:

$E[(y_{i,T+1} - \hat{y}_{i,T+1|T})^2] = \lambda_i^2 \delta_i^2 + \sigma^2_{\epsilon_i}$
An alternative predictor can be obtained by adding an impulse indicator at the last observation to estimate $\delta_i$, as $\delta_i + \epsilon_{i,T}$, but then not extrapolating the indicator, so that:

$\hat{y}_{i,T|T-1} = y_{i,T-1} - \lambda_i (y_{i,T-1} - \hat{y}_{i,T-1|T-2}) + (\delta_i + \epsilon_{i,T})$

and hence $\hat{y}_{i,T+1|T} = y_{i,T}$ with a forecast error $(\epsilon_{i,T+1} - \lambda_i \epsilon_{i,T})$. The final-period impulse indicator essentially takes the forecast back to a random walk, with MSFE:

$E[(y_{i,T+1} - \hat{y}_{i,T+1|T})^2] = (1 + \lambda_i^2) \sigma^2_{\epsilon_i}$

so will yield a lower MSFE when $\delta_i^2 > \sigma^2_{\epsilon_i}$, that is, when the standardised location shift is greater than one error standard deviation, independently of $\lambda_i$.
Such an impulse indicator is akin to an intercept correction (IC), which
adds back a function of recent errors to offset a location shift, and
can be efficacious in situations in which the form, magnitude, timing
and sign of any shift are unknown (see e.g., Clements and Hendry, 1998).
The key difference here is that the IC is not extrapolated into the
future since the process is integrated, so is close to the approach
discussed in Hendry and Santos (2005).
4.3 Pooling of forecasts
The importance of pooling lies both in revealing divergences
between methods and averaging performance across methods. Since some
methods are less affected by structural changes than others, alternative
devices can deliver rather different forecasts in such a setting. In
particular, large differences from robust methods should prompt action.
Since the best forecasting strategy is inevitably unknown, pooling
across forecasts from a number of models, methods, transformations and
sub-sample periods can reduce serious inaccuracy: see, e.g., Newbold and
Harvey (2002), Fildes and Ord (2002), and Stock and Watson (1999).
Nevertheless, care is required in selecting the set of models to pool,
as shown by Hendry and Reade (2008b); an analogy would be the
inadvisability of mixing a glass of poison with glasses of pure water,
then drinking the combination.
Our proposed strategy uses an element of pooling. We suggest
averaging within blocks across the impulse indicators retained at T for
the known disaggregates to obtain an estimate of the break direction and
magnitude, which is incorporated into the nowcast for the unknown
disaggregates. Even if there is only weak evidence of a break, averaging
across the intercept-corrected model, the differenced model, the EWMA
and the unadjusted model will provide an 'insurance policy'
forecast as in Hendry (2004).
5. Breaks and measurement errors in nowcasting
Successfully resolving the conflict between the opposite responses
entailed to location shifts and measurement errors is central to
accurate nowcasting. A location shift necessitates a positively-signed
intercept correction (IC) for the nowcast period (see e.g., Clements and
Hendry, 1996). Conversely, a measurement error entails a negative
correction to offset that error, which should be ignored thereafter. But
a sudden unexpected shift at a point in time could correspond to either
happening. Given the inconsistency between methods to address
measurement errors and those that are robust to breaks, we now consider
whether additional information can help distinguish between these two
hypotheses. (4)
From a policy perspective, the issue is also serious: a location
shift (in inflation, say) usually necessitates a policy reaction,
whereas a measurement error will not. Thus, accurate discrimination
between these would be invaluable. Any analysis needs to allow for
location shifts and measurement errors before and after nowcasting in
real time. Several features suggest it may be possible to distinguish a
location shift from a measurement error shortly after the nowcast is
made, specifically within two periods. First, a measurement error at
time T does not 'carry forward' into the next period, although
its effects will do so in a dynamic process. Secondly, that data point
will usually be revised and should then be more accurate: thus,
revisions to nowcast errors at T + 1 should be informative about the
source of the error from T to T + 1. Thirdly, the next nowcast error
from T + 1 to T + 2 will again be large if the source is a location
shift but not for a one-off measurement error; thus, repeated
mis-forecasting is more indicative of a location shift. Fourth,
extraneous contemporaneous data may help in discriminating within a
couple of periods by whether a discrepancy from a pre-existing model
persists (likely to signal a break) or rapidly disappears (more likely
to be a measurement error). Finally, the variance of measurement errors
usually changes as the forecast origin approaches, and such an effect
should be capable of being modelled into the nowcasting procedure.
The literature on measurement errors in nowcasting is not large,
unlike that on the impacts of measurement errors on estimation
(summarised in e.g., Hendry, 1995). Some related approaches for forecast
scenarios are considered in Clements and Hendry (1998, 1999) and Hendry
and Clements (2003). Harrison, Kapetanios and Yates (2005) consider the
induced negative moving average in dynamic models as well as the error
variance effects caused by measurement errors, so argue for
down-weighting parameters estimated from recent data when forecasting.
We formulate the simplest model in section 5.1, then consider the impact on it of measurement errors only in subsection 5.2. Subsection 5.3 discusses location shifts, and contrasts their effects with those of measurement errors. Subsection 5.4 briefly records the impacts of incorrectly applying the solution for one problem to the other, and subsection 5.5 notes the way ahead.
5.1 The model
Denote the aggregate by $y_t = \sum_{i=1}^{N} w_i y_{i,t}$, where the $y_{i,t}$ are the disaggregates and the $w_i$ are the weights, which could be changing over time. The simplest setup that can adequately characterise the problem is:

$y_t = y^* + \epsilon_t$ (8)

where $\epsilon_t$ is an IID measurement error which we take to be:

$\epsilon_t \sim IN[0, \sigma^2_{\epsilon}]$ (9)

independently of $y^*$. Intermittently, and unpredictably, $y^*$ alters, which represents the location shift. We first consider the setting where $y^*$ is constant; then where $\epsilon_t = 0$ for all $t$ but $y^*$ alters; and finally, combine these. The nowcast is for time $T$ following an estimation sample from $1, \ldots, T - \delta$, for small $\delta$ which may be zero for some variables, but is unity for $y$, and we imagine a sequence of nowcasts as if in real time, at $T + 1$, $T + 2$ etc. Abstracting from dynamics and from stochastic shocks greatly simplifies the problem, but does not seem to sacrifice the essential features.
5.2 Measurement errors only
When $y^*$ is constant in-sample, the nowcasting problem can be resolved by using the conditional expectation of $y_{T+1}$ given information on it dated $T$ or earlier:

$E_{T+1}[y_{T+1} \mid I_{T+1}] = E_{T+1}[y^* + \epsilon_{T+1}] = y^*$ (10)

Here $I_{T+1}$ only includes $(y_1, \ldots, y_T)$, but more generally may also include ancillary data series, some of which may be contemporaneous at $T + 1$.

Even though $\epsilon_t$ is a measurement error, regression of $y_t$ on a constant delivers the 1-step minimum-MSE forecasting device:

$\hat{y}_{T+1|T} = \bar{y}_{(T)} = \frac{1}{T} \sum_{t=1}^{T} y_t$

where unconditionally from (8) and (9):

$\bar{y}_{(T)} = y^* + \frac{1}{T} \sum_{t=1}^{T} \epsilon_t$ (11)

Thus:

$E_{T+1}[\hat{y}_{T+1|T} \mid I_{T+1}] = y^*$

and:

$E[(y_{T+1} - \hat{y}_{T+1|T})^2] = \sigma^2_{\epsilon} (1 + T^{-1})$ (12)

so the full-sample estimate $\bar{y}_{(T)}$ of $y^*$ is an unbiased predictor, with the smallest variance of any sample-size choice, which is just $1/T$ larger than that of the nowcast from the known location $y^*$.
5.3 Location shifts only
When $\epsilon_t = 0$ for all $t$ but $y^*$ is not constant, then the best predictor of $y_{T+1}$ is $\hat{y}_{T+1|T} = y_T$. When no location shifts occur, then $\hat{y}_{T+1|T} = y_T = y^*$ is exact; and when one does occur at $T$, but not at either $T - 1$ or $T + 1$, denoted $\nabla_{(T)} y^*$, a one-period mistake is made:

$y_{T+1} - \hat{y}_{T+1|T} = y_{T+1} - y_T = \nabla_{(T)} y^*$

whereas the next nowcast at $T + 2$ is unbiased if the process now remains constant:

$y_{T+2} - \hat{y}_{T+2|T+1} = y_{T+2} - y_{T+1} = 0$.

Thus, the best 1-step predictor here is simply the lagged value, namely a random walk, which uses the smallest possible sample size of unity, ignoring all earlier observations.
The contrast between the outcomes in sections 5.2 and 5.3 is stark:
in the former setting of just measurement error, the full sample
estimates are best; and in the latter of just a location shift, only the
final observation is used. It would be unsurprising to find that when
both problems occur, a compromise is required, and we first illustrate
the conflict by considering using the 'best solution' to each
problem when it is applied to the other.
5.4 'Cross' solutions
First, consider the random-walk predictor $\hat{y}_{T+1|T} = y_T$ when there are no location shifts, and only measurement errors. Then:

$E_{T+1}[\hat{y}_{T+1|T}] = E_{T+1}[y_T] = y^*$

and:

$E[(y_{T+1} - \hat{y}_{T+1|T})^2] = E[(\epsilon_{T+1} - \epsilon_T)^2] = 2\sigma^2_{\epsilon}$ (13)

Thus, the predictor remains unbiased, but the variance increases by $(T - 1)/T$ relative to (12), essentially doubling.
Next, consider the full-sample mean $\hat{y}_{T+1|T} = \bar{y}_{(T)}$ when there are no measurement errors, but location shifts occur. Now we need to specify all in-sample location shifts, and let these be $\nabla_{(\tau)} y^*$ for $\tau \in \mathcal{T}$, so that after a shift:

$y_t = y^* + \nabla_{(\tau)} y^*$,

and hence:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].
The simplest case is that of one in-sample location shift at $\tau^*$ (say), so that:

$\bar{y}_{(T)} = y^* + \frac{T - \tau^*}{T} \nabla_{(\tau^*)} y^*$.

When no further breaks occur, $\bar{y}_{(T)}$ is a biased predictor of $y^*_a = y^* + \nabla_{(\tau^*)} y^*$:

$E[\bar{y}_{(T)}] - y^*_a = -\frac{\tau^*}{T} \nabla_{(\tau^*)} y^*$,

with a squared prediction error of:

$\left(\frac{\tau^*}{T}\right)^2 \left(\nabla_{(\tau^*)} y^*\right)^2$.
The impact of the in-sample break is under-estimated when not
modelled, because the sample spans periods before and after the break.
Either a congruent model is needed that models the break, or a shorter
sample period that post-dates the most recent break is required, as
would occur with a rolling-sample estimator. Assuming one of these
routes is followed, we need only consider the case where no previous
break contaminates the model in use, and focus on the measurement error
and location shift at T.
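The conflict can be illustrated by a small Monte Carlo under the static setup (8)-(9), normalising $y^*$ to zero in-sample and imposing a single shift at the end of the sample; the sample size, error standard deviation and shift magnitude below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    def rmse_of_devices(T=100, sd_eps=1.0, shift=0.0, reps=5000):
        # Compare the full-sample mean (optimal under measurement error only)
        # with the last observation (optimal under location shifts only) when
        # a shift of size `shift` occurs at T and persists to T+1.
        err_mean, err_last = [], []
        for _ in range(reps):
            y = rng.normal(0, sd_eps, T + 1)
            y[-2:] += shift
            err_mean.append(y[-1] - y[:-1].mean())
            err_last.append(y[-1] - y[-2])
        rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
        return rmse(err_mean), rmse(err_last)

    for shift in (0.0, 3.0):
        print(shift, rmse_of_devices(shift=shift))

With no shift the full-sample mean dominates; with a sizeable shift the last observation dominates, which is the compromise the next subsection addresses.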
5.5 Measurement errors and location shifts
A number of interacting features suggest it may be possible to
distinguish a break from a measurement error after about two periods. Of
course, a systematic measurement error will also act like a break, and a
one-off shift like a measurement error, which will 'confuse'
the analysis, as would a measurement error followed by a break of a
similar magnitude and sign. Nevertheless, there are four sources for distinguishing breaks from measurement errors (a stylised decision sketch follows the list):
a) the measurement error does not 'carry forward' into
the next period;
b) the previous data point will usually be revised, and should then
be more accurate;
c) the second nowcast error, using $\bar{y}_{(T+1)}$ at $T + 2$, will again be large for a break, but not for a one-off measurement error;
d) the variance of measurement errors changes as the nowcast origin
recedes.
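A stylised decision rule based on points (a)-(c) might look as follows; the threshold, the sign checks and the fall-back to pooling are illustrative assumptions rather than a formal test.

    def classify_extreme_error(err_T1, err_T2, revision_T1, sigma, k=2.0):
        # err_T1      : nowcast error at T+1
        # err_T2      : nowcast error at T+2 from an unadjusted device
        # revision_T1 : revised minus preliminary value of y_{T+1}
        # sigma       : estimate of the measurement-error standard deviation
        large = lambda e: abs(e) > k * sigma
        if not large(err_T1):
            return "no anomaly"
        if large(err_T2) and err_T1 * err_T2 > 0:
            return "location shift: repeated same-signed nowcast errors"
        if large(revision_T1) and err_T1 * revision_T1 < 0:
            return "measurement error: the revision offsets the discrepancy"
        return "indeterminate: pool robust and unadjusted nowcasts"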
Thus, we assume (8) and (9) operate till time $T$; then, letting $\hat{y}$ etc. denote a preliminary measurement:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where there are no unmodelled in-sample shifts, but:

$y^*_a = y^* + \nabla_{(T+1)} y^*$

and after data revision at $T + 2$:

$y_{T+1} = y^*_a + \epsilon_{T+1}$ (14)

Then, from above, $\bar{y}_{(T)}$ provides the minimum-MSE estimator of $y^*$, with variance $\sigma^2_{\epsilon} T^{-1}$ on average; $y_{T+1} - \hat{y}_{T+1}$ estimates the revision error, independently of $y^*$ being constant or shifting; $\hat{y}_{T+1} - y_T$ and later $y_{T+1} - y_T$ both estimate $\nabla_{(T+1)} y^*$, albeit imprecisely; and $V_{T+1}[\hat{y}_{T+1}]$ and $V_{T+2}[\hat{y}_{T+2}]$ are likely to be larger than $\sigma^2_{\epsilon}$. Since $(y_{T+1} + \hat{y}_{T+2})/2$ estimates $y^*_a$ with a variance of:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

combining this with $\bar{y}$ estimated up to $T$, and the estimate of $\nabla_{(T+1)} y^*$ gained from $y_{T+1} - y_T$, yields

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (15)

which also estimates $y^*_a$, with a variance of:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
[TABLE 1 OMITTED]
The weighted estimator (15) has a lower variance than [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], which could be satisfied, given point (b) above. Signal extraction suggests using both the latest $y_{T+1}$ and $y_T$, where the weights vary with the measurement error variance; Fawcett (2008) finds that this is the case for dynamic models, in a similar setting.
6. Nowcasting Euro Area GDP growth
In this section, we undertake a nowcasting exercise using a variant
of the nowcasting strategy in Castle and Hendry (2009). That paper
proposed a framework to handle a large set of disaggregate variables
with non-synchronous release dates. Here, we compute nowcasts of Euro Area GDP growth directly from a small subset of the data. The data have varying release dates, and we exploit the
timely release of survey data to test for location shifts at the nowcast
origin. Ferrara et al. (2008) and Angelini, Camba-Mendez, Giannone,
Runstler and Reichlin (2008) propose the use of bridge equations to
exploit monthly data in forecasting quarterly time series. In these
models, each bridge equation is a forecasting model for GDP growth
conditioning on different information sets. We utilise the bridge
equations in a different way, instead forming blocks of variables to
forecast the disaggregate monthly series directly. The blocks group
together variables with close linkages such that a break in one series
is likely to spread through to the other series, enabling tests of
breaks within each block. The disaggregate monthly models are selected
using Autometrics and the forecasts from these models are used to
compute nowcasts of GDP growth. Section 6.1 reviews the data, 6.2
outlines the methodology, 6.3 discusses the model specifications, 6.4
reports the nowcast results, and 6.5 considers the use of judgement to
distinguish between location shifts and measurement errors.
6.1 Data
The appendix lists the data and sources, along with the
transformations undertaken to remove stochastic trends, and the lag in
releasing the data after the reference month. We compute nowcasts for
Euro Area quarterly GDP growth, ([DELTA][y.sub.t]), using the real-time
data provided by EABCN. (5) Quarterly observations are indexed by t, and
[tau] denotes the monthly index, with t = 3[tau]. There are three
releases of quarterly GDP: the first flash estimate of Euro Area GDP
growth is published approximately 43 days after the end of the reference
quarter, with two further revised releases in the following consecutive
months. Hence, quarterly GDP estimates for [y.sub.t] are released at
[tau] +2, [tau] +3 and [tau] +4. We denote these three vintages of data
as [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] for the flash
estimate, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] as the
release in the month following the flash estimate and [MATHEMATICAL
EXPRESSION NOT REPRODUCIBLE IN ASCII] as the third release for the
reference quarter. (6) Survey data are more timely, published at the end
of, or just a few days after, the reference month. We assume all survey
data are available at the end of the reference month, so the publication
lag is 0 for these indicators. Other data such as industrial production,
financial variables, unemployment, etc. are released with an
intermediate lag of between one and three months.
We calculate the nowcasts over the period 2003Q2-2008Q1, giving a
total of 20 nowcast observations. For each quarter, we compute three nowcasts of quarterly GDP growth, denoted horizon 1 ($H_1$) for the month at the end of the reference quarter, horizon 2 ($H_2$) for the month after the end of the reference quarter, and horizon 3 ($H_3$) for the second month after the end of the reference quarter. At the end of the reference quarter, $H_1$, survey data for the reference quarter will be available, but additional conditioning information may not be available (although Google Trends suggests a potential source, as discussed earlier). By $H_3$, the flash estimate of GDP will be available. Table 1 summarises the nowcast horizons and the timing of GDP releases.
6.2 Nowcasting methodology
The nowcasts of GDP growth are computed in two steps. First,
forecasts of the monthly variables that are released with a lag are
obtained using a block selection approach based on bridge equations. The
standard procedure in the literature is to use univariate time-series
models, but we use the automatic general-to-specific selection procedure
with IIS to allow for covariates as well as breaks that have occurred
during the in-sample period. The monthly variables are available with
varying publication lags so we use direct forecasts to compute the
required step-ahead forecasts, resulting in a set of monthly forecasts
for the nowcast horizon. The forecast models are selected and estimated
recursively using a 1 per cent significance level, resulting in 60 monthly forecasts for the period 2003M6-2008M5.
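A stripped-down sketch of one such direct forecast for a single monthly indicator follows; the lag length, the single covariate and the plain OLS estimation stand in for the Autometrics selection with IIS that is actually applied, so they are assumptions for illustration only.

    import numpy as np
    import statsmodels.api as sm

    def direct_forecast(x, z, h, p=3):
        # Direct h-step forecast of a monthly indicator x from p lags of x and
        # of a timelier covariate z, all dated t-h or earlier; re-run on the
        # full available sample whenever new data are released.
        T = len(x)
        rows = [np.r_[x[t - h - p + 1: t - h + 1], z[t - h - p + 1: t - h + 1]]
                for t in range(h + p, T)]
        fit = sm.OLS(x[h + p:], sm.add_constant(np.asarray(rows))).fit()
        newest = np.r_[1.0, x[T - p:], z[T - p:]]
        return float(fit.predict(newest.reshape(1, -1))[0])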
Impulse-indicator saturation is undertaken jointly with selection
of the forecasting models. If an impulse indicator is found to be
significant for the final observation in-sample, it is recorded.
Although the data are not as timely as survey data, significant impulses
could be an indication of either a location shift or measurement error.
Furthermore, for each of the surveys that are available without a lag,
we use the actual realisation, but we also test for impulse indicators
at the final in-sample observation, again to detect possible structural
breaks. Adjustments can be made to the nowcasts of GDP growth if there
is strong evidence of a break due to significant final-observation
indicators. Adjustments will depend on judgement as to whether the break
detected in one or more disaggregate series is likely to be correlated
with a break in GDP growth: see section 6.5.
The second step is to compute nowcasts of quarterly GDP growth.
Three quarterly time series are created for the monthly variables.
Denoting $\mathbf{x}_{\tau} = (x_{\tau}, \ldots, x_0)'$ as the past history of a monthly series $x_{\tau}$, the three quarterly series are given by:

$x^{(1)}_t = x_{3t-2}$, $x^{(2)}_t = x_{3t-1}$, $x^{(3)}_t = x_{3t}$,

such that $x^{(1)}_t$ corresponds to data in the first month of the quarter, $x^{(2)}_t$ corresponds to data in the second month of the quarter and $x^{(3)}_t$ to the final month of the quarter. All three quarterly series
are included in the general unrestricted model (GUM) for GDP growth,
thus avoiding the aggregation issue for monthly data, and allowing all
monthly data to be included in the quarterly model if all three series
are retained in the selection process. The nowcast models include all
in-sample data for the conditioning variables and the latest available
vintage of GDP growth. The models are selected recursively using
Autometrics with IIS for each nowcast origin and each nowcast horizon,
resulting in 60 model specifications. The nowcasts are then computed by
plugging in the forecasts for those variables that are selected but
still unknown at the nowcast horizon.
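A sketch of this construction for one monthly indicator (assuming the series starts in the first month of a quarter; the function name is illustrative):

    import numpy as np

    def monthly_to_quarterly_columns(x):
        # Create the three quarterly series x^(1), x^(2), x^(3) by taking the
        # first, second and third month of each quarter, so that all monthly
        # information can enter the quarterly GUM without aggregation.
        n_q = len(x) // 3
        x = np.asarray(x[: 3 * n_q], dtype=float).reshape(n_q, 3)
        return x[:, 0], x[:, 1], x[:, 2]

    x1, x2, x3 = monthly_to_quarterly_columns(np.arange(24.0))  # 24 months -> 3 x 8 quarters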
The nowcasts are compared to the actuals which are given by the
final available vintage of data, released in December 2008. Clements and
Galvao (2008) discuss which vintage of data to use for the actuals
depending on whether the forecaster's aim is to forecast an early
release, or later revised releases of the data, see also Corradi,
Fernandez and Swanson (2009). For the purpose of nowcasting which aims
to estimate the current position of the economy, evaluating against the
final available vintage seems appropriate. However, revisions to the
actuals will be ongoing and this may impact on nowcast accuracy more
towards the end of the nowcast sample.
[TABLE 2 OMITTED]
6.3 Empirical specification
We consider three nowcast model specifications including two
quarterly benchmark models: (A) a univariate model; (B) a model with
covariates; and (C) the disaggregate approach outlined above. As all
models are selected recursively using Autometrics, the model
specifications and parameter estimates will change with each nowcast
origin. Hence, we report the GUM specifications for each model.
(A) As a benchmark, we compute the forecasts from univariate models
using just the quarterly GDP growth data. The GUM for each horizon is
given by:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (16)
Lags of vintages 1 and 2 are included in the GUM despite vintage 3
data being available, as this will pick up any benefits to forecasting
the revision process. The vintages of data will be highly correlated,
but Doornik (2009a) demonstrates the viable properties of Autometrics
for correlated data. Note that only the GUM for $H_3$ includes contemporaneous data, as the flash estimate is available by the third horizon. The nowcasts are computed as 1-step ahead forecasts from the recursively selected models for each horizon.
(B) As a second benchmark, we augment the information sets in (16) with conditioning variables. The set of conditioning variables includes $x_t$ = ($\Delta ip_t$, $\Delta ipc_t$, $\Delta SCI_t$, $\Delta rs_t$, $\Delta CARS_t$, $\Delta MCI_t$, $\Delta ESI_t$, $\Delta CCI_t$, $\Delta RCI_t$, $\Delta eer_t$, $\Delta eurox_t$, $\Delta oecdli_t$, $\Delta EUROCOIN_t$, $\Delta ur_t$, $\Delta m1_t$ and $SPREAD_t$), and we use the quarterly time series of the monthly data, $x^{(1)}_t$, $x^{(2)}_t$ and $x^{(3)}_t$, for each variable. (7) As all variables in $x_t$ are released with a 3-month lag or less, we include all variables at $t-1, \ldots, t-4$. The nowcasts are computed as 1-step ahead forecasts from the recursively-selected models for each horizon.
(C) The disaggregate nowcasts are computed as outlined in section
6.2. The first step is to compute forecasts for the monthly variables
that are available with a lag. Table 2 records the GUM specification,
in-sample equation standard error, and root mean square error (RMSE) for
each monthly variable. The conditioning variables are based on the
bridge specifications in Ferrara et al. (2008).
The second stage is to select the nowcast models for GDP growth. The GUM specifications for the three-horizon nowcasts of GDP growth include [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], an intercept and impulse-indicator dummies. This GUM is augmented with the lagged dependent variables given in (17) for each horizon, resulting in 150, 151 and 152 variables in the GUM for $H_1$, $H_2$ and $H_3$ respectively, plus an additional $T$ indicator dummies.

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (17)
The models are selected recursively at the 1 per cent significance level and the nowcasts are computed using:

$\widehat{\Delta y}_t = \hat{\beta}' \bar{x}_t + \hat{\gamma}' \hat{x}_t + \hat{\kappa}' \iota_t$ (18)

where $\bar{x}_t$ denotes the retained variables that are known at $t$ for horizon $k$, $\hat{x}_t$ denotes the forecasts for the retained variables that are unknown at $t$ for horizon $k$, and $\iota_t$ denotes the retained impulse indicators. For example, if the nowcast model for $H_2$ retained $\Delta ip_t$, this would be replaced with the forecast $\widehat{\Delta ip}_t$, as industrial production figures are released with a 3-month lag and horizon 2 is computed for the month after the end of the reference quarter.
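Schematically, a nowcast from (18) is just a linear combination of the three groups of retained terms; the coefficient values and ordering below are purely illustrative.

    import numpy as np

    def plug_in_nowcast(coef, x_known, x_forecast, indicators):
        # coef is ordered to match [known regressors, forecast regressors,
        # retained impulse indicators]; the indicators are zero out of sample
        # unless a forecast-origin shift is being imposed.
        z = np.concatenate([x_known, x_forecast, indicators])
        return float(coef @ z)

    coef = np.array([0.2, 0.1, -0.05, 0.3, 0.15, 0.4])
    print(plug_in_nowcast(coef, np.array([0.4, -0.1, 0.2]),
                          np.array([0.3, 0.1]), np.array([0.0])))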
[FIGURE 2 OMITTED]
[FIGURE 3 OMITTED]
We compare three different disaggregate forecast methods for $\hat{x}_t$. First, we compute ex post nowcasts taking the disaggregates as known, denoted (C1). Hence, all retained variables will be in the set $\bar{x}_t$. This provides a benchmark against which to assess the disaggregate nowcasts, but would be infeasible in a real-time setting. Second, we use the forecasts from the forecasting models reported in table 2, (C2); and finally, we use a robust forecasting device to compute the disaggregate forecasts, given by the double-differenced device (DDD), (C3). The DDD is given by:

$\hat{x}_{\tau} = x_{\tau - l}$ (19)

for data released with lag $l$. The forecast device in (19) is robust to location shifts once $l$ periods have elapsed after a break, but differencing will usually increase the error variance of the forecast.
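A one-line sketch of the device (19) for filling the unreleased months of a single indicator (the function name and example values are illustrative):

    import numpy as np

    def ddd_fill(x_released, lag):
        # Device (19), x_hat_{tau} = x_{tau - lag}: each of the `lag` unreleased
        # months is forecast by the value observed `lag` months earlier, i.e.
        # the last `lag` released observations of the differenced series.
        return np.asarray(x_released, dtype=float)[-lag:].copy()

    print(ddd_fill([0.2, -0.1, 0.05, 0.1], lag=3))   # -> [-0.1  0.05  0.1]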
6.4 Real-time nowcast results
Table 3 reports the ratio of RMSE for methods (B) and (C) relative
to the univariate benchmark model (A) (actual RMSE for (A) reported in
italics).
There are substantial benefits to using the disaggregated approach at all horizons. If the monthly indicators were known at the nowcast origin, the RMSE more than halves for the horizons before the flash estimate is released. However, as the monthly indicators are unknown they must be forecast. Despite the additional uncertainty in forecasting the indicators, the RMSE is reduced by more than 20 per cent at $H_1$ and 35 per cent at $H_2$, when more monthly indicators are known. By $H_3$ all monthly indicators are known (other than industrial production, which was not retained in the nowcast models) so all disaggregate methods are equal. However, augmenting the flash estimate with monthly indicators still yields a 17 per cent improvement in RMSE. The differenced device for forecasting the monthly indicators yields a small improvement over the forecasting models for (C2), despite the forecasting models including IIS and recursive selection, both of which should account for location shifts. There is no benefit to augmenting the univariate model with additional covariates at the quarterly frequency, and at $H_2$ it is costly, suggesting that the main benefit of the disaggregate approach comes from incorporating higher-frequency data that are available with more timely release dates.
Figure 2 reports the mean absolute nowcast error (MANE) and RMSE for each method at all three horizons. The MANE for the univariate models is large for horizons before the flash estimate is released, but conditioning on covariates, even at the quarterly frequency, yields a substantial improvement. The MANE is slightly larger for the disaggregate nowcasts by $H_3$, but the smaller variance yields a smaller RMSE. The RMSE of the disaggregate model with known covariates (C1) at $H_1$ is as small as any nowcast at $H_3$, which suggests that the flash estimate does not yield additional information over the monthly indicators; indeed, the flash estimate is likely to condition on the same information set.
Figure 3 records the nowcast errors over the 5-year nowcast period
for each horizon (all panels have the same scale). There is no
systematic nowcast error for any of the models. Models (A) and (B) have
a higher variance, with model (A) producing downward biased nowcasts on
average, but there is no predictable component to the nowcast errors. By $H_3$, all nowcast errors follow a similar pattern.
To summarise, we find that using higher-frequency monthly data to
produce more timely nowcasts of Euro-Area GDP growth yields substantial
benefits in terms of RMSE, particularly over short horizons. The main
benefits come from rapid incorporation of information available at the
monthly frequency, accounting for outliers in the GDP growth models
using impulse-indicator saturation, and the use of recursive selection
and estimation to update the models rapidly. We also find that there is
little benefit to using well-specified econometric models to forecast
the monthly indicators as opposed to using a robust forecasting device
such as (19).
[FIGURE 4 OMITTED]
6.5 Measurement errors and structural breaks: the role of judgement
We now consider whether judgement can improve the nowcast
performance of the disaggregate model. Two possible explanations for
poor nowcast performance could be measurement errors or location shifts;
section 5 discussed under what conditions it is possible to distinguish
between these two possible explanations. In our nowcast exercise, the
flash estimate of GDP is only available in [H.sub.3], so detecting a
measurement error ex ante is very difficult as there is no revision
process upon which to base the decision. However, when it comes to
forecasting GDP growth, both the new flash estimate and the past
revision will help to determine whether a shift is due to a temporary
measurement error or a more permanent structural break. Ex post, one can
look at the revisions process to see how informative revisions are, to
detect measurement errors versus breaks more rapidly. Mitchell (2009)
demonstrates that the weights placed on 'hard' and
'soft' indicators shift in times of structural change, so
detection of breaks is essential for the nowcasting model
specifications.
Table 4 reports the magnitudes of revisions over the nowcast
horizon, where [v.sub.f] is the final available vintage (December 2008).
Revisions between the flash estimate and vintages 2 and 3 are small.
However, revisions between the flash estimate and the final vintage are
more than ten times larger in absolute value (in keeping with figure 1),
suggesting that there are substantial measurement errors in the first
few vintages of data, but there is little information content in the
revisions in the first few months. This implies that even in a
nowcasting context, detecting measurement errors close to the nowcast
origin is very difficult, and there is little additional information in
the second and third vintages of GDP data. Figure 4 adds detail to the
table, recording GDP growth over the nowcast period (given by the final
vintage) along with three revisions to the flash estimate: to the second
vintage, to the third vintage and to the final vintage. There is no systematic
component to the revisions process over this period, so again the
information content to detect measurement errors using the revisions
process is limited.
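The summary statistics in table 4 are straightforward to compute from
the real-time vintages; a minimal sketch with hypothetical vintage data
(not the published figures) is:

import numpy as np

def revision_stats(later_vintage, flash):
    # Mean error and standard deviation of revisions to GDP growth,
    # measured as a later vintage minus the flash estimate (per cent).
    rev = np.asarray(later_vintage) - np.asarray(flash)
    return rev.mean(), rev.std(ddof=1)

# Hypothetical quarterly growth rates by vintage over an evaluation window.
flash = np.array([0.6, 0.5, 0.7, 0.4, 0.8])
v2    = np.array([0.6, 0.5, 0.7, 0.5, 0.8])   # second vintage
vf    = np.array([0.5, 0.4, 0.9, 0.3, 0.7])   # final available vintage

for label, vintage in (("v2 - flash", v2), ("vf - flash", vf)):
    me, sd = revision_stats(vintage, flash)
    print(label, round(me, 4), round(sd, 4))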
We next assess whether we can use breaks in the monthly indicators
to detect structural breaks in GDP growth. Impulse-indicator saturation
at 1 per cent is applied recursively to models for all the monthly
variables, including the surveys known at the forecast horizon, and any
significant final-observation indicators are recorded. No significant
impulse indicators were found over the nowcast horizon for
[DELTA][ip.sub.[tau]], [SCI.sub.[tau]], [DELTA][rs.sub.[tau]],
[CCI.sub.[tau]], [RCI.sub.[tau]], [DELTA][eer.sub.[tau]],
[DELTA][eurox.sub.[tau]], [DELTA][oecdli.sub.[tau]] and
[DELTA][EUROCOIN.sub.[tau]], but significant impulses were found for:
[DELTA][ipc.sub.[tau]] = 2005(12); 2006(1)
[DELTA][CARS.sub.[tau]] = 2005(5); 2005(6)
[MCI.sub.[tau]] = 2003(12)
[ESI.sub.[tau]] = 2003(12)
[DELTA][ur.sub.[tau]] = 2006(4); 2007(7); 2007(9); 2008(1)
[DELTA][m1.sub.[tau]] = 2005(1); 2005(6); 2006(12)
[SPREAD.sub.[tau]] = 2007(8)
Castle and Hendry (2009) propose a tighter significance level (0.5
per cent [less than or equal to] [alpha] [less than or equal to] 0.1 per
cent) for selecting impulse indicators. Using a 0.1 per cent
significance level we find significant impulse indicators for:
[DELTA][ur.sub.[tau]] = 2006(4);2007(7);2008(1)
[DELTA][m1.sub.[tau]] = 2005(6)
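As a rough stand-in for the recursive impulse-indicator saturation
performed with Autometrics (which we do not reproduce here), the Python
sketch below simply adds a dummy for the final observation to a small
autoregression and tests its significance; the function name, lag
length and simulated data are illustrative assumptions:

import numpy as np
import statsmodels.api as sm

def end_of_sample_impulse(y, alpha=0.01, lags=1):
    # Simplified check, not Autometrics IIS: regress y on its own lags,
    # a constant and a dummy for the final observation, and report
    # whether that impulse indicator is significant at level alpha.
    y = np.asarray(y, dtype=float)
    T = len(y)
    dep = y[lags:]
    regressors = np.column_stack([y[lags - j - 1:T - j - 1] for j in range(lags)])
    dummy = np.zeros(T - lags)
    dummy[-1] = 1.0
    X = sm.add_constant(np.column_stack([regressors, dummy]))
    res = sm.OLS(dep, X).fit()
    p_value = res.pvalues[-1]
    return p_value < alpha, p_value

rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 120)
series[-1] += 6.0     # simulate a shift at the final (nowcast-origin) month
print(end_of_sample_impulse(series, alpha=0.01))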
Figure 5 plots GDP growth against the retained impulse indicators
at [alpha] = 1 per cent for all covariates aggregated to the quarterly
scale. Visual inspection suggests that there are no significant location
shifts in GDP growth over the nowcast period, so we cannot anticipate
that judgemental adjustments will improve the nowcast performance in
this example. Further, the retained dummies are not clustered together;
if they were, this would be stronger evidence for a location shift in GDP
growth. We check whether adjustments yield an improvement in the nowcast
performance using the rule:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (20)
[FIGURE 5 OMITTED]
[I.sub.k] is an indicator function taking the value 1 when an
indicator dummy is retained for the covariates, [DELTA][[??].sub.t] are
the nowcasts obtained from (C2), and [DELTA][[??].sup.*.sub.t] is the
DDD given by:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
Results are reported in table 5. The adjusted model has worse
nowcast performance at longer horizons. This suggests that a tighter
criterion is needed for the adjustment rule. Castle and Hendry (2009)
propose the rule that [I.sub.k] takes the value 1 when p [greater than
or equal to] k[alpha]J for a small integer k, where p =
[[summation].sup.J.sub.i=1] [1.sub.i,T] is the number of impulse
indicators retained at the nowcast origin for the J known monthly
covariates. In our example these are the surveys, and as no surveys
retain dummies at [alpha] = 0.1 per cent, no adjustment should be made.
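Under the tighter criterion, the decision can be written as a simple
switching rule; the Python sketch below encodes our reading of it (the
variable names and the choice k = 2 are illustrative assumptions, not
values from the paper):

def adjust_nowcast(model_nowcast, robust_nowcast, retained_at_origin,
                   alpha=0.001, k=2):
    # retained_at_origin: one 0/1 flag per known monthly covariate (here
    # the surveys), marking whether an impulse indicator was retained at
    # the nowcast origin. Switch to the robust device only when the count
    # p is positive and at least k * alpha * J.
    J = len(retained_at_origin)
    p = sum(retained_at_origin)
    use_robust = p > 0 and p >= k * alpha * J
    return robust_nowcast if use_robust else model_nowcast

# No surveys retain an indicator at alpha = 0.1 per cent, so the model
# nowcast is kept unadjusted.
print(adjust_nowcast(model_nowcast=0.4, robust_nowcast=0.1,
                     retained_at_origin=[0, 0, 0, 0, 0]))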
In summary, distinguishing between measurement errors and location
shifts in a nowcasting context is extremely difficult given the limited
information set available. In the empirical example we consider, the
nowcast period does not span a period with large measurement errors or
structural breaks. Extending the analysis to incorporate 2008/2009 data
may well yield more interesting results given the increased uncertainty
and location shifts evident in the most recent data.
7. Conclusion
Nowcasting is not just contemporaneous forecasting since the use
and timing of contemporaneous data introduces importantly different
aspects compared to ex ante forecasting. We first consider the four main
reasons for nowcasting and their associated problems of missing data,
measurement errors, a changing database, and breaks. Location shifts can
induce nowcast failure, but interact with measurement errors to make
discrimination difficult over the available time horizon.
We develop the nowcasting strategy proposed in Castle and Hendry
(2009) of building models by automatic model selection methods (here
Autometrics), using all available measures (possibly including
disaggregates, surveys, and on-line data), testing for shifts using
impulse-indicator saturation. Autometrics can allow for location shifts
both in-sample and at the nowcast origin while still including all
available covariate information by handling more variables than
observations. Although data are released at varying times, this can be
accommodated in our approach, so the method ensures that the largest
available information set is used at each nowcast origin.
Next, forecasts are made of all variables that are released with a
publication lag each period, switching to robust forecasts of missing
series when breaks are detected, but using a conservative strategy that
requires strong evidence of breaks. The available information and
forecasts then combine to produce nowcasts of the desired variables. We
apply a variant of this nowcasting strategy to nowcast Euro Area GDP
growth. The application illustrates the potential gains in reducing the
RMSEs of flash estimates, and we anticipate further gains from using more
extensive information sets as discussed in the paper, especially surveys
and Google Trends data.
Initial estimates of economic data can have a substantive impact on
expectations, plans, future activity and economic policy, driving the
need for accurate nowcasts. The apparent 'break down' of the
ONS's current models, reported by the Financial Times on 25 July
2009, shows the difficulty of nowcasting in times of economic
uncertainty and structural change. A methodology that is robust to such
breaks is needed in the current climate, and this paper demonstrates
that automatic model selection with higher-frequency covariates, break
detection, and robust forecasts can yield gains in nowcast accuracy.
APPENDIX
Label Description
[Y.sub.t] Real GDP, million, constant prices
[IP.sub.t] Industrial production index (NSA)
[IPC.sub.t] Industrial production index for construction (NSA)
[SCI.sub.t] Service sector confidence indicator (survey)
[RS.sub.t] Retail trade index, except motor vehicles, constant prices
[CARS.sub.t] New passenger car registrations, total
[MCI.sub.t] Industry confidence indicator (survey)
[ESI.sub.t] Economic sentiment indicator (base=100)
[CCI.sub.t] Consumer confidence indicator (survey)
[RCI.sub.t] Retail trade confidence indicator (survey)
[EER.sub.t] Real effective exchange rate, CPI deflated, core group
[EUROX.sub.t] Dow Jones Euro Stoxx Broad stock exchange index
[OECDLI.sub.t] OECD composite leading indicator, Euro Area, trend restored
[EUROCOIN.sub.t] Eurocoin indicator
[UR.sub.t] Unemployment rate, total
[M1.sub.t] Index of notional stock, money M1
[3MTB.sub.t] 3-month interest rate, Euribor
[10YB.sub.t] 10-year Euro Area bond yield
[SPREAD.sub.t] = (3MTB - 10YB)/100
Label Source Period
[Y.sub.t] [1] (yer.xls) 1995Q1-2008Q4
[IP.sub.t] [1] (MI-B) 1990M1-2008M9
[IPC.sub.t] [1] (MI-L) 1990M1-2008M9
[SCI.sub.t] [1] (MI-GE) 1995M4-2008M11
[RS.sub.t] [1] (MI-P) 1995M1-2008M10
[CARS.sub.t] [1] (MI-V) 1990M1-2008M10
[MCI.sub.t] [1] (MI-FO) 1990M1-2008M11
[ESI.sub.t] [1] (MI-FN) 1990M1-2008M11
[CCI.sub.t] [1] (MI-FS) 1990M1-2008M11
[RCI.sub.t] [1] (MI-GA) 1990M1-2008M11
[EER.sub.t] [1] (MI-BK) 1993M1-2008M11
[EUROX.sub.t] [1] (MI-DP) 1994M1-2008M11
[OECDLI.sub.t] [2] (CLI-AF) 1990M1-2008M12
[EUROCOIN.sub.t] [3] 1990M1-2008M12
[UR.sub.t] [1] (MI-X) 1995M1-2008M10
[M1.sub.t] [1] (MI-FA) 1990M1-2008M10
[3MTB.sub.t] [1] (MI-DD) 1994M1-2008M11
[10YB.sub.t] [1] (gbr 10y.xls) 1994M1-2008M8
[SPREAD.sub.t] -- 1994M1-2008M8
Label Transformation Release lag
[Y.sub.t] 2 2,3,4
[IP.sub.t] 2 3
[IPC.sub.t] 2 3
[SCI.sub.t] 3 0
[RS.sub.t] 2 2
[CARS.sub.t] 1* 2
[MCI.sub.t] 3 0
[ESI.sub.t] 3* 0
[CCI.sub.t] 3 0
[RCI.sub.t] 3 0
[EER.sub.t] 2 1
[EUROX.sub.t] 2 1
[OECDLI.sub.t] 2 0
[EUROCOIN.sub.t] 1 0
[UR.sub.t] 2 2
[M1.sub.t] 2 2
[3MTB.sub.t] 1 1
[10YB.sub.t] 1 1
[SPREAD.sub.t] -- 1
Sources: [1] EABCN real-time database (www.eabcn.org), [2] OECD
monthly indicators database (http://stats.oecd.org/mei), [3] CEPR
database (from Bank of Italy) (http://eurocoin.cepr.org/).
Notes: MI-? refers to the monthly indicators spreadsheet, CLI-? refers
to the composite leading indicators spreadsheet, and ? denotes the
column reference. NSA = not seasonally adjusted. Transformations:
(1) = [DELTA][X.sub.t] = [X.sub.t] - [X.sub.t-1];
(1*) = [DELTA]([X.sub.t]/1000000);
(2) = [DELTA][x.sub.t] = ln([X.sub.t]) - ln([X.sub.t-1]);
(3) = [X.sub.t]/10;
(3*) = [X.sub.t]/100.
Upper case denotes levels and lower case denotes logs.
REFERENCES
Angelini, E., Camba-Mendez, G., Giannone, D., Runstler, G. and
Reichlin, L. (2008), 'Short-term forecasts of Euro-area GDP
growth', Working paper no. 949, European Central Bank, Frankfurt.
Ashley, J., Driver, R., Hayes, S. and Jeffery, C. (2005),
'Dealing with data uncertainty', Bank of England Quarterly
Bulletin, Spring, pp. 23-29.
Castle, J.L., Doornik, J.A., and Hendry, D.F. (2009), 'Model
selection when there are multiple breaks', Working paper, Economics
Department, University of Oxford.
Castle, J.L., Fawcett, N.W.P. and Hendry, D.F. (2009),
'Forecasting with equilibrium-correction models during structural
breaks', Journal of Econometrics (forthcoming).
Castle, J.L. and Hendry, D.F. (2008), 'Forecasting UK
inflation: empirical evidence on robust forecasting devices', in
Rapach, D.E., and Wohar, M.E. (eds), Forecasting in the Presence of
Structural Breaks and Model Uncertainty: Frontiers of Economics and
Globalization Volume 3, Bingley, UK, Emerald Group, pp. 41-92.
--(2009), 'Nowcasting from disaggregates in the face of
location shifts', Journal of Forecasting (forthcoming).
Castle, J.L. and Shephard, N. (eds)(2009), The Methodology and
Practice of Econometrics, Oxford, Oxford University Press.
Choi, H. and Varian, H. (2009), 'Predicting the present with
Google Trends', unpublished paper, Economics Research Group,
Google.
Chow, G. and Lin, A.L. (1971), 'Best linear unbiased
interpolation, distribution and extrapolation of time series by related
series', The Review of Economics and Statistics, 53, 4, pp. 372-5.
Clements, M.P. and Galvao, A.B. (2008), 'Macroeconomic
forecasting with mixed frequency data: forecasting US output
growth', Journal of Business and Economic Statistics, 26, pp.
546-54.
Clements, M.P. and Hendry, D.F. (1996), 'Intercept corrections
and structural change', Journal of Applied Econometrics, 11, pp.
475-94.
--(1998), Forecasting Economic Time Series, Cambridge, Cambridge
University Press.
--(1999), Forecasting Non-stationary Economic Time Series,
Cambridge, Mass., MIT Press.
--(2003), 'Forecasting in the National Accounts at the Office
for National Statistics', Report no 12, Statistics Commission.
Corradi, V., Fernandez, A. and Swanson, N.R. (2008),
'Information in the revision process of real-time data',
Journal of Business and Economic Statistics (forthcoming).
Croxson, K. and Reade, J.J. (2008), 'Information and
efficiency: goal arrival in soccer betting', mimeo, Economics
Department, Oxford University.
Diebold, F.X. and Rudebusch, G.D. (1991), 'Turning point
prediction with the composite leading index: an ex ante analysis',
in Lahiri, K. and Moore, G.H. (eds), Leading Economic Indicators: New
Approaches and Forecasting Records, Cambridge, Cambridge University
Press, pp. 231-56.
Diron, M. (2006), 'Short-term forecasts of Euro-area real GDP
growth: an assessment of real-time performance based on vintage
data', Working paper No. 622, European Central Bank.
Doornik, J.A. (2009a), 'Autometrics', in Castle, J.L. and
Shephard, N. (eds), The Methodology and Practice of Econometrics,
Oxford, Oxford University Press, pp. 88-121.
--(2009b), 'Improving the timeliness of data on influenza-like
illnesses using Google Trends', typescript, Department of
Economics, University of Oxford.
Emerson, R.A. and Hendry, D.F. (1996), 'An evaluation of
forecasting using leading indicators', Journal of Forecasting, 15,
pp. 271-91.
Faust, J., Rogers, J.H. and Wright, J.H. (2007), 'News and
noise in G-7 GDP announcements', Journal of Money, Credit and Banking,
37, pp. 403-19.
Fawcett, N.W.P. (2008), 'Essays on econometrics and
forecasting', D.Phil Thesis, Economics Department, University of
Oxford.
Ferrara, L., Guegan, D. and Rakotomarolahy, P. (2008), 'GDP
nowcasting with ragged-edge data: a semi-parametric modelling', CES
working paper, 82, Paris School of Economics.
Fildes, R. and Ord, K. (2002), 'Forecasting
competitions--their role in improving forecasting practice and
research', in Clements, M.P. and Hendry, D.F. (eds), A Companion to
Economic Forecasting, Oxford, Blackwell, pp. 322-53.
Garratt, A. and Vahey, S.P. (2006), 'UK real-time macro data
characteristics', Economic Journal, 116, F119-135.
Ghysels, E., Santa-Clara, P. and Valkanov, R. (2004), 'The
MIDAS touch: Mixed DAta Sampling regression models', mimeo, Chapel
Hill, N.C.
Ghysels, E., Sinko, A. and Valkanov, R. (2006), 'MIDAS
regressions: further results and new directions', Econometric
Reviews, 26, pp. 53-90.
Giannone, D., Reichlin, L. and Small, D. (2008), 'Nowcasting:
the real-time informational content of macroeconomic data', Journal
of Monetary Economics, 55, pp. 665-76.
Gil, R. and Levitt, S.D. (2006), 'Testing the efficiency of
markets in the 2002 world cup', mimeo, Economics Department,
University of California at Santa Cruz.
Harrison, R., Kapetanios, G. and Yates, T. (2005),
'Forecasting with measurement errors in dynamic models',
International Journal of Forecasting, 21, pp. 595-607.
Harvey, A.C. (1993), Time Series Models, 2nd edn (first edition
1981), Hemel Hempstead, Harvester Wheatsheaf.
Harvey, A.C. and Pierse, R.G. (1984), 'Estimating missing
observations in economic time series', Journal of the American
Statistical Association, 79, pp. 125-31.
Hendry, D.F. (1995), Dynamic Econometrics, Oxford, Oxford
University Press.
--(2004), 'Forecasting long-run TV advertising expenditure in
the UK', commissioned report, Ofcom, London (see
http://www.ofcom.org.uk/research/tv/reports/tvadvmarket.pdf).
--(2006), 'Robustifying forecasts from equilibrium-correction
models', Journal of Econometrics, 135, pp. 399-426.
Hendry, D.F., and Clements, M.P. (2003), 'Economic
forecasting: some lessons from recent research', Economic
Modelling, 20, pp. 301-29.
Hendry, D.F., Johansen, S. and Santos, C. (2008), 'Automatic
selection of indicators in a fully saturated regression',
Computational Statistics, 33, pp. 317-35, erratum, pp. 337-9.
Hendry, D.F., and Reade, J.J. (2008a), 'Economic forecasting
and prediction markets', Working paper, Economics Department,
Oxford University.
--(2008b), 'Modelling and forecasting using model
averaging', Working paper, Economics Department, Oxford University.
Hendry, D.F. and Santos, C. (2005), 'Regression models with
data-based indicator variables', Oxford Bulletin of Economics and
Statistics, 67, pp. 571-95.
Johansen, S. and Nielsen, B. (2009), 'An analysis of the
indicator saturation estimator as a robust regression estimator',
in Castle, J.L. and Shephard, N. (eds), The Methodology and Practice of
Econometrics, Oxford, Oxford University Press, pp. 1-36.
Kitchen, J. and Monaco, R.M. (2003), 'Real-time forecasting in
practice: the U.S. Treasury staff's real-time GDP forecast
system', Business Economics, October, pp. 10-19.
Koopman, S.J., Harvey, A.C., Doornik, J.A. and Shephard, N. (1999),
Structural Time Series Analysis, Modelling, and Prediction using STAMP
2nd edn., London, Timberlake Consultants Press.
Mitchell, J. (2004), 'Revisions to economic statistics',
Review by National Institute of Economic and Social Research, Statistics
Commission Report No. 17, volume 2, April, see http://www.
statscom.org.uk/uploads/files/reports/Revisions_vol_2.pdf.
--(2009), 'Where are we now? The UK recession and nowcasting
GDP growth using statistical models', National Institute Economic
Review, 209, pp. 60-9.
Mitchell, J., Smith, R.J., Weale, M.R., Wright, S.H. and Salazar,
E.L. (2005), 'An indicator of monthly GDP and an early estimate of
quarterly GDP growth', Economic Journal, 115(501), F108-29.
Newbold, P. and Harvey, D.I. (2002), 'Forecasting combination
and encompassing', in Clements, M.P. and Hendry, D.F. (eds), A
Companion to Economic Forecasting, Oxford, Blackwell, pp. 268-83.
Reed, G. (2000), 'How the preliminary estimate of GDP is
produced', Economic Trends, 556, pp. 53-61.
--(2002), 'How much information is in the UK preliminary
estimate of GDP?', Economic Trends, 585, pp. 1-8.
Salazar, E. and Weale, M. (1999), 'Monthly data and short-term
forecasting: an assessment of monthly data in a VAR model', Journal
of Forecasting, 18(7), pp. 447-62.
Stock, J.H. and Watson, M.W. (1999), 'A comparison of linear
and nonlinear univariate models for forecasting macroeconomic time
series', in Engle, R.F. and White, H. (eds), Cointegration,
Causality and Forecasting: A Festschrift in Honour of Clive Granger,
Oxford, Oxford University Press, pp. 1-44.
Walton, D.R. (2006), 'Dealing with data uncertainty', MFE
lecture slides, Department of Economics, Oxford University.
Wolfers, J. and Zitzewitz, E. (2004), 'Prediction
markets', Journal of Economic Perspectives, 18, pp. 107-26.
Jennifer L. Castle, Nicholas W.P. Fawcett and David F. Hendry *
* Department of Economics, Oxford University. e-mail:
jennifer.castle@nuffield.ox.ac.uk, nicholas.fawcett@lmh.ox.ac.uk and
david.hendry@nuffield.ox.ac.uk. We are grateful to Mike Clements,
James Mitchell and Martin Weale for helpful comments.
NOTES
(1) The estimate of GDP growth has since been revised to a 0.7 per
cent decline in the second release for 2009Q2 on 28 August 2009, see
www.statistics.gov.uk/cci/nugget.asp?id=192.
(2) www.ft.com/cms/s/0/c5529f00-7886-11de-bb06-00144feabdc0.html
(3) Because many other uses of the words impulse and indicator
exist, we have used the more precise designation of impulse-indicator
saturation.
(4) We are indebted to Rob Engle for help in formulating this
section.
(5) www.eabcn.org.
(6) We abstract from intra-period data in this application, but
there is scope for using such data to update. For example, the flash
estimate of GDP is typically available 6 weeks after the end of the
reference quarter so weekly updates would incorporate this information
faster than monthly data.
(7) Third differences of the monthly covariates
([[DELTA].sub.3][x.sub.[tau]] = [x.sub.[tau]] - [x.sub.[tau]-3])
correspond to first differences of the quarterly series
([DELTA][x.sub.t] = [x.sub.t] - [x.sub.t-1]).
Table 3. Actual RMSE for (A) and ratio of RMSE to the
univariate benchmark model (A) for (B) and (C)
A B C1 C2 C3
[H.sub.1] 0.2951 0.9823 0.4340 0.7994 0.7531
[H.sub.2] 0.3056 1.2245 0.4808 0.6372 0.6222
[H.sub.3] 0.1486 1.0372 0.8320 0.8320 0.8320
Table 4. Mean error (ME) and standard deviation (SD) of
the revisions to GDP growth. Results for 2003Q2-2008Q1,
reported as percentages
          Flash to [v.sub.2]   Flash to [v.sub.3]   Flash to [v.sub.f]
ME (%)    0.0045               -0.0059              -0.0674
SD (%)    0.0214               0.0391               0.1583
Table 5. RMSEs for disaggregate nowcast model C2 and
adjusted model, C2-adjust
C2 C2-adjust
[H.sub.1] 0.2359 0.2327
[H.sub.2] 0.1948 0.2146
[H.sub.3] 0.1236 0.1603