
  • Consensus estimates of forecast uncertainty: working out how little we know

    James Mitchell, NIESR, June 2005

  • ESRC Social Science Week

    Thanks to the ESRC for financial support, and to Stephen Hall (co-author).
    Our ESRC output, three papers:
    1. Density forecast combination: http://www.niesr.ac.uk/pdf/ssw200605a.pdf
    2. Optimal density forecast combination: http://www.niesr.ac.uk/pdf/ssw200605b.pdf
    3. Evaluating, comparing and combining density forecasts using the KLIC, with an application to the Bank of England and NIESR fan charts of inflation: http://www.niesr.ac.uk/pdf/ssw200605c.pdf

  • Forecast Uncertainty

    How do we judge if forecasts are any good?
    How should a forecaster best acknowledge how little they really know?
    Surely they are not always surprised when their forecast proves wrong?
    How should we judge if one forecast is better than another?
    Why is it misleading to say one forecast is better than another simply because it turns out closer to the subsequent outturn?
    Can we do better if we take some kind of average across competing forecasts of the same event?

  • Forecasting: state of the art or dismal science?

    The importance of forecasts: forward-looking policy.
    Point forecasts are better seen as the central points of ranges of uncertainty.
    It is not a question of one point forecast proving right and another proving wrong, despite what politicians may say.
    Users may not be surprised when inflation is higher than forecast. Indeed, they may not be very surprised if it is much higher.

  • Density Forecasts

    Increased attention is now given to providing measures of the uncertainty associated with forecasts.
    Measures of uncertainty surrounding a point forecast can enhance its usefulness, affect the policy response, and are essential under non-quadratic loss functions.
    So-called density forecasts are being used increasingly, since they provide commentators with a full impression of forecast uncertainty.
    They provide an estimate of the probability distribution of a variable's possible future values.

  • Production of density forecasts

    Subjective or model-based.
    Ex post, as we will see, we can evaluate the quality of the assumed, or combined, density forecast.
    The fan chart: the Bank of England uses an asymmetric density forecast based on a two-piece normal distribution, separating upside from downside risk.
    NIESR uses a normal density with the variance estimated from the historical forecast errors: how far back should they look? (A sketch of both approaches follows below.)
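
    To make the two production approaches concrete, here is a minimal Python sketch; it is not actual Bank or NIESR code, and the past errors, point forecast and spread parameters are all made-up placeholders.

```python
import numpy as np
from scipy.stats import norm

# NIESR-style: a normal density centred on the point forecast, with the
# standard deviation estimated from past forecast errors (how far back to
# look is a judgement call; these errors are invented for illustration).
past_errors = np.array([0.4, -0.3, 0.6, -0.5, 0.2, -0.1, 0.7, -0.4])
point_forecast = 2.0                      # inflation point forecast, per cent
sigma = past_errors.std(ddof=1)
niesr_style_density = norm(loc=point_forecast, scale=sigma)

# Bank-style: a two-piece normal, gluing two half-normals with different
# spreads at the mode to represent asymmetric upside vs. downside risk.
def two_piece_normal_pdf(y, mode, sigma_lo, sigma_hi):
    s = np.where(y < mode, sigma_lo, sigma_hi)
    return (2.0 / (sigma_lo + sigma_hi)) * s * norm.pdf(y, loc=mode, scale=s)

grid = np.linspace(-1.0, 5.0, 601)
fan_chart_pdf = two_piece_normal_pdf(grid, mode=2.0, sigma_lo=0.4, sigma_hi=0.7)
```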

  • Bank of England fan chart for CPI Inflation: Inflation Report May 2005

  • Evaluation of Density Forecasts

    Evaluate density forecasts statistically using the probability integral transform (pit), analogous to the evaluation of point forecasts using the RMSE.
    The pits $z_{it}$ for the density forecast $g_{it}$ of $y_t$ (say, inflation) are

    $$z_{it} = \int_{-\infty}^{y_t} g_{it}(u)\, du = G_{it}(y_t)$$

    The $z_{it}$ are i.i.d. uniform (or, via an inverse normal CDF transform, standard normal) when the density forecast is correct; a sketch of this check follows below.
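
    As a rough illustration of pit-based evaluation, here is a minimal Python sketch; the outturns and forecast parameters are made-up placeholders, and scipy's Kolmogorov-Smirnov test stands in for the formal tests used in this literature.

```python
import numpy as np
from scipy.stats import norm, kstest

outturns = np.array([2.3, 1.8, 2.6, 2.1, 1.9])  # realised inflation (illustrative)
means = np.array([2.0, 2.0, 2.2, 2.3, 2.0])     # forecast means (illustrative)
sigma = 0.5                                     # forecast standard deviation

# z_it = G_it(y_t): evaluate each forecast CDF at the realised outturn
pits = norm.cdf(outturns, loc=means, scale=sigma)

# If the density forecasts are correct, the pits are i.i.d. U(0,1); a
# Kolmogorov-Smirnov test against the uniform is one simple check.
stat, pvalue = kstest(pits, "uniform")
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# Equivalently, map through the inverse normal CDF and test for normality.
inverse_normal_pits = norm.ppf(pits)
```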

  • Consensus forecasts

    It is widely appreciated that combination forecasts normally outperform any single forecast.
    There are debates about why this happens: all forecasts are wrong, but in different ways.

  • Consensus estimates of forecast uncertainty

    A natural question to ask is: would a combined density forecast also work better?
    This raises a number of issues:
    1. How should we combine density forecasts?
    2. How should we evaluate the combined density?
    3. How should we test individual densities against each other?

  • Combining density forecasts

    The early OR (operational research) approaches.
    Consider N forecasts made by N experts (i = 1, …, N) of a variable $y_t$. If their (continuous) density forecasts are $g_{it}$, then the linear opinion pool is

    $$p_t(y_t) = \sum_{i=1}^{N} w_i\, g_{it}(y_t), \qquad \sum_{i=1}^{N} w_i = 1, \quad w_i \ge 0$$

    (a numerical sketch follows below)
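
    A minimal sketch of the linear opinion pool in Python, with three hypothetical normal density forecasts and arbitrary illustrative weights:

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(-2.0, 6.0, 801)     # evaluation grid for, say, inflation
experts = [norm(1.8, 0.4), norm(2.4, 0.6), norm(2.0, 0.9)]  # N = 3 forecasters
weights = np.array([0.5, 0.3, 0.2])    # non-negative and summing to one

# The pooled pdf is the weighted sum of the individual pdfs; note that it
# can be multimodal even though every component density is normal.
pool_pdf = sum(w * d.pdf(grid) for w, d in zip(weights, experts))
```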

  • Combined density forecasts

    How do we determine the weights $w_i$?
    Equal weights.
    Optimal combination: mimic the optimal combination of point forecasts (one numerical reading is sketched below).
    The combined density can have distinct characteristics from those of the individual forecasters; e.g. if all the densities are normal, but with different means and variances, then the combined density is a mixture of normals. But what if the true density is normal?
    Indirect combination: moment by moment.
    Bayesian and copula-based combination.
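
    One concrete reading of "optimal combination", assumed here for illustration rather than taken from the papers, is to choose the weights that maximise the average log score of the pooled density over past outturns; all inputs below are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

outturns = np.array([2.3, 1.8, 2.6, 2.1, 1.9])              # illustrative
experts = [norm(1.8, 0.4), norm(2.4, 0.6), norm(2.0, 0.9)]
dens = np.column_stack([d.pdf(outturns) for d in experts])  # T x N pdf values

def neg_log_score(w):
    # negative average log score of the pooled density at the outturns
    return -np.mean(np.log(dens @ w))

n = dens.shape[1]
result = minimize(neg_log_score, x0=np.full(n, 1.0 / n),
                  bounds=[(0.0, 1.0)] * n,
                  constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("estimated weights:", result.x.round(2))
```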

  • The Bank of England and NIESR density forecasts of inflation

  • Does density forecast combination work?

    In-sample and out-of-sample experiments.
    Combined density forecasts can, but need not, help.
    Combining the Bank and NIESR density forecasts, we find a weight of zero on NIESR.
    Combining the Bank and time-series forecasts, we find a weight of 0.73 on the time-series forecast and an improvement in accuracy.

  • The tool-kit available to those willing to admit they may get it wrong

    The Kullback-Leibler Information Criterion (KLIC) offers a unified statistical tool to evaluate, compare and combine density forecasts.
    The KLIC distance between the true density $f(y)$ and the forecast density $g(y)$ is:

    $$\mathrm{KLIC} = \int f(y) \ln\!\left(\frac{f(y)}{g(y)}\right) dy = E\big[\ln f(y_t) - \ln g(y_t)\big]$$

    Existing density forecast evaluation tests based on the pits implicitly test KLIC = 0, but without having to know $f(\cdot)$.
    The KLIC can be used to test which density forecast is best: an extension of the Diebold-Mariano test (a sketch follows below).
    It is also the basis for Bayesian Model Averaging.
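
    Because the KLIC difference between two forecast densities reduces to the expected difference in their log scores, the comparison can be sketched as a t-test on realised log-score differences. The densities and outturns below are placeholders, and this naive statistic ignores the serial correlation a proper Diebold-Mariano-type test would handle with a HAC standard error.

```python
import numpy as np
from scipy.stats import norm

outturns = np.array([2.3, 1.8, 2.6, 2.1, 1.9, 2.4, 2.2, 1.7])  # illustrative
g1 = norm(2.0, 0.5)   # density forecast 1 (e.g. a fan-chart-style density)
g2 = norm(2.2, 0.8)   # density forecast 2 (e.g. a time-series model density)

# d_t = ln g1(y_t) - ln g2(y_t): positive values favour forecast 1
d = np.log(g1.pdf(outturns)) - np.log(g2.pdf(outturns))
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"mean log-score difference = {d.mean():.3f}, t = {t_stat:.3f}")
```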

  • Conclusions

    Producers of forecasts should be encouraged to indicate how uncertain they are. This is an admission of strength, not weakness, and it facilitates better policy-making.
    Combining density forecasts appears promising.
    Users require a tool-kit to evaluate, compare and combine density forecasts. This will enable us to work out how little we know, and to improve the reliability of these forecasts.