
1  Introduction

“…the ability to produce accurate predictions of the course of the economy in the near-term future is probably the main criterion by which the public judges the usefulness of our entire profession.” Victor Zarnowitz (December 1986)

The Treasury produces forecasts of the New Zealand economy at least twice a year, which are published in the Economic and Fiscal Updates.[1] The economic forecasts are used as a basis for Treasury’s economic and fiscal policy advice to the Government. Consistently poor forecast performance can lead to policy mistakes, requiring disruptive policy adjustments further down the track. Evaluating Treasury’s forecasting performance is therefore both important and necessary to ensure the quality of its advice. In reality, there are many dimensions to consider when assessing the quality of economic and fiscal advice. Zarnowitz’s statement above, however, provides a simple, high-level benchmark for judging one of Treasury’s core functions.

Treasury undertakes regular internal monitoring of its forecast performance, and since 2003 its forecasting record has been publicly released on an annual basis.[2] To date, comparisons of Treasury’s forecasting performance have mainly been made against the consensus forecasts published by the New Zealand Institute of Economic Research (NZIER). International studies have found that consensus forecasts tend to perform better than forecasts produced by individual organisations (Batchelor, 2001; Zarnowitz, 1984; Zarnowitz and Braun, 1992). This study takes a different approach: Treasury’s forecast performance is compared with that of individual private sector forecasters as well as major public sector institutions, rather than simply with the average of other forecasts. This makes it possible to assess how Treasury’s forecast performance compares with that of its peers, both on average and over time.

Section 2 describes the data and methodology used in this paper. Section 3 analyses the head-to-head comparison, and section 4 concludes.

2  Data and Methodology

2.1  The data

Forecast data from private sector institutions come from the Asia Pacific edition of Consensus Economics’ Consensus Forecasts. Each month, Consensus Economics[3] surveys a number of private sector institutions in New Zealand and collects their forecasts for several major economic variables, such as gross domestic product (GDP), private consumption, the consumer price index (CPI), the unemployment rate and the current account balance. The forecasts are published in the second week of each month, based on a survey conducted in the previous two weeks. Forecast data from major public sector institutions were sourced either directly from those institutions or from their published forecasts.[4]

As well as the forecasts of the individual participants in the survey, Consensus Economics also reports the mean of those forecasts (known as the consensus forecast). In this study, two alternative ‘consensus’ measures are calculated for each period. One is a Mean of the forecasts of the private sector institutions used in this study, plus those of the Reserve Bank, OECD and IMF. The other is the Median of the same set of forecasts, calculated to reduce the influence of extreme forecasts. The Mean and Median are calculated only for GDP, since the CPI forecasts used in this study are not on a comparable basis.[5]
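
To make this concrete, the following is a minimal sketch (in Python, using hypothetical forecaster names and GDP numbers) of how the Mean and Median measures could be computed for a single forecast round.

```python
import statistics

# Hypothetical GDP forecasts (annual average % change) for a single forecast round.
# The forecaster names and values below are illustrative only.
gdp_forecasts = {
    "Bank A": 3.1,
    "Bank B": 2.8,
    "Forecasting Group C": 3.4,
    "Reserve Bank": 3.0,
    "OECD": 2.9,
    "IMF": 3.2,
}

# The two alternative 'consensus' measures used in this study:
mean_forecast = statistics.mean(gdp_forecasts.values())      # simple average
median_forecast = statistics.median(gdp_forecasts.values())  # less sensitive to extreme forecasts

print(f"Mean:   {mean_forecast:.2f}")    # 3.07
print(f"Median: {median_forecast:.2f}")  # 3.05
```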

A common dilemma in the forecasting literature is which “actual outturn” data to use when assessing forecast accuracy. Unlike the CPI, which is generally not subject to revision, initial GDP outturns are often revised numerous times. The revisions can be due to updated information, methodological changes, the introduction of new weights or rebasing (see Table 1). Since forecasts made at any point in time are based on the data and methodology available at the time, and are often judged against the first available outturn, this study uses the initial outturn as the basis for assessing forecast accuracy.

Table 1 – Initial and latest GDP outturn (1996 to 2005)

2.2  The methodology

To ensure fair comparisons, only the Consensus Forecasts surveyed in the same month that Treasury’s forecasts were finalised were used, so that the forecasts were based on similar information sets. These are typically the April/May and October/November editions of Consensus Forecasts. Private sector forecasters that no longer feature in Consensus Forecasts, either because they have been dropped from the survey or because they have merged with or been taken over by another forecaster, were excluded, as were those with too few observations. The Reserve Bank’s forecasts were taken from the Monetary Policy Statements finalised in the month closest to when Treasury finalised its own forecasts. The OECD’s Economic Outlook is normally published in June and December, while the IMF’s World Economic Outlook is normally published in May and October. Even minor differences in when the forecasts were finalised can affect measured forecast performance, particularly when there are sudden exchange rate or commodity price changes. These timing differences are difficult to quantify or resolve, and they remain an ongoing issue for forecast comparisons.

The identities of the individual forecasters are not disclosed in this study; they are labelled either Forecaster X or Forecasting Group Y. Table 2 below lists the forecasters covered in this study.

Table 2 – Forecasters covered in this study

Due to the limited availability of consistent forecast data, the comparison focuses only on “current year” and “year ahead” forecasts of GDP and CPI, on a calendar year basis (ie for the year ended December) covering the evaluation period 1996 to 2005.[6] A current year forecast is defined as one that is made within the calendar year that the forecast period relates to, and a year ahead forecast is one where the forecast is made in the calendar year prior. For example, a forecast made in April 2004 for the 2004 calendar year is a “current year forecast” and the forecast for the 2005 calendar year is a “year ahead forecast”. Table 3 shows the number of observations and forecasters that are included in this study. Note that the number of forecasters includes the Consensus, Mean and Median calculations.

Table 3 – Number of observations and forecasters
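
As an illustration of the “current year” and “year ahead” definitions above, the following sketch (Python, with hypothetical dates) classifies a forecast by comparing the calendar year in which it was finalised with the calendar year it refers to.

```python
from datetime import date

def forecast_horizon(finalised: date, target_year: int) -> str:
    """Classify a forecast of calendar year `target_year` made on `finalised`."""
    if finalised.year == target_year:
        return "current year"
    if finalised.year == target_year - 1:
        return "year ahead"
    return "outside the horizons compared in this study"

# The example from the text: forecasts finalised in April 2004.
print(forecast_horizon(date(2004, 4, 1), 2004))  # current year
print(forecast_horizon(date(2004, 4, 1), 2005))  # year ahead
```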

Forecasts for GDP are all in annual average percent change terms and are comparable across all forecasters except the OECD, which forecasts expenditure-based GDP rather than production-based GDP. The forecasts for CPI are not all on a comparable basis. Data from Consensus Forecasts are for headline CPI in annual average percent change terms, except for a brief period in 1999 and 2000 when they were for CPIX.[7] Reserve Bank and Treasury forecasts are for the annual percent change of CPIX for the reference years 1996 to 2000, and the annual percent change of headline CPI thereafter. In all instances, the appropriate actual outturn was used to calculate the forecast error.

The method used to compare forecast performance is similar to the one used by Blix et al (2001). It is based on an average relative rank over all the evaluation periods. For each evaluation period, each organisation’s GDP and CPI forecasts are assigned a relative rank based on the mean absolute error: the most accurate forecaster (ie, the one with the lowest mean absolute error) is given a rank of 1, the next a rank of 2, and so on. An average relative rank is then calculated for each organisation over the entire evaluation period, and this average is itself ranked to allow easier comparison. The metric does not weight forecasts by the size of their errors, only by their ordering in each period. For example, a forecaster with two fourth placings and a first and another with three third placings will both have an average relative rank of 3.
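
The following is a minimal sketch of this average relative rank calculation, assuming hypothetical mean absolute errors for three forecasters over three evaluation periods; for simplicity, ties are broken arbitrarily rather than given shared ranks.

```python
# Hypothetical mean absolute errors (percentage points) by evaluation period.
# Keys are forecasters; each list holds one value per evaluation period.
mae = {
    "Treasury":     [0.4, 0.7, 0.3],
    "Forecaster X": [0.5, 0.6, 0.5],
    "Group Y":      [0.3, 0.9, 0.4],
}

def relative_ranks(errors):
    """Rank forecasters within each period: 1 = lowest mean absolute error."""
    names = list(errors)
    n_periods = len(next(iter(errors.values())))
    ranks = {name: [] for name in names}
    for p in range(n_periods):
        ordered = sorted(names, key=lambda name: errors[name][p])
        for position, name in enumerate(ordered, start=1):
            ranks[name].append(position)
    return ranks

ranks = relative_ranks(mae)
average_rank = {name: sum(r) / len(r) for name, r in ranks.items()}

# The average relative rank is itself ranked for easier comparison.
for place, name in enumerate(sorted(average_rank, key=average_rank.get), start=1):
    print(place, name, round(average_rank[name], 2))
```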

Another metric, which does penalise poor forecast performance, is the root mean squared error (RMSE). The RMSE for each forecaster is calculated over all evaluation periods and ranked. The rankings obtained from the average relative rank based on the mean absolute error can differ, in some cases quite substantially, from those obtained by ranking the RMSE, because the latter penalises large forecast errors more severely. Given the relatively limited sample period, this study focuses more on the average relative rank metric, as it does not penalise a forecaster as heavily for large forecast errors.
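
For comparison, a short sketch of the RMSE calculation on the same kind of hypothetical error data, illustrating how a single large error lifts a forecaster’s RMSE even when most of its errors are small.

```python
import math

# Hypothetical forecast errors (forecast minus initial outturn, percentage points),
# pooled over all evaluation periods for each forecaster.
errors = {
    "Treasury":     [0.3, -0.5, 0.2, 0.4],
    "Forecaster X": [0.1, -0.1, 0.2, -1.5],  # mostly small errors plus one large miss
}

def rmse(errs):
    """Root mean squared error: squaring penalises large errors more heavily."""
    return math.sqrt(sum(e * e for e in errs) / len(errs))

for name, errs in errors.items():
    print(name, round(rmse(errs), 2))
# Forecaster X's single large miss gives it the higher RMSE (0.76 vs 0.37),
# even though most of its errors are no larger than Treasury's.
```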

Notes

  • [1] These are the Budget Economic and Fiscal Update (published at the time of the Budget, typically in May) and the Half Year Economic and Fiscal Update (published in November or December). In addition, a Pre-election Economic and Fiscal Update is published four to six weeks before a general election.
  • [2] Go to http://www.treasury.govt.nz/forecasts/performance/ for the latest report.
  • [3] For more information on Consensus Economics, visit their website at www.consensuseconomics.com.
  • [4] The Reserve Bank provided their forecast data for this study. The OECD’s forecasts were sourced from their twice-yearly Economic Outlook publication. The IMF’s forecasts were sourced from their World Economic Outlook reports.
  • [5] Some institutions forecast a different measure of consumer price inflation for part of the evaluation period. See Section 2.2.
  • [6] The implied forecasting horizons are typically 2, 8, 14 and 20 months ahead.
  • [7] CPIX is the Consumers Price Index excluding credit services and interest charges. It was the target measure of inflation for the Reserve Bank until September 1999, when interest charges (but not other credit services) were removed from the CPI.