Treasury's Forecasting Performance: A Head-to-Head Comparison - WP 06/10

3  Head-to-Head Comparison

3.1  Forecast performance across all evaluation periods

Figure 1 presents the average relative rank (based on mean absolute error) for each forecaster's GDP and CPI forecasts across all evaluation periods (the April/May and October/November current-year and year-ahead forecasts for 1996 to 2005). The best forecast performances are those in the lower left of Figure 1, with the lowest average relative ranks, while the worst are in the upper right. Of the 12 forecasters who forecast both GDP and CPI, the best was the Consensus. The Treasury is among the worst performers for CPI forecasts. Figure 2 presents the RMSE of each forecaster's GDP and CPI forecasts across all evaluation periods. On the RMSE measure, Treasury's forecast performance is amongst the poorest for both GDP and CPI.

Table 4 presents the average relative rank and RMSE for each forecaster across all evaluation periods. For GDP on an average relative rank basis, the Mean was the most accurate, followed by the Median, with the Consensus in third place. Treasury came in close to the middle of the pack, in seventh place out of 16, but fell to 13th on the RMSE. The difference between the two rankings highlights how large forecast errors can affect measured forecast performance. Treasury's large forecast error for the 1998 period (see Appendix Figure 8) had a material impact on the overall RMSE, making its RMSE ranking worse than its average relative rank. Treasury forecasters at that time misjudged the impact of the Asian financial crisis and droughts on economic activity, revising their forecasts much later than other forecasters. The Mean, which had the best average relative rank, came in only fifth when ranked by RMSE. Across all forecasters, the average relative rank and the RMSE usually produce similar rankings; the exceptions are Treasury and Forecasting Group 3. For Treasury, the large forecast error in 1998 worsened its RMSE ranking. For Forecasting Group 3, making conservative forecasts ensured that it was not penalised for large forecast errors, but it also meant that it was less often the closest to the actual number.
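The divergence between the two measures comes down to how each treats an occasional large miss: RMSE squares errors, so a single bad year dominates, while the average relative rank only records a one-notch demotion in that year. A minimal sketch with hypothetical numbers (not the paper's data, and "A"/"B" are illustrative forecasters, not any group in Table 4):

```python
import math

def rmse(errors):
    """Root mean square error: squaring makes one large miss dominate."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def avg_relative_rank(all_errors):
    """Rank forecasters by absolute error in each period (1 = closest),
    then average each forecaster's rank across periods."""
    n_periods = len(next(iter(all_errors.values())))
    totals = {name: 0.0 for name in all_errors}
    for t in range(n_periods):
        ordered = sorted(all_errors, key=lambda f: abs(all_errors[f][t]))
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return {name: total / n_periods for name, total in totals.items()}

# Hypothetical forecast errors (percentage points) over five years:
# A is closest in four of five years but has one large miss (as Treasury
# did in 1998); B is consistently middling but never far off.
errors = {"A": [0.2, 0.3, 3.0, 0.2, 0.3],
          "B": [0.6, 0.6, 0.6, 0.6, 0.6]}

print(avg_relative_rank(errors))                          # A: 1.2, B: 1.8
print({f: round(rmse(e), 2) for f, e in errors.items()})  # A: 1.36, B: 0.6
```

Here A beats B on average relative rank yet has more than twice B's RMSE, mirroring how Treasury's 1998 error hurt its RMSE ranking far more than its rank-based ranking.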

Large forecasting groups, on average, tend to perform poorly at forecasting GDP compared to private sector forecasters on the average relative rank basis, but their performance is comparable on the RMSE ranking. This suggests that forecasting groups tend to be more conservative in their forecasts, which may not produce the forecast closest to the actual, but lessens the chance of a large forecast error.

Figure 1 – Average relative rank across all evaluation periods

Figure 2 – RMSE across all evaluation periods

Table 4 – Average RMSE and relative rank across all evaluation periods

For CPI, the Consensus performed the best on both the average relative rank and RMSE basis. Treasury came in 10th out of 12 on both bases. Large forecast errors for CPI relating to the 1998-2000 period (see Appendix Figure 9) were a large contributor to the poor forecast performance. Overestimating GDP growth for 1998 also led to an overestimation of CPI for the 1998 and 1999 years. In 2000, Treasury underestimated CPI due to the pass-through from the exchange rate depreciation at that time.
