Administrative and Support Services Benchmarking Report for the Financial Year 2010/11

Measurement and benchmarking approach

The Treasury is responsible for providing an annual benchmarking service across the public service and for compiling this report. This role involves providing practical support to measurement agencies during data collection, validating and analysing data, producing a summary report, and working with practitioners to strengthen the metric set based on lessons learnt. The Treasury completes most work in-house and draws on third parties such as the American Productivity & Quality Center (APQC) and The Hackett Group for comparator data and specialist analysis as required. It also liaises with other governments to access comparator data and lessons learnt from similar exercises overseas.

The Treasury's approach to benchmarking is adapted from established international practice. Rather than building a bespoke methodology, the New Zealand agency benchmarking exercise adopted metrics and methods from the UK Audit Agencies (UKAA) and two leading international benchmarking organisations: APQC and The Hackett Group.

Work with agencies is guided by five principles:

  1. Metrics are selected with practitioners across government. Selection is based on three criteria:
    • Metrics reflect performance - they provide meaningful management information that can support business decisions.
    • Results can be compared - they are comparable across NZ agencies and comparator groups.
    • Data is accessible within agencies - the measurement costs are reasonable.
  2. Methods and results are transparent. The Treasury makes its metric calculation methods and underlying definitions publicly available, along with the results of individual measurement agencies, to promote transparency, facilitate discussion and debate, and collaborate with other jurisdictions undertaking similar exercises.
  3. Performance results should be understood within the operational context of each agency. While agencies have common features and results are broadly comparable, some have unique functions and cost drivers. For example, large service delivery agencies are expected to have higher ICT costs than smaller policy agencies, especially if they have more expensive requirements such as specialised line-of-business applications or a distributed network. Benchmarking results are a guide to relative performance, and conclusions about efficiency and effectiveness should be drawn in light of each agency's operational context.
  4. Results should be used constructively, not punitively. In leading practice organisations, performance information supports discussion, decision making, and learning.
  5. The quality of management information should improve each year. Metric sets and data collection methods are refined and improved year to year based on lessons learnt by the benchmarking team, the insights of practitioners in agencies, and trends and innovations in measurement around the world. As measurement becomes more accurate, some reported numbers will rise and others will fall through greater inclusion or exclusion of administrative and support (A&S) services information. Changes arising from more accurate measurement are discussed in this report, as appropriate.