Quality of management information

This section outlines general issues affecting the quality of management information. Comments specific to the quality of management information for a particular function are covered in the relevant chapters.

Overall, the quality of data submitted by agencies was high and continues to improve.

Measurement practice was consistent across agencies and international comparator groups. Agencies used common definitions and data collection practices, and these are aligned with those used by the three main sources of comparator data: UKAA, APQC, and The Hackett Group. This consistency underpins the comparability of results and the usefulness of the management information.

Where there are concerns with data quality, the underlying problems stem from the limited maturity of measurement methods and are common in the private and public sectors around the world. Two functions in the benchmarking exercise are particularly difficult to measure:

  • Procurement: The highly devolved nature of the Procurement function makes it hard to measure consistently, because measurement captures costs only where procurement activities make up more than 20 percent of a person's time. While this data collection practice is consistent with international practice, it leads to an understatement of the cost of Procurement in New Zealand agencies with devolved procurement functions.
  • CES: Organisations around the world undertake a wide range of activities within this function without standard definitions, and it is not common for them to benchmark these services. When they do, the quality of management information is impaired by inconsistent data and a limited pool of reliable comparator data, whether in New Zealand or internationally.

Management information for the HR, Finance, Property Management, and ICT functions is therefore more reliable and more comparable across agencies than that for Procurement or CES.

Some A&S costs may be understated. Agencies were asked to include function activity costs only for staff who spend more than 20 percent of their time on the relevant function. The implication of this data collection practice is that, where an agency has highly devolved processes for a specific function, the true cost of the activity is likely to be understated because the data exclude line managers' time and effort.

Management practice indicator scores are self-reported. Agencies' responses to the management practice indicators have not been checked for accuracy, which raises some concern about possible inconsistencies across scores.

While results are broadly comparable, they need to be understood within the context of each organisation. Although agencies have common features, each has its own functions and cost drivers. Benchmarking results are a guide to relative performance, and conclusions about efficiency and effectiveness should be drawn in light of each agency's operational context.
