Quality of management information

These findings report on known ICT data quality issues, limitations of the indicator set in providing insight into ICT service performance, and opportunities for improvement. The Context chapter sets out common findings on the quality of management information that apply across all functions; these are not repeated in this chapter.

The quality of the data underlying the metrics is generally high. Agencies collected high-quality data for both reporting periods, with consistent definitions and data collection methods across the New Zealand cohort and the international comparator groups.

Consistency of data across agencies improved this reporting period. Improvements in the consistency and accuracy of data for FY 2010/11 are expected, given that for many agencies FY 2010/11 was only their second year of reporting. While more accurate and consistent data is positive, year-to-year changes in definitions have a negative impact on time series analysis. The two definitions most affected are:

  • End user: the definition of end user was refined for better alignment with The Hackett Group definition. Some users that were included in the FY 2009/10 data return were excluded in FY 2010/11.
  • ORC: some agencies reduced their reported ORC in FY 2010/11 by excluding transfer payments that were included in the ORC for FY 2009/10.[37]

The quality of management information will improve with changes to the metrics, especially those that provide a government-wide view of ICT performance. As stated in this chapter's commentary, there are significant opportunities to improve the management information in future reports as follows:

  • Measure the complexity of the ICT environment. Complexity is a major driver of performance. ICT management information could be enhanced by introducing new measures that identify sources of complexity and opportunities to reduce it.
  • Measure the value of ICT to overall agency performance. Management information could be improved by introducing new measures for the impact of ICT solutions and services on agency performance. Measuring ICT impact is a challenge globally, and achieving it will take considerable practitioner input and trial and error in future benchmarking exercises.
  • Separate capital expenditure (capex) and operating expenditure (opex). Agencies reported a single cost figure inclusive of capex and opex. Given that capital spending on ICT is lumpy from year to year, it is important to isolate this spending to understand cost trends and opportunities in individual agencies and across government.
  • Understand what is outsourced and at what cost. Current measures of the number of FTEs undertaking ICT processes in-house do not take into account whether or not a process is outsourced. The implication is that agencies that outsource a process (and therefore assign few FTEs to that process) look more efficient than agencies providing that process in-house.
  • Strengthen measures of efficiency. While the cost of ICT as a percentage of ORC is accepted as a measure of ICT efficiency by overseas jurisdictions and by leading ICT benchmarking organisations such as Gartner and APQC, practitioners are seeking a more meaningful indicator. Insights from the number of users per ICT FTE can be hampered by outsourcing arrangements, which vary from agency to agency. Enhancing these metrics to provide more insightful efficiency information will take considerable practitioner input and trial and error in future benchmarking exercises.
  • Align with detailed ICT benchmarking methods in Australian jurisdictions. In recent years Australian jurisdictions have made significant investments and advancements in measuring government-wide ICT performance. The GCIO and the Treasury are working together with the ICT Council to partner with Australian jurisdictions to share intellectual property and data for more detailed ICT benchmarking and insight.
  • While results are broadly comparable, they need to be understood within the context of each agency. While agencies have common features, each has its own functions and cost drivers. For example, large service delivery agencies are expected to have higher ICT costs than policy agencies, especially if they have more expensive ICT requirements such as specialised line-of-business applications or a distributed network. Agencies should use the benchmarking results as a guide to relative performance. Conclusions regarding efficiency and effectiveness should be made in light of each agency's operational context.

Notes

  • [37] Transfer payments include revenue passed on to other organisations or individuals who make decisions on how this money is spent.