Pre-condition #3: Systems Are in Place to Verify Performance
Departments must declare how they will report on performance in their Statements of Intent, and annual reporting intentions must be declared in the information supporting the Estimates. Results chains lay out the results expected and help identify appropriate measures for major interventions. Failure at any step in a results chain[2] can signal non-performance, flag risk, and drive managers to make improvements.
(Crown entities are subject to similar requirements under s.141 of the Crown Entities Act.)
Ideally, effectiveness is proven by measuring impact. Even when impact is measured, other indicators are often reported to speed up feedback and to build a richer ‘performance story’. A typical system for demonstrating performance will thus:
- Require proponents to lay out how the intervention is meant to work (#2, above).
- Identify what results should be visible and measurable if the intervention is working.
- Show how results will be measured (i.e. specify measures and comparison groups).
- Be implemented in advance, so that data can be collected as the intervention occurs (see the illustrative sketch below).
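A results chain is normally documented in planning papers rather than code, but a minimal sketch can show the shape of the information such a system needs: each expected result, the measure that will evidence it, and the comparison that makes the measure meaningful. The intervention, measures and figures below are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """One step in a results chain: the result expected and how it will be evidenced."""
    result: str            # what should be visible if the intervention is working
    measure: str           # how that result will be measured
    comparison: str = ""   # comparison group or benchmark, where one applies

@dataclass
class ResultsChain:
    intervention: str
    logic: str                              # how the intervention is meant to work (pre-condition #2)
    links: list[Link] = field(default_factory=list)

# Hypothetical example: a young-driver training programme
chain = ResultsChain(
    intervention="Young-driver training",
    logic="Training improves hazard perception, which reduces crash involvement",
    links=[
        Link("Courses delivered to plan", "Number of trainees completing", "Prior-year volumes"),
        Link("Hazard-perception skills improve", "% passing post-course test", "Untrained cohort"),
        Link("Crash involvement falls", "Crashes per 1,000 licence-holders", "Matched comparison group"),
    ],
)

for link in chain.links:
    print(f"{link.result}: measured by {link.measure} (vs {link.comparison})")
```

Laid out this way, a gap at any link (no measure, or no comparison) is visible before the intervention starts, while there is still time to put data collection in place.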
A robust system for managing and demonstrating performance will assess whether (an illustrative calculation follows this list):
- Inputs were used efficiently (or, ideally, cost-effectively).
- Outputs were delivered in the right quantity, without compromising on quality.
- Outputs reached those in need who are likely to respond to intervention (coverage).
- Core outcomes improved by the quantum expected, and in the groups ‘treated’.
- Intermediate and near-term outcomes are reported (as well as end outcomes) to hasten feedback, confirm output quality, improve attribution and pinpoint problems.
- Superior performance has been achieved by other jurisdictions, agencies or means.
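Several of these checks reduce to simple ratios once monitoring data is in hand. The sketch below is illustrative only: the field names and figures are hypothetical, and a real system would draw them from agency reporting rather than hard-coded values.

```python
# Hypothetical monitoring data for one reporting period
budget = 1_200_000            # inputs: dollars spent
outputs_delivered = 3_000     # outputs: units delivered
outputs_to_spec = 2_850       # outputs fully meeting the quality specification
eligible_reached = 2_400      # recipients who met the entry criteria
population_in_need = 4_000    # estimated population in need

unit_cost = budget / outputs_delivered              # efficiency of process
quality_rate = outputs_to_spec / outputs_delivered  # share of output fully to specification
coverage = eligible_reached / population_in_need    # share of those in need actually reached
targeting = eligible_reached / outputs_delivered    # share of the 'treated' group meeting entry criteria

print(f"Unit cost: ${unit_cost:,.0f}")
print(f"Quality: {quality_rate:.0%}, Coverage: {coverage:.0%}, Targeting: {targeting:.0%}")
```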
What gets measured depends on what must be assessed (Appendix 2; table below). This includes key aspects of performance and risks of unintended consequences. (A worked example of the cost-effectiveness measure from the final row follows the table.)
| Type of Result | Class | Focus On | Examples of Common Measures |
|---|---|---|---|
| EFFICIENCY OF PROCESS | Input | Efficiency<br>Utilisation<br>Economy | Real output price trend (inflation adjusted)<br>Price per unit, vs. benchmarks<br>% prison beds full / max. capacity used<br>Trend in real price (eg, per cop or nurse) |
| QUANTITY | Output | Volume produced | People receiving training / rehabilitation<br>Cases / complaints processed |
| QUALITY | Output | Quality of delivery<br>Timeliness<br>Acceptability | % output fully meeting specification<br>% ministerials / passports / etc on time<br>% who would use again / recommend use |
| COVERAGE (or Reach) | Output | Coverage<br>Targeting efficiency<br>Access | % population in need receiving output<br>% in ‘treated’ group who met entry criteria<br>% targets who did not access / use service<br>Time in queue (or other ‘big’ barrier to use) |
| NEAR-TERM | Outcome | Completion rate<br>Knowledge retained<br>Reduction in queue<br>Receipt of benefits<br>Incentives changed<br>Unintended effects | % finishing / getting qualified / in service<br>% core messages remembered<br>Average wait time / number in queue<br>% impoverished with more money<br>% believing regulatory change matters<br>Higher incident or reduced survival rates |
| INTERMEDIATE | Outcome | Cognitive change<br>Behaviour change<br>Risk reduction<br>Lifestyle change<br>Survival<br>Unintended effects | % aware of risks / able to use new idea<br>% investing / saving / quitting / working<br>Fewer drunken drivers / ‘bad’ incidents<br>% in jobs / new career / crime free<br>% alive after 30 days / time event-free<br>Graduates migrating or excessive uptake |
| END or FINAL | Outcome | More good stuff<br>More equity<br>Less bad stuff<br>Cost effectiveness<br>Unintended effects | Greater health / wealth / happiness<br>Less difference across deciles / areas<br>Fewer deaths / accidents / kids in care<br>Cost per unit of improvement in outcome<br>Increased welfare dependency, risk, etc |
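The ‘cost per unit of improvement in outcome’ measure in the final row compares programme cost with the outcome change attributable to the intervention, usually estimated against a comparison group. The figures below are hypothetical and purely illustrative.

```python
# Hypothetical figures: a road-safety programme costing $5 million
programme_cost = 5_000_000
crashes_treated_area = 180     # crashes after intervention, treated area
crashes_comparison_area = 230  # crashes over the same period, comparison area

crashes_avoided = crashes_comparison_area - crashes_treated_area  # improvement attributed to the programme
cost_per_crash_avoided = programme_cost / crashes_avoided

print(f"Crashes avoided: {crashes_avoided}")
print(f"Cost per crash avoided: ${cost_per_crash_avoided:,.0f}")  # $100,000 in this illustration
```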
Remember: ‘major interventions warrant a major effort to demonstrate performance’:
- Demonstrate ongoing need, efficiency, good delivery, coverage and outcomes.
- Credible performance stories tie an intervention logic to key performance measures.
- Focus on robust measures that help leaders reduce uncertainty about performance.
- You cannot measure everything. But the literature shows what can be measured.
Notes
- [2] Repeat measures may be required before managers accept a problem is real. Triangulation helps.
