If the purpose of the Government of Canada’s performance measurement system is to keep people busy, then it works beautifully. If it’s supposed to show how programs make life better for Canadians, it’s a dud. More than 2,500 measures are just too many, period!
The government has over 2,500 performance measures, and yet twice in the last six years the Auditor General reported that program evaluations were hampered by lack of performance information.
Government executives tell me they ignore performance data generated by the Management, Resources and Results Structure (MRRS). The Treasury Board Secretariat (TBS), “owner” of the MRRS Policy, recently found that many Deputy Ministers make little use of MRRS performance data.
The performance measurement system is broken. Its basic purpose, producing evidence that programs help Canadians by contributing to social, economic or environmental outcomes, is sound, but implementation suffers from too many low-quality measures and inappropriate use of performance data. Here are a few problems:
Invalid Measures. Valid measures would provide evidence that programs help make us healthier, wealthier, safer, etc. Many of the 2,500+ measures don’t do this. Here’s just one example:
An RCMP program aims to contribute to this result: The rate and severity level of crime is reduced. The performance measure is: Percent of Canadians who strongly agree/agree with the statement “I am satisfied with the RCMP contribution to a safe and secure Canada.” Satisfaction is a perception, and perceptions can rise or fall independently of actual crime rates. The measure tells us little or nothing about the result. It is invalid.
Incomplete Measures. Many measures are made useless by missing information.
An expected result of an Environment Canada program is Biodiversity goals … are integrated into federal, provincial and territorial strategies … The performance measure is Percentage of federal departments with natural resource or environmental mandates, provinces and territories that … are implementing measures to enhance biodiversity.
The measure incorrectly treats its units of account (Departments, provinces/territories, measures to enhance biodiversity) as having equal value. Suppose every jurisdiction except Nunavut implemented biodiversity measures. That looks like excellent performance, except that Nunavut is the largest province or territory by surface area; its impact on biodiversity could far outweigh that of all the others. Counting “measures to enhance biodiversity” has the same weakness: not all “measures” have equal impact.
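To see how badly equal weighting can mislead, treat surface area as a crude proxy for potential biodiversity impact. Twelve of thirteen jurisdictions implementing scores 92 percent on the unweighted count; but Nunavut alone covers roughly a fifth of Canada’s land mass, so the area-weighted score would sit closer to 79 percent. The headline number flatters the performance.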
A numerator without a denominator is another form of incompleteness. An expected result of a Transport Canada program is “A competitive marine transportation sector.” The measure is “Tonnage handled by Canadian carriers (domestic)”. Tonnage, without context, raises the question: relative to what?
Improperly Specified Results. MRRS Policy seeks performance measures related to results, and correctly states that results are economic, social or environmental changes. But what Departments often present as results are really outputs — services delivered, information provided, etc.
It’s common to see results described as “Target population has access to an output”, as in this Canada Revenue Agency (CRA) example:
Businesses have access to timely and accurate responses to their tax enquiries.
CRA delivers “timely and accurate responses”. They are outputs, not results. The words “have access” do not transform them into results!
Other “results” are just simple descriptions of outputs, as in this Health Canada example:
Timely response to emerging food and nutrition safety incidents … “Timely response” is what Health Canada delivers — it’s an output.
Inappropriate Use of Data
Targeting
MRRS Policy requires performance targets. But targets are problematic because programs only influence results (while they control outputs). Missing a target doesn’t necessarily mean failure, nor does meeting a target necessarily mean success.
A Canadian Heritage program aims to increase availability of Canadian-authored books. The target is 5,500 new Canadian-authored titles. Why this particular number? Would 4,000 titles signal failure?
An Agriculture and Agri-Food Canada program aims to expand markets for Canadian producers. Its target is $105.37 billion in food shipments. But the program’s influence over shipments is undoubtedly small, making the target meaningless.
Efficiency Measures
Departments are required to measure efficiency. This might make sense for programs that always do the same thing in the same way, which is exactly why it is irrelevant to the many programs that don’t.
Consider a grant program that receives funding requests of varying size and complexity from a diverse range of applicants. Suppose it took 30 days to process a $100,000 grant proposal from a tiny Aboriginal organization in a remote community, and 25 days to process a $500,000 proposal from a provincial government agency. The numbers say the second grant was managed six times more efficiently than the first.
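Where does “six times” come from? Read efficiency as grant dollars processed per day of effort, the only reading that yields the figure: $100,000 over 30 days is about $3,333 per day, while $500,000 over 25 days is $20,000 per day, six times as much.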
Is this right? Or should working with the tiny, remote Aboriginal organization take longer than working with the provincial agency? If efficiency is expenditure per unit of output, then the units of output are not comparable. Delivering the first grant is fundamentally different from delivering the second one.
The government is full of programs with diverse clientele and no “cookie-cutter” way of operating. There are no simple, valid measures to tell us how efficient all programs are.
What the Treasury Board Secretariat Should Do
The performance measurement system tries to do too much, and does it poorly. It should do less, and do it well. TBS should aim for a smaller system that produces high-quality, results-oriented performance information relevant to public-service managers, Parliamentarians and Canadians.
TBS should:
1. Significantly reduce the total number of performance measures in the system.
2. Eliminate improperly formulated result statements as well as invalid and incomplete performance measures.
3. Drop requirements for performance targets and efficiency measures.
4. Strengthen its own capacity to advise Departments on performance measurement and provide quality control.
Mark Schacter is a public-management consultant based in Ottawa. This article is a condensed version of a longer paper, “Drowning by Numbers”, that can be downloaded at www.schacterconsulting.com