In a few weeks, the Minister of Finance will deliver the most anticipated federal budget since 1995, when the Chrétien government tabled its Program Review budget. The 2012 budget has two related objectives: first, to eliminate the federal deficit by 2014 and, second, to cut back on federal programs and activities that are deemed to no longer be part of the government's future agenda.

This will be a defining budget for Stephen Harper. It will chart the financial course of the federal government for years to come. The current worldwide financial crisis will depress global economic activity, which in turn will limit Canada's export sales while increasing support payments to unemployed Canadians.

Given Canada's great success with Program Review more than 15 years ago, it is expected that Treasury Board ministers will make good use of lessons learned from that earlier exercise, drawing on the numerous studies and evaluations the government has produced over the past decade as they consider the Deficit Reduction Action Plan (DRAP). This is the magic moment for policy analysts inside and outside of government to play a significant role in the review process by providing robust and unassailable evidence of program effectiveness to the DRAP committee.

Under the best of circumstances, program evaluation is difficult to do. Reliable measurement is a challenge, and determining long-term effectiveness depends on appropriate time frames, well-designed studies and clearly defined objectives. As a result, it is impossible to generate valid studies without years of planning and foresight.

Interestingly, the Treasury Board made evaluation a priority in 2001, when it revised its policy framework for program evaluation, and did so again in 2009, at about the same time as the Auditor General was preparing a chapter on evaluation in one of her last annual reports. The 2009 policy transferred responsibility for program evaluation to deputy ministers and the program manager community, in an effort to place responsibility for evaluation at the most appropriate level in the hierarchy. In addition, the Treasury Board required that every federal program, including grants and contributions, be subject to an evaluation every five years. Stated another way, the 2009 policy requires that 20 percent of all federal government programs undergo a summative evaluation each year to determine whether they are providing value for money.

Unfortunately, it appears that the government's evaluation policy is not having its desired effect. Recent Treasury Board analysis suggests that, despite the existence of an evaluation policy since 2001, the government carried out only about 170 evaluations per year between 2004 and 2008, covering between five and 13 percent of all federal program spending rather than the 100 percent envisaged by the policy.

In fact, the shortcomings may be more serious than limited coverage. In her 2009 report to Parliament, the Auditor General noted that many evaluations were plagued by serious measurement problems. She also concluded that many were late, provided inconclusive results, used rudimentary methodology, and contained findings that were often unreliable.

Evaluation serves many important functions beyond providing input into resource allocation exercises that happen once every 15 years. It is a useful tool to hold government to account, to improve the performance of ongoing programs, to free up funds for new efforts, and to provide lessons for other comparable programs.

High-quality evaluation work used to be a strength of the Canadian federal public service. In the 1970s, its policy analysts were internationally recognized for their policy training programs, for the expertise of the effectiveness and efficiency divisions in the Treasury Board Secretariat, and for the analytical strength of the evaluation units in many federal government departments.

At this juncture, when the government needs to make serious spending cuts to meet its fiscal targets, it appears the data are not there to answer the key question about the value of the programs being reviewed. Not only is this unfortunate, it also represents a failure of the management system.

Nothing is more fundamental to good government than knowing how well tax money is being spent on behalf of citizens. If the role of Parliament is to hold government to account, then it is imperative that the appropriate House of Commons and Senate committees demand valid and up-to-date evaluations. It is simply not sufficient to have rules and policies without a demand from legislators for their appropriate use.

David Zussman holds the Jarislowsky Chair in Public Sector Management in the Graduate School of Public and International Affairs and at the Telfer School of Management at the University of Ottawa (dzussman@uottawa.ca).