Beginning on April 1, 2013, and over a five-year cycle, the evaluation function in large federal departments and agencies will face a significant challenge: contributing to budget planning by assessing the effectiveness of all direct program spending and major statutory spending.

While this challenge originates in the 2009 Treasury Board Policy on Evaluation, it affects deputy heads as much as accounting officers. The scenario is clear: as deputies implement review targets under strategic review and the Deficit Reduction Action Plan, they want the best available evidence to explain and defend program results. Hence a robust demand for program evaluation.

The extent to which this scenario will play out is unknown, because the budgeting process is subject to Cabinet and budget secrecy. However, studies of the budgeting process have rated the federal government poorly in terms of attention to program performance and effectiveness: spending departments focus on developing expenditure proposals and linking them to government priorities, rather than on program performance. It is well known that the program evaluation function did not make much of a contribution to earlier program reviews in the mid-1980s and 1990s. Moreover, there is some indication that departments have lacked evidence about program performance when preparing for the current strategic reviews.

To meet the challenge, this situation will have to change. The question is what influences the integration of evaluation into budgeting: what is required?

This article draws its inspiration from a government-wide audit of evaluation (Chapter 1, “Evaluating the Effectiveness of Programs,” Fall 2009 Auditor General’s Report), and a 2010 Treasury Board of Canada Secretariat (TBS) report on the health of the evaluation function. Below are five factors influencing the integration of evaluation into budgeting: capacity, quality, coverage, timeliness and utility.
 
Capacity is the question of how to ensure sufficient, qualified evaluation staff and funding to meet the needs for program evaluation. In the three years following the enactment of the Federal Accountability Act, departments increased professional staff in evaluation units, and received some central funding to implement the Act, with its requirement for evaluation of grants and contributions programs. Despite these gains, departments have experienced a shortage of evaluators, and have faced difficulties finding experienced professional staff, particularly at senior levels.

Quality can be assessed in terms of coverage of issues, validity and reliability of methods, and the independence and objectivity of reporting. While studies sponsored by TBS have rated most program evaluations as acceptable in quality, the assessment of program effectiveness also depends upon reliable data on performance. In many cases, evaluation has been hampered by unavailable or low-quality data.

Coverage of grants and contributions programs is being achieved, in compliance with the legal requirement. In terms of overall direct program spending, it remains a challenge for departments to evaluate an average of 20 percent of spending each year. In particular, full coverage calls for a change in planning practices, since evaluation efforts can no longer be targeted on the basis of risk.

Timeliness is the extent to which evaluation reports can be made available at the appropriate time in the annual budget cycle or when needed for other types of expenditure review. It has been identified as an underlying reason for the lack of use of evaluation findings in budgeting. There is a need for coordination with evaluation planning, which is carried out on a rotating five-year basis and requires Treasury Board approval. In addition, evaluations of larger, more complex programs can take more than one year to complete, potentially rendering them unavailable for a given annual cycle.

Utility refers to the use of evaluation findings in budgetary decision making: the question of fit or alignment. Some officials have viewed program evaluation as detailed, longer-term research, not always well suited to the less formal evaluations used in budgeting. In addition, budgeting tends to focus on fiscal aggregates, notably the budgetary balance and the elimination of the deficit; however, expenditure review targets can bring it back to the level of programs.  

In conclusion, the successful integration of evaluation into budgeting is a work in progress. Many of the necessary steps are known, and achievable. Overall, program evaluation can be better aligned with budgeting by tailoring evaluation studies to the needs of the budgetary process.


Tom Wileman, a principal with the Office of the Auditor General of Canada (retired), led the 2009 government-wide audit of program evaluation. He is a board member of the Performance and Planning Exchange (PPX).