Canadian Government Executive - Volume 23 - Issue 1

remainder was either partially completed, revised to a later timeframe, not yet commenced, considered obsolete, or of unknown status. Common barriers identified for non-completion included a lack of program-level resources and capacity; other external factors (i.e., activities linked to the programs or departments); changes in mandate or priorities; and changes in process or procedures.

Again, from Dr. Savoie's side, what does it mean? Have concrete transformations resulted? Have evaluation recommendations, and the management response actions that follow them, made a difference? The current answers are both general and suggestive.

Evaluating the evaluation policy – what follows?

In the Evaluation of the 2009 Policy on Evaluation, TBS-CEE claims that evaluations are used to inform expenditure management decisions, policy development and program improvement, accountability, Cabinet decision-making and public reporting. Unfortunately, the lack of specificity beyond stakeholder self-reporting falls short of fully justifying the evaluation function. Even TBS-CEE acknowledges that additional tracking indicators are needed to validate and enhance the monitoring of evaluation use beyond self-reporting. It is also acknowledged that departments lack the resources and capacity to monitor and track beyond the instrumental use of evaluations. Among its recommendations, the evaluation calls for 'Reaffirming and building on the 2009 Policy on Evaluation's requirements for governance and leadership of the departmental evaluation functions, which demonstrate positive influences on evaluation use in departments.' In its management action plan response to the 2015 evaluation study, TBS confirmed that in developing a renewed policy for evaluation it would 'promote the conduct of cross-cutting analyses on completed evaluations and use of these analyses to support organizational learning and strategic decision making.'

In July 2016, TBS released its new Policy and Directive on Results. The mandate and focus for evaluation remain largely intact – with emphasis on program impacts, innovation, and evidence-based decision-making. Support for the evaluation function now centres on agility: new and innovative approaches, timeliness, cross-cutting analysis and shared practices. After-the-fact use is given relatively thin attention. One hopes that over time, the reporting parameters surrounding the quality and use of evaluation recommendations and management response actions may be adjusted. But in the absence of hard evidence, Dr. Savoie's conjectures continue to hold force.

Can more be done?

It is argued that annual Departmental Performance Reporting to Parliament represents an occasion to make the case for evaluation-linked program improvements. It does not, at least not with any sufficiency. Given the quantity of evaluations produced, it should be possible to issue a separate but consolidated annual statement about evaluation-associated program outcomes, decision-making, and learning. With hundreds of management responses to evaluation recommendations in hand, it should also be feasible to make a specific case for the usefulness of evaluation.

At the moment, rolling up evaluation-linked management actions appears to be a near-impossible task. This is in part due to the wide variety of monitoring and tracking systems across departments. Why so many exist remains a mystery. The elements to be tracked are relatively few in number.
They can be standardized and systematized. The human and financial implications associated with evaluation management response actions can be estimated. If both evaluation recommendations and the management responses to those recommendations are adequate (specific, measurable, realistic, actionable, and clear about who is accountable and within what timeframe), an integrated view should be achievable.

Large international organizations such as UNICEF, UNDP and the World Bank have had much success in tracking and consolidating management responses to evaluation recommendations. Central leadership is essential. Evaluators too should start to re-think their roles and approaches. Perhaps more secondary resource support could be offered within evaluations to assist in the better formulation of program performance indicators. Evaluators could also extend their services to helping program managers and decision makers articulate quality evaluation management response and action plans.

Is the time right for an Evaluator General to raise the visibility of evaluation? If it is possible to accomplish so much under 'deliverology,' then documenting evaluation use is attainable. It requires 'turning up the volume' on evaluation use in the Policy on Results. More needs to be done, and can be done, to refute the arguments suggesting the inutility or failure of program evaluation.

Wayne MacDonald is President of Infinity Consulting and Legal Services, Vice-President of the New Brunswick Chapter of the Canadian Evaluation Society, and a former Director of Corporate Performance and Evaluation, Social Sciences and Humanities Research Council of Canada.
