It is no secret that evaluation reports in the federal public service are not used to the extent that they should be to support decision-making. This can be explained in part by the requirements of the 2009 Policy on Evaluation, which stipulates that all direct program spending must be evaluated every five years. The requirement to evaluate every program within that five-year span may not coincide with the actual needs of senior management. With the Policy currently being revised, there may be more flexibility to undertake more strategic evaluations. Evaluations that are better aligned with management’s needs will not only serve as a useful tool to support decision-making, but will also help address the concerns, priorities, and challenges that senior management might have.

Evaluators gain a great deal by choosing evaluation projects that are highly significant to their department, notably by factoring in departmental and senior management priorities. To accomplish this, evaluators must first ask themselves: “What keeps senior managers up at night?”

An Innovative Approach: Using the Audit Process for Evaluation Planning

The Evaluation Division at Infrastructure Canada recently adopted an innovative approach, similar to planning approaches used in some other departments and agencies, to improve its planning process. Instead of relying solely on a traditional planning approach based on the development of a program inventory, our evaluation team embarked on a special journey with the audit team, participating in a joint four-phase, risk-based process. What we hoped to gain from this unprecedented collaboration was a better plan: one that would be more useful to management and would help mitigate what keeps them up at night.

The first phase of the risk-based process focused on defining the universe of entities that could be evaluated. Building this preliminary evaluation universe required only a review of the departmental program inventory, but we wanted to expand this universe of possibilities, and we could only do so by learning more about our clients’ needs.

This was accomplished in phase two, in which we jointly held consultations with several levels of management within the Department, from program managers to the Deputy Minister, as well as with counterparts in other departments. These consultations gave us a better understanding of their top priorities, concerns, challenges, and areas of risk, as well as their appetite for future audit and evaluation reports.

The valuable information obtained through the consultations fed into phase three, the risk assessment, during which we created an expanded list of potential evaluable entities. To measure the level of risk of each entity, we developed six risk criteria: Materiality, Previous Evaluations and Audits, Public Sensitivity, Recent Changes, Complexity, and Senior Management Interest. Each evaluable entity was assessed against all six criteria.

The fourth and last phase was the identification of potential evaluation projects. Here we took a number of essential factors into consideration: the program inventory, the highest priorities and concerns of senior management, the highest levels of risk, and the Policy requirements. The outcome was a list of potential evaluation engagements in the form of a five-year calendar, with a rationale and a strategic evaluation approach for each project to explain the reasoning behind each choice. We then presented the calendar of evaluation projects to senior management for validation, which resulted in our new evaluation engagements.
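To make the phase-three risk assessment more concrete, the short Python sketch below shows one way such a scoring exercise could be operationalized. The six criteria are the ones described above, but the rating scale, the equal weights, and the sample entities are hypothetical assumptions added purely for illustration; the actual assessment method may have differed.

```python
# Illustrative sketch of a risk-scoring matrix. The six criteria come from
# the process described above; the weights, the rating scale (1 = low risk,
# 3 = high risk), and the sample entities are hypothetical.

CRITERIA = [
    "Materiality",
    "Previous Evaluations and Audits",
    "Public Sensitivity",
    "Recent Changes",
    "Complexity",
    "Senior Management Interest",
]

# Equal weighting is assumed here; in practice some criteria could count more.
WEIGHTS = {criterion: 1.0 for criterion in CRITERIA}


def risk_score(ratings):
    """Combine per-criterion ratings into a single weighted risk score."""
    return sum(WEIGHTS[c] * ratings[c] for c in CRITERIA)


# Hypothetical evaluable entities, each rated against all six criteria.
entities = {
    "Program A": dict(zip(CRITERIA, [3, 1, 2, 3, 2, 3])),
    "Program B": dict(zip(CRITERIA, [2, 3, 1, 1, 3, 2])),
}

# Rank entities from highest to lowest risk; the top of this list would feed
# the identification of potential evaluation projects in phase four.
for name in sorted(entities, key=lambda n: risk_score(entities[n]), reverse=True):
    print(f"{name}: {risk_score(entities[name]):.1f}")
```

In a sketch like this, ranking the scored entities from highest to lowest risk produces the shortlist that, combined with the program inventory, senior management priorities, and the Policy requirements, would inform the five-year calendar.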

The joint participation of the Evaluation and Audit teams in this process also yielded another product: the Department’s very first combined Risk-Based Audit and Evaluation Plan.

Success and Lessons Learned

What contributed most to the success of this collaborative approach were the joint consultations held with multiple levels of management. By contacting clients only once, we significantly reduced the burden on them while still obtaining strategic and valuable information. Moreover, by taking into account the clients’ true needs, challenges, and risks, we ended up with a list of evaluation engagements that is better aligned with current priorities and therefore more likely to be useful to decision-makers. As for lessons learned: moving forward, extra time should be allocated immediately after the consultations so that the evaluation team can better define the preliminary objectives and scope of its next evaluation engagements.

Marie-Josée Courchesne is the Manager of Evaluation Services at Infrastructure Canada. She holds a Master’s Degree in Public Administration, with a specialization in Program Evaluation, from the École Nationale d’Administration Publique (ENAP) in Gatineau. mjcourchesne@hotmail.com
Linda Vertefeuille is currently working as an Evaluation Analyst at Infrastructure Canada. She recently obtained her Master’s Degree in Public Administration, with a specialization in Program Evaluation, from the École Nationale d’Administration Publique (ENAP) in Gatineau. vertefeuillelind@hotmail.com