The two key tools used by the Canadian government to measure program or policy “results” (i.e., the outputs and intended outcomes) are performance monitoring and evaluation. Internationally, many countries, encouraged by the World Bank and UN agencies, refer to this as their Monitoring and Evaluation (M&E) system. Though this term is not commonly used in Canada, it is introduced here to reflect the systems approach to results measurement.
Many senior officials would describe their organizations as “results”-oriented. The move to “managing for results,” which started in the mid- to late 1990s and really took off over the last decade, has raised the profile of the R-word and the vocabulary of results-based management (RBM) across all sectors. Managing for results, though, implies the need to measure results, and there is generally much less focus on this critical element of the RBM regime.
Evolution of M&E
Canada is highly regarded internationally for its M&E system, particularly in the field of evaluation, where its introduction into public sector management in Canada dates back to 1969. The first government-wide evaluation policy was established in 1977. Generally speaking, this was inspired by the notion of “letting the managers manage,” that is, allowing deputy ministers to assume greater responsibility for their departments and programs, while also being accountable for the performance of those programs and the prudent use of public funds.
The 1990s saw an increased move to performance monitoring and high-level reporting across many OECD countries. In Canada, this was inspired by a desire to make performance information more accessible and useful to parliamentarians and parliamentary committees.
The 2000s introduced a more formal “results” orientation into the public sector with the introduction of RBM. In this environment, evaluation and performance monitoring were recognized as key tools to help ensure a results focus, responsible spending and greater transparency and accountability across government.
The structure of the Canadian M&E system can be characterized by three important defining elements:
1. Departmental delivery, central leadership: a model based on a strong central management board that oversees and holds deputies accountable.
2. Emphasis on both monitoring and evaluation as tools of performance measurement: ongoing performance monitoring and the conduct of planned evaluations are recognized as tools to measure program and policy performance, serving to support good governance, accountability and results-based management.
3. Well-defined foundation setting the rules and expectations for performance measurement and evaluation: formalized government policies establish the standards of practice and guidelines and help clarify the government’s expectations, including the roles and responsibilities of all key players in the M&E system.
Along with the Treasury Board Secretariat (TBS) and individual departments and agencies, the Auditor General of Canada (AG) is an important element in the M&E system.
Within TBS, the Centre of Excellence for Evaluation (CEE) sets the rules for evaluation across government and supports the capacity-building needs and oversight responsibilities of the system. As well, relevant policy areas in TBS guide departmental managers and provide system-wide oversight on performance measurement and reporting. The deputy head of a department or agency has some flexibility in the resourcing of these tools, so as to be appropriate to the size and needs of the organization.
To support evaluation, all major government departments and agencies are required to establish an internal evaluation function, as well as put in place the following infrastructure: a senior-level evaluation committee, chaired by the deputy minister; annual and multi-year planning for evaluation; a departmental evaluation policy reflective of the government’s policy; and the mechanisms needed for follow-through on delivery of credible evaluation products. To help ensure independence of the evaluation function, the head of evaluation generally reports to the deputy head or at least has unencumbered access to the most senior official in the department. Deputy heads are also required by TBS policy to develop a corporate performance framework (the so-called Management, Resources and Results Structure, MRRS) that links all programs of the department to expected outcomes.
The Auditor General periodically monitors and reports to Parliament on the functioning of various aspects of the M&E system, an important oversight role that reinforces the health and sustainability of the system.
Sustainability of the system
M&E should not be considered an end in itself. A number of formal, centrally driven administrative policies introduced over the 1990s and 2000s have served as key drivers for both monitoring and evaluation. Some have had a direct impact on building M&E capacity in departments; others, though serving broader needs, have also generated demand for systematic and credible performance information.
An internal evaluation function could potentially be criticized for not having the necessary independence to “speak truth to power.” To deal with such a challenge, the Canadian model has put in place certain infrastructure and oversight mechanisms aimed at ensuring that internal evaluations of departmental programs or policies are indeed credible and objective. Some of these elements are instituted at the level of the individual department; others are instituted and enforced centrally.
Oversight in the Canadian model is implemented at both a micro and a macro level. At an operational level, TBS monitors individual departmental M&E initiatives and assesses each department/deputy head against a number of M&E-related criteria through its annual Management Accountability Framework (MAF) process. At a whole-of-government level, the AG conducts periodic “performance audits” that monitor the effectiveness of M&E implementation across the full system.
To be effective, there needs to be an enabling environment for M&E and a willingness to carry out performance monitoring and evaluation of government programs in full public view. Transparency has been a critical dimension underlying the government’s M&E system. Access-to-information legislation has played an important role in increasing accessibility of M&E studies to the general public, including the media.
Human resources (HR) capacity development is an ongoing issue given the large number of professional evaluators working in government, currently some 550. The introduction in 2010 by the Canadian Evaluation Society of a Credentialed Evaluator designation, based on a recognized set of competencies, is the most recent effort to professionalize evaluation.
Measuring the performance of programs and policies is not without its challenges. Canada, though one of the leaders in the area of M&E, still works to perfect the use of M&E tools for measuring performance.
While there has been considerable progress in developing program and corporate performance frameworks across government departments, the expectations regarding the establishment of performance monitoring systems have generally not been met, resulting in a lack of performance information.
One of the problems may be that the expectation for performance monitoring (the M) as a tool for measuring results is unreasonably high. Beyond the measurement of outputs and some short-term outcomes, the establishment of ongoing