Canadian Government Executive - Volume 23 - Issue 1
[Chart 1 – A Hierarchy of Potential Users and Uses for Results Information: levels of aggregation of M&E information and analysis, rising from foundational information to understand 'results' and Program Managers in Departments, through Senior Government Officials (DMs) and Central Agencies, Cabinet and TB, to Parliamentarians and Canadians.]

At the other end of the hierarchy, Parliamentarians and Canadians likely don't need the same level of detail in most reporting on results – some select indicators and an overview analysis may be sufficient to inform the most interested on how and how well results associated with particular goals are being delivered. What they do need, though, as do all users of results information along such a hierarchy, is an assurance that expectations regarding public sector governance, good management practices and accountability are being met. Doing so demands that 'measuring and reporting on results' start with what Chart 1 refers to as 'foundational information to understand results,' and thus incorporate both M and E as tools to measure and analyze 'results.'

Canadian Experience to Date with Measuring 'Results'

Unlike most other countries or jurisdictions where deliverology has been introduced, Canada has at the federal level a more comprehensive and systematically institutionalized system of monitoring and evaluation for measuring 'results.' A formalized Evaluation function in government has evolved over the past four decades, with all major departments and agencies devoting resources to assess the effectiveness, efficiency and continued need for public programs. (The federal government currently produces some 125 evaluation studies annually dealing with the results of various public programs. All are easily accessed on public departmental websites.) Indeed, this has facilitated the introduction of a more systematic and results-oriented approach to Performance Measurement/Monitoring across government. (There is no doubt that the introduction of Results for Canadians in 2000 as the government's management framework gave added impetus to the drive to build a more results-oriented approach to both M and E as the key tools to be used by departments and agencies.) The Canadian model could succinctly be described as 'central leadership with departmental/agency delivery,' with the Treasury Board Secretariat (TBS) setting the policy, standards and guidelines that underlie the use of both E and M across government.

But, while Canada is regarded internationally as one of the world leaders in Evaluation, measuring and using results information is not without its challenges. Indeed, the Evaluation Policy has not remained static: it has been altered some four times over the past four decades, in part to address issues of measurement as well as issues of experience and use, with an increasing focus on 'results' over this period. The most recent adjustment occurred in July 2016, when the government's Policy on Results incorporated new standards and guidelines for both E and M.

In theory, therefore, some of the potential challenges that deliverology typically faces in other jurisdictions – shortage of needed skills, capacity, data and experience in translating aspirational goals into measurable outcomes – should readily be handled on the Canadian federal scene, given the lengthy experience and evolution with the use of M and E to measure results.
This of course hinges on whether or not the Canadian context is readily incorporated into the deliverology process.

What Can We Learn about the Deliverology Process from the International Arena?

While different versions of the deliverology concept have been introduced into various governments and jurisdictions around the world, there is limited knowledge of its effectiveness. Indeed, the Telfer School of Management at the University of Ottawa conducted a systematic literature review and concluded that "there is, at this point in time, no definitive research that points to the success or failure of the concept [but that] success or failure depends a great deal on the 'how' of implementation" (see Greg Richards et al., "Does Deliverology Deliver?", CGE, December 2016, p. 5). More comprehensive research by the World Bank has concluded that the deliverology approach ought not to be viewed as a 'magic bullet,' noting that "Each country has its own public service values, reform program and institutional pattern and a Delivery Unit must fit within that context if it is to effectively support improvement and reform." (See R. Shostak et al., "When Might the Introduction of a Delivery Unit Be the Right Intervention?", Governance and Public Sector Management Practice Note, World Bank: June 2014.) Understanding and aligning with the Canadian context would thus seem to be critically important as deliverology gets rolled out across the federal landscape. But what would that entail?

Examining international experience with deliverology can be informative, since it has pointed to both potential pros and cons in the way it is being implemented. On the positive side, deliverology, given its link to political power, generally brings with it authority, resources, flexibility and a striving to provide timely advice and quick turnaround (i.e., a sense of urgency that can potentially cut through bureaucratic roadblocks to action). Potential downsides with deliverology which might have relevance to the Canadian scene include: a tendency to rely on scorecards and key performance indicators (KPIs), ignoring evaluation as a tool to measure and understand 'results' (it is worth noting