Canadian Government Executive - Volume 23 - Issue 1

Connecting the Dots between M, E, RBM…and Deliverology

Robert Lahey

The year 2016 saw two critical words circling the halls of Ottawa – 'results' and 'deliverology.' The first is not new to the federal public service, though occasionally it is touted as a major breakthrough (let's not forget that Results for Canadians was the centrepiece of the government's management framework at the turn of the century). The second proved a real attention-getter, in part because of the newness and awkwardness of the word itself, but also because of its friends in high places. Based on an approach introduced by the Tony Blair government in the UK in 2001, the concept of deliverology was introduced onto the Canadian federal scene with the creation of a centre-of-government 'results and delivery unit' to be based in the Privy Council Office (PCO). The intent is laudable – to put a sharper focus on government priorities to help ensure that government delivers on its commitments, goals are met and 'results' are delivered to Canadians. As is so often the case when best intentions are aligned to new approaches, though, the devil is in the detail, and government, in instituting the deliverology model, will need to take account of the Canadian context if it is to work effectively.

What does it mean to 'measure results'?

Of critical importance to an effective deliverology process are the mechanisms and tools to measure and analyze 'results' – performance measurement/monitoring (M) and evaluation (E). (Internal Audit (IA) is also an important public sector tool, but it traditionally examines issues of efficiency and compliance, not the effectiveness of programs or policies.) M and E are complementary to one another: M is best suited to measuring outputs and often the short-term outcomes of a program, while E is generally the more cost-effective tool to measure, analyze and understand medium- and longer-term outcomes and impacts. All represent 'results' of a government intervention, though the need for such information, and the underlying detail required, likely varies across a range of 'users' of results information.

One might think of a whole hierarchy of potential users of 'results' information that varies, in part, by the type of information they may need for their particular use (see Chart 1). At the operational end, from the perspective of good governance, management and accountability, government managers need to understand how and how well their programs are operating: are they delivering the results expected and, if not, why not and what needs to be done to alleviate or improve the situation? Do they also represent value for money? Most of this information and understanding needs to be gleaned from a systematic and objective assessment of program performance, including delivery on results – in other words, via an evaluation. Monitoring information that examines trends or comparative statistics across a few select indicators will generally not provide the level of information needed to understand performance associated with most social or economic programs.