An article in The Economist (March 8, 2015) notes that the term ‘Deliverology’ was originally coined to poke fun at Sir Michael Barber’s approach to managing the Delivery Unit formed by the Blair government in 2001. Sir Michael embraced the term, and it was subsequently adopted by its proponents as shorthand for the capability of government organizations to deliver on policy commitments. The UK government implemented Deliverology through the creation of the Prime Minister’s Delivery Unit: a centrally situated group with direct access to the PM, focused on achieving clear outcomes for citizens. The Delivery Unit model has since been implemented in more than twenty jurisdictions around the world over the past 15 years. In some cases, positive results have been reported, but the model has also been the subject of some criticism.

To examine the impact of the Delivery Unit concept, the Telfer School of Management at the University of Ottawa conducted a systematic literature review. We learned that there is, to date, no definitive research pointing to the success or failure of the concept. There are, however, many case studies indicating that, like other Results-Based Management (RBM) models, success or failure depends a great deal on the “how” of implementation. In this article, we first outline the key components of Deliverology, discuss what makes it different from other RBM frameworks, identify lessons learned from implementations in other jurisdictions, and conclude with practical considerations for transitioning to the Canadian Government’s new Policy on Results, which came into effect on July 1, 2016.

What is Deliverology and how is it different?

Deliverology is an approach for managing and monitoring the implementation of activities that, according to its creator, have a significant impact on outcomes (see Michael Barber, Paul Kihn and Andy Moffit, Deliverology: From Idea to Impact, 2011). It is, therefore, similar in scope to other RBM frameworks that encourage the setting of clear targets and the use of performance measurement to drive continuous program improvement. The difference between Deliverology and these other frameworks is one of emphasis, and two aspects in particular are key: first, the establishment of a central unit focused on managing performance against key policy outcomes; and second, the development of processes for using performance information to encourage change and improvement.

On the first point, many RBM frameworks provide outlines for target setting and the creation of performance measures. The unspoken assumption is that performance against expected policy outcomes is everyone’s business. We have seen, however, the silo effects that are created when each individual department or responsibility centre focuses on its specific targets to the degree that it loses sight of the bigger picture. The creation of a Delivery Unit that helps keep the broader policy outcome in perspective is one area where the Deliverology concept makes an important contribution.

In terms of creating processes for improvement, every public sector organization already gathers a great deal of data. The problem is that this data is often used primarily for reporting purposes and rarely analyzed in detail to drive program performance. By encouraging the development of routines for analyzing data and for implementing changes based on that analysis, Deliverology drives an important aspect of managing organizational performance that is often easy to forget: the discipline of “follow-up”.

Lessons Learned

Since the inception of the first Delivery Unit in the UK in 2001, the concept has spread to the USA, Africa, Southeast Asia and Europe. Criticisms of the approach abound, as do reports of its overall effectiveness. Our research categorizes the lessons learned as follows:

Sustained senior management and political interest in the policy outcomes is critical for success.

It’s a truism that nothing succeeds without senior leadership interest, but in many cases “interest” is demonstrated merely by signing off on the project proposal. In successful Deliverology implementations, senior leadership involvement went much further: politicians and senior departmental leaders reviewed regular progress reports and helped remove barriers to success where needed.

So, “interest” is not simply tacit agreement that the outcomes are important; it also means direct, visible and sustained involvement to help remove barriers and allocate resources where needed.

The implementation will not survive without credible data.

“Objectively verifiable” data is critical. It avoids arguments about the facts, so that the Delivery Unit and program managers can get on with problem resolution and mid-course correction.

It is important to create documented routines for using the data to drive program changes.

Successful implementations had a documented plan for what happens to the data and the analysis: who receives it, who has decision rights, and how the impact of changes will be evaluated.

Measurement can create a tendency to manage towards the performance standard.  

A great deal of the criticism of Deliverology revolves around “gaming” of the system to meet the established performance standard.

For example, in the education sector, teaching children only what they need to know to pass standardized tests achieves the result being measured, but not necessarily the broader outcome.

Silo effects can occur.

Silo effects occur when certain groups are singled out for additional attention based on pre-established performance targets; the others can, by default, be neglected.

Sticking with the education sector as an example, children far below the performance standard might be selected for extra help, while those at the standard are ignored until everyone else has caught up.

It’s important to align the organization’s “philosophy of management” with the Deliverology concept.

Deliverology has also been criticized for its apparent “command and control” culture. There are different ways of implementing it, however, and highly successful implementations have pushed decision-making down to the front lines, ensuring that the people doing the work are empowered to improve it.

Some level of control is always needed, but it is important to match the control mechanisms with the mandate and the risk profile of the organization.

Practical Considerations for the Transition to the new Policy on Results

Transitioning to the new Policy on Results is not just a paper exercise for the Planning and Performance Measurement specialists in Corporate Services. Every Program Owner (or Program Official) needs to be directly engaged in making clear how the results of their program contribute to their department’s priorities and mandate. The linkage back to the Minister’s mandate letter is imperative.

In addition, disciplines such as policy, finance and data management should be involved, both to understand the implications of the new Policy on Results constructs and to ensure broader buy-in. Chief Results and Delivery Officers (CRDOs) and Chief Data Officers (CDOs) are in the process of being put in place, but theirs is a coordinating role, not a dictating one.

Some departments are simply porting their Program Activity Architecture (PAA) content into the new Departmental Results Framework (DRF), while others are taking a “clean sheet” approach to rethinking how programs are defined and structured.

Some are going further – updating their Logic Models or Outcomes Maps to ensure their programs can tell a comprehensive “performance story”. This will then drive which Key Performance Indicators (KPIs) are required, and not the other way around. Then the whole conversation about targets, results profiles and tolerances can begin – with the bigger picture of program performance in mind.

Finally, although it has likely been said of every significant organizational change, sustained senior management commitment is critical. The transition will affect the programs senior managers oversee, their accountabilities and, ultimately, their department’s success. And that is exactly what Deliverology is meant to achieve.


Gregory Richards is the Director of the MBA Program and of the Centre for Business Analytics and Performance at the Telfer School of Management at the University of Ottawa. Richards@telfer.uottawa.ca

Carrie Gallo is a Partner in the Advisory Services Practice with Interis | BDO. cgallo@bdo.ca

Murray Kronick is a Senior Manager in the Planning and Performance Practice with Interis | BDO. mkronick@bdo.ca