As the global economy struggles to regain some forward momentum, Canadian governments are looking for ways to limit government spending in light of reduced revenues, increasing demands for services and soaring deficits.

In the mid-1990s, Canada found itself in a similar financial situation and was forced to make radical changes by scaling back its spending priorities. One of the lessons from Program Review, the formal process used at the federal level to examine all government spending at the time, was the value of setting different savings targets for key programs based on a considered view of each program’s effectiveness. In addition to setting targets, the Masse Committee in 1994 also created a strong challenge function at the centre of the decision-making process to ensure that departmental plans were realistic and evidence-based.

To deal with current fiscal challenges, the Harper government has kick-started a similar exercise to scale back the size of the federal government in order to achieve a balanced budget by 2014. This new effort, labelled the Deficit Reduction Action Plan (DRAP), is being led by a special Cabinet committee chaired by the President of the Treasury Board, Tony Clement.

There are two features of the committee’s work that differentiate it from Program Review. First, all departments and agencies have been asked to generate two across-the-board cut scenarios, based on five percent and 10 percent savings. Second, the Treasury Board Secretariat is relying on the outside advice of a management firm with expertise in cost containment to look for efficiency savings through improved productivity.

Experience has shown that using a scythe to chop spending is a crude method of achieving savings. It is arbitrary and unfair, since the cuts apply equally to well-performing programs and to poorly performing ones, and to efficient organizations as well as poorly managed ones. With this method of reducing costs, the DRAP ministers must find their own way to balance savings against the needs of Canadians in high-priority policy areas.

One possible key to achieving this balance is using the evaluations that have been conducted by the federal government during the past few years. Canada has a proud reputation for the quality of the program evaluation work done in many departments. In fact, it could be argued that at one time Canada had the most robust evaluation system among OECD countries.

While one of the unintended consequences of Program Review was a weakening of policy capacity in the federal government, the government reaffirmed the value of program evaluation in 2009 by announcing a strengthened evaluation policy.

The new policy is far-ranging and exceeds the reach of previous program evaluation policies, since it covers “all direct program spending and the administrative aspect of major statutory spending,” as well as programs that are set to terminate, on a five-year cycle.

While the potential for using the 2009 evaluation policy is obvious, there are two reasons why this initiative may be of only limited value to the current Cabinet committee. First, there is growing concern among evaluation experts that the quality of their work is not meeting their own high standards. In a recent report, Lay of the Land: Evaluation Practice in Canada, the practitioner authors argue that too many evaluations have little impact because they do not ask the right questions, because there is too much resistance from program administrators, and because politicians are reluctant to listen to negative assessments of their programs.

Second, there is an increasing fear that the government is not interested in using evidence in decision making. The most recent example comes from the Supreme Court of Canada’s ruling in the Insite case, concerning the proposed closure of the supervised drug injection facility in Vancouver’s east side. In its unanimous decision, the Court admonished the federal government for its unwillingness to use information, data or analysis in making policy decisions. The Court reminded the government that it could not disregard the facts and that it had an obligation to rely on “evidence” in making policy decisions.

As in the mid-1990s, the federal government has the opportunity to use the information gathered in formal evaluations to inform decisions. While there are reasons to be sceptical that the millions of dollars spent on program evaluation will find their way into the departmental submissions, there is no reason why program evaluation should not find a place in the planning cycle much as audit has become a feature in government operations.


David Zussman holds the Jarislowsky Chair in Public Sector Management in the Graduate School of Public and International Affairs at the University of Ottawa (dzussman@uottawa.ca).