This year is the “International Year of Evaluation”, an area that is still budding in the Canadian public sector. To discuss the state of the profession and of the practice of evaluation, CGE Editor Patrice Dutil caught up with Benoît Gauthier, President of the Canadian Evaluation Society. Gauthier has been an evaluator for more than thirty years, in government and in private practice. He has taught social research methodology and he is the editor of the textbook Recherche sociale: de la problématique à la collecte des données. He was trained at Université Laval, Carleton University, and the École nationale d’administration publique.

CGE: An international commitment to evaluation is good news. As the celebratory year ends, what is your take on the state of program evaluation in Canada?

BG: Program evaluation in Canada is thriving in some areas but languishing in others. The federal government has firmly established its practice of regular evaluations that feed into the decision-making process, but it has not been innovative in its approaches. Provincial and local governments are generally not very active in evaluation; they constitute the next frontier for the Canadian Evaluation Society. The not-for-profit sector implements regular evaluation and monitoring efforts, with very limited means and using approaches that are much more participatory, innovative, and grounded than those of other sectors. There is a lot to learn from the not-for-profit sector in terms of evaluation relevance, use, timeliness, and efficiency. Internationally, Canadian evaluators are sought after for our range of practice and methods, which focus on capacity building, evidence-based decision making, two-way knowledge transfer, and sustainability.

CGE: Where do you see encouraging developments in practice?

BG: In the past decade or so, there has been extensive innovation in evaluation, including developmental evaluation, realist evaluation, theory-based evaluation, and contribution analysis. Many of these developments were in response to the need for more agile evaluation that delivers information that is useful for decision-making — at the policy, program, community and project levels — away from the purist social research ideal of the 70s and 80s. In recent years, evaluators have understood that, while they have to maintain a level of independence from decision-makers so that they can ask the tough questions and speak truth to power, their work is most useful when it is embedded in the organization and when the evaluation's findings become organizational knowledge. This requires concerted strategies to translate evaluation observations into meaningful messages for management and stakeholders, and to mobilize that knowledge in a way that is adapted to each audience.

CGE: How do you see the profession of evaluation progressing?

BG: There has been formidable progress in evaluation as a profession in the past ten years or so. When I became an evaluator in the 80s, evaluation was a simple application of social science methods and we defined ourselves as applied social scientists. Slowly, a professional consciousness has grown among Canadian evaluators; by 2008, the professionalization impetus was such that the CES was able to create, with healthy but not fractious discussion, the first professional designation for evaluators in the world, that of Credentialed Evaluator (CE). The CE program is based on a code of ethics, a set of practice standards, and a collection of core competencies for evaluation practice in Canada. Active since 2010, it is still the only one of its kind, but the professionalization trend has been a key theme of the international year of evaluation. Other evaluation societies — and there are more than 150 evaluation organizations in the world — watch the CES CE program attentively because they see it as a pilot test. The CES is dead serious about this program; we are in the midst of an independent evaluation and we expect the evaluation report early in the winter. Based on our preliminary research, we have observed that the CE program has measurable impacts on the level of training of evaluators and on their position in the marketplace. But even more importantly, the CE program contributes to the professional identification of evaluators. It also attracts new professionals because they see that evaluation professionals take their trade seriously and that there is a professional path for them in evaluation. Given time, we hope and expect that demand and recognition will continue to grow as managers use the Credentialed Evaluator designation in their hiring and contracting decisions.

CGE: What do you think are the factors that seem to slow the adoption of rigorous, continuous assessments of policies and programs?

BG: Some things happen only when there is a demand for them and, to some degree, this is the case for evaluation. Evaluation is a key component of results-based management (RBM) because it can provide management and stakeholders with a rigorous assessment of the performance of an initiative. Without performance information, there is no management for results. The problem is that RBM is not ingrained in the Canadian management culture: most managers are taught to manage resources and outputs, not results. The current climate of fiscal austerity is also not conducive to RBM because the demands from politicians are to reduce spending, not to improve results. That said, it is important to carry on evaluating anyway: the recent evaluation of the Government of Canada Policy on Evaluation has demonstrated that exposing senior managers to regular evaluation findings contributes to developing an appetite for evaluative evidence and, as a consequence, for managing for results.

CGE: The new government in Ottawa seems committed to improving program evaluation. What is your take on how politicians generally view program evaluation?

BG: The rationality of politicians is multi-dimensional: they want to make a positive impact on society but they are also concerned with how their decisions appear to their electorate. That’s a normal by-product of democracy. Also, some politicians are more ideological and some are more pragmatic. For the past ten years, we have lived with a very ideological government in Ottawa: many a time, decisions were made without empirical evidence because politicians were ideologically convinced of the right way to address an issue. The new government that will be formed out of the 2015 election is likely to be more pragmatic and evidence-oriented. This is an opportunity for evaluation to contribute more to the policy debate. The same is true with other governments: pragmatic politicians embrace evaluation information; now we have to show them that evaluation is a preferred way to develop the empirical evidence they need.

CGE: What can government executives do to encourage a new approach to evaluation in their departments?

BG: Government executives have to demand independent, rigorous, timely, and relevant information from their evaluators. They have to support the funding of evaluation to a level that allows evaluators to provide this information; nothing comes for free. They also have to inform evaluators of when they need information for decision-making. Government executives must also be ready to pitch in by ensuring solid ongoing monitoring of program performance that will help produce evaluation information as part of a continuum of performance information feedback.

CGE: Tell me about the state of education in program evaluation. What still needs to be done?

BG: The state of education in evaluation is much more positive now than it was a few years ago, but there is still much more to do. The Consortium of Universities for Evaluation Education (CUEE) was created in 2008. Fifteen universities are currently members of this organization. Not all of them offer degrees in evaluation, but several do. Whereas it was impossible to get a graduate degree in evaluation just a few years ago, it is now available — even through distance programs from some institutions — in English and in French. CES is also active in evaluation training: we offer a four-day introduction called the “Essential Skills Series” and two intermediate courses. We are currently working on the development of seven more intermediate-level courses. The CES is also planning to move heavily into online course delivery so that geographic location ceases to be a barrier.

CGE: And what about the Canadian Evaluation Society? What lies ahead for your association?

BG: CES is thriving. Even though, like many other professional organizations in the developed world, the CES competes with free resources and networking available on the web, we have very successfully developed the first professional designation program for evaluators in the world. We offer significant professional benefits to our members, and we maintain mutually beneficial relationships with many organizations in the sphere of evaluation. We are emphasizing more and more our promotional role, informing users of the benefits of evaluation and of best practices for evaluation use. Our brand new strategic plan puts forth three priorities: to increase the value of evaluation; to engage, grow and diversify our membership; and to advance the professionalization of evaluation. Visit us anytime at evaluationcanada.ca. We are here to support evaluators, managers, and stakeholders.

[Pull quote if necessary] In recent years, evaluators have understood that, while they have to maintain a level of independence from decision-makers so that they can ask the tough questions and speak truth to power, their work is most useful when it is embedded in the organization and when the evaluation’s findings become organizational knowledge.