The non-profit sector is operating in an era of increased accountability, both vertical and lateral, and such accountability is part of good governance. At one point or another, non-profits must explain to their funders or accreditors how they will ensure that project goals are being met (vertical accountability). Moreover, non-profit health organizations exist to meet the needs of the public and to improve its health and wellness. Planned, effective evaluation audits (lateral accountability) help organizations uphold this commitment to the public and the people they serve. Unfortunately, consistent, high-quality evaluation is not yet common practice among non-profit agencies.

How is an Evaluation Audit Useful?

The term “evaluation audit,” first coined by Ernest House of the University of Colorado in 1987, refers to a systematic review of an organization’s evaluation activities. As a quality improvement initiative, the review measures the level of compliance between pre-defined evaluation policy or standards and actual practice at that organization. Such a process assesses the efficiency of the management system, and can validate the reach, relevance, effectiveness, efficiency, impact, and sustainability of that system without incurring additional risk to users or staff. It can also identify the unique drivers of efficient program evaluation practices.

Access Alliance: The Organization Committed to Evaluation

Founded in 1989, Access Alliance Multicultural Health and Community Services (Access Alliance), a Toronto Community Health Centre, provides primary health, settlement, and community services to Toronto’s vulnerable immigrant, refugee, and racialized groups. It exists to address the systemic barriers faced by these communities and to improve their immediate and long-term health outcomes. Given the marginalized position of its clients, it is all the more important that Access Alliance uphold its commitment to maintaining a culture of evidence-informed planning and decision-making through continuous process and outcome evaluation activities. This commitment is embodied in the Program Planning and Evaluation Policy, a peer-reviewed organizational document that emphasizes the importance of generating high-quality evidence and developing the elements of an effective evaluation framework through evidence-informed planning.

The evidence generated through this audit was used to identify and describe the gap in compliance between current evaluation policy and practice at Access Alliance, and to create a useful dialogue around the challenges and opportunities associated with future changes in program planning, delivery, and evaluation. The audit resulted in synthesized recommendations for future planning and evaluation practices that will ultimately strengthen the organization’s application of the Program Planning and Evaluation Policy.
Ultimately, this audit represents the organization’s attempt to practice lateral accountability: to hold itself accountable for its ability to carry out its own mission-based activities. Access Alliance’s experience demonstrates that an organization-wide program evaluation audit is not only conceivable but also achievable, and it should encourage similar processes within the community health sector, as well as more broadly across the non-profit sector.

The Audit Process

This evaluation audit used a mixed-methods approach. The first step was to enumerate programs with and without a logic model, providing a counterfactual basis for the audit. Twenty-one programs were screened using the following two indicators: (i) the presence of a logic model (Figure 1), and (ii) evaluation activity within the past three years (Figure 2). Typically, programs synthesize a variety of sources, including strategic plans, work plans, and program administration data, to create a unique logic model. The logic model, a fundamental component of evaluation practice, outlines a program’s rationale, goals, activities, and outcomes, and depicts the relationships among these components.

Findings from this study showed that the odds of having completed an evaluation were 6.5 times higher for programs using logic models than for programs without, suggesting that having a logic model is conducive to evaluation practice; note, however, that the wide confidence interval spans 1, so the association did not reach statistical significance [odds ratio: 6.5, 95% CI 0.73-57.4, p>0.05]. However, these programs do not run in a vacuum: other confounding factors may also help to explain a particular program’s affinity for evaluation practice.
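
For readers interested in the arithmetic behind such a figure, the sketch below shows how an odds ratio and its Wald 95% confidence interval are computed from a 2x2 screening table. The counts used here are hypothetical, chosen only to illustrate the calculation; they are not the audit’s actual tallies.

```python
import math

# Hypothetical 2x2 tally from the program screening (NOT the actual audit
# data): rows split programs by whether they have a logic model, columns
# by whether any evaluation was completed in the past three years.
a = 13  # logic model, evaluated
b = 2   # logic model, not evaluated
c = 3   # no logic model, evaluated
d = 3   # no logic model, not evaluated

# Odds ratio: odds of having completed an evaluation with a logic model
# versus without one.
odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log-odds scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.1f}, 95% CI {ci_low:.2f}-{ci_high:.1f}")
# An interval that spans 1.0 (as this one does) indicates the association
# is not statistically significant at the 5% level (p > 0.05).
```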

The second step of the audit process comprised interviews with program managers, team leads, and relevant staff members to contextualize the documented evaluation practices and describe the gap between these practices and organizational evaluation policy/standards. Three themes emerged as key drivers for efficient evaluation practices: high quality data, an organizational culture of evaluation, and supportive leadership.

Data that is Reliable, Relevant, and Robust

Data collection was viewed as a resource-intensive process, requiring time from staff external to program and service departments to collect data, and from internal staff to enter the data into databases. This is particularly challenging for a large department that works with many clients. The time contributed by volunteers and students was viewed as a valuable resource in this regard. However, since resources will always be a limiting factor, it is all the more important that the data collected yield sufficiently valid and timely information.

Reliability of data was identified as a development area, both for effective program planning and service delivery and for maintaining relevance to current client and/or target demographics. Departments also prioritize data that is informative from a variety of perspectives. For example, the decision to “hire five more staff” is ideally informed by data that can address multiple service planning questions, such as: “What languages should they speak?”, “How many clients are they serving?”, and “What populations are we serving?” Finally, effective organizational processes for data collection can ensure data quality. For example, one program described how reliable feedback from service providers can help to develop a universal evaluation tool usable across multiple programs.

Building a Culture of Evaluation

During discussions about the organization’s approach to and expectations regarding evaluation, respondents were often confused about a number of questions, such as: “Should entire departments be evaluated, or should individual programs be evaluated with less intensity?” At this time, the Program Planning and Evaluation Policy is supported by guidelines that outline only very general practices for program managers and directors. An effectively communicated framework, or pathway of explicit guidelines, would remove any confusion among staff about the organization’s expectations for undertaking evaluation, namely: ‘What to evaluate?’ (i.e. individual services or programs vs. the department as a whole, and which indicators are of interest); ‘How to evaluate?’ (i.e. methodology and tools); ‘How often to evaluate?’; ‘How much should be spent?’ (i.e. an explicit, dedicated, and communicated budget for program evaluation activities); ‘Who is the audience?’ (i.e. clarity on how the findings will be used and who will be affected); and ‘What is our part?’ (i.e. clarity around the role of the organization’s evaluation team in facilitating the evaluation).

Departmental staff suggested using a longitudinal program evaluation calendar spanning several years, allowing sufficient time to effectively measure any changes put in place. Departmental staff also acknowledged the need to have the evaluation team fully integrated into all stages of the program cycle. Therefore, expanding existing policy to include explicit program evaluation expectations (frequency, methods, etc.), as well as an articulation of the level of support provided by the evaluation department, would be ideal.

Leadership

Clearly, navigating evaluation activity at the program level requires expertise and support from evaluation staff. Indeed, program teams called for technical guidance on evaluation methodology and the interpretation of statistics, and expressed a desire to learn more about evaluation tools, statistical software, and statistical tests. Respondents were also interested in better understanding the variety of data sources drawn upon by departments (e.g. administrative data, the annual Client Experience Survey, and individual program evaluations) and how these sources intersect or conflict. They asked: “How do we synthesize results from a variety of stories into a cohesive story?” Therefore, both technical and abstract support are needed from the evaluation team, in the form of capacity-building or training sessions for managers and directors, as well as individual consultations and meetings with departments.

The Canadian Centre for Accreditation made the following statement during its review of the organization: “(Access Alliance) is very strong in the area of Quality Improvement and evaluation.” The quality and rigour of the organization’s evaluation process and data management methodology give the evaluation team the opportunity to play a dynamic role in knowledge sharing and policy advocacy, as well as in securing and advocating for organizational resources. Once established in a position of leadership, the evaluation team may even assist other organizations in building their capacity, simultaneously providing external support and strengthening partnerships.
Although scope and budget represent a source of risk for the sustainability of an organization’s evaluation practices, the systematic auditing of organizational activities can not only improve the quality of all programs and services, but can also provide evidence for how resources can be realistically and optimally leveraged for the benefit of all stakeholders.


Miranda Saroli is a Research Assistant in the Community-Based Research Department at Access Alliance Multicultural Health and Community Services.
AKM Alamgir is Manager, Quality and Accountability Systems, at Access Alliance.
Morris Beckford is the Director of the Community Health and Wellness Department at Access Alliance Multicultural Health and Community Services.
Sonja Nerad is an independent consultant and the Managing Director of SN Management.
Axelle Janczur is the Executive Director of Access Alliance Multicultural Health and Community Services.