Many programs at all levels of government provide funds to community-based organizations for short-term initiatives or projects, often with an evaluation requirement. In contrast to the high prescriptiveness and limited methodological repertoire that characterize the program- or sector-level evaluation carried out by government departments, project-level evaluation is often inventive, repertoire-stretching, energizing and joyous: like a party in the basement! This article provides an overview of project-level evaluation from the perspective of long-time evaluators working for a range of local community organizations and coalitions, with examples drawn from our respective practices. (Readers are invited to Google any underlined terms + “evaluation” to find more information.)

Who’s at this party? Project-level evaluation is generally participatory, with the aim of engaging stakeholders and democratizing ownership of the evaluation process and data: evaluation BY and FOR the organization, rather than done TO it. Typically, the organization mandates an Evaluation Committee or Working Group that reports to its executive or board, composed of project managers as well as representatives of sectors and organizations external to the project being evaluated. In our experience, those who volunteer for the Evaluation Committee are among the most passionate and knowledgeable about the problems the project aims to address, and so they care deeply about the evaluation and its findings. External evaluators accompany and support the evaluation process, typically making three main types of contributions: technical expertise, often in methodology and data analysis; data collection by an external party, which protects confidentiality and so allows negative findings to emerge; and acting as a “critical friend” who ensures that the group leading the evaluation remains able to be self-critical.

What are they dancing to? The orientations and paradigmatic choices that tend to underpin evaluation in the community sector generally fall into the broad camps of stakeholder-driven, utilization-focussed evaluation; participatory evaluation; deliberative democratic evaluation; developmental evaluation; and empowerment evaluation. Common to these fancy labels are an overarching valuing of learning in addition to accountability; ensuring that all stakeholders, including project beneficiaries, have a voice; and an insistence that evaluation be useful. It is rarely what has been termed “ceremonial evaluation … an empty yet dogmatic data ritual.” Some of the most intellectually challenging evaluative debates around societal issues that we have tried to keep up with have taken place around the rickety tables of grassroots organizations.

Along with a willingness to experiment, evaluation in the basement is also often ruthless in assessing how well it is doing its own job, and in changing course when necessary. Some examples:

  • Creative methods must add value: After experimenting for 18 months with a case-history-based “story quilting” methodology that aimed not only to provide concrete data on progress among program participants (people in a disadvantaged Montreal neighbourhood dealing with parenting challenges) but also to rally a fractious community coalition around a common stake, the Evaluation Committee recognized that the process was not working as intended. Hidden agendas and tensions were still lurking, and the “quilt stories” brought little insight, so the Committee vigorously and joyfully re-oriented the evaluation towards more learning.
  • Asset mapping is a great tool; case studies worked better! In another example, a Winnipeg community agency was implementing a project designed to help women develop the knowledge, skills and experience for leadership roles in community-based organizations. The evaluation included an asset-mapping exercise undertaken by participants as part of their project experience. However, this component was abandoned part way through: the time it took outweighed the benefits to participants. Instead, several small case studies were added to the evaluation, focussing on individual participants’ journeys and the impact (anticipated and unanticipated) that project participation had on their lives.

Interestingly, in our experience, community groups often insist on including the counterfactual in their evaluation designs, not out of an artificial need for “rigour” but from the realization that to know the “so what?” of an intervention, there must be a comparison with its absence. However, their funders rarely, if ever, allow adequate resources for controlled or long-term evaluation designs; or, if they do, they impose an academic research model that is a poor fit for many community contexts.

What are they having? Evaluation at the project or community level is not subject to the tyranny of policies that dictate the questions in advance and limit acceptable methodologies to a few well-worn options (surveys, key informant interviews, file review…). It is rich in creative and innovative evaluation practices, including visual techniques such as photovoice and participatory videography; story-telling and life histories; world cafés; the Most Significant Change technique; and various forms of real-time surveys, such as dotmocracy. In general, there is a strong appetite for qualitative methods, as they provide richer and more compelling evidence of outcomes than do counts of pamphlets and participants. Indeed, evaluation at the project level has the unparalleled opportunity to include the voices of those receiving, or not availing themselves of, the services offered by government programs through third-party funding mechanisms such as contribution agreements. From these voices we can learn how taxpayers’ dollars are, or are not, making a difference in people’s lives.

Who are they going home with? For us as practitioners, as for any party-goer, the take-home is the best part of evaluation. In contrast to the formal processes of negotiated recommendations, management responses and recommendation implementation plans, project-level evaluation can, and often does, take learnings home to contribute directly to improvement in the shorter and longer term. Some recent examples:

  • Putting findings into immediate action: In an evaluation of a project promoting school readiness through early development of literacy and numeracy, a preliminary results presentation showed that only 50 percent of participating parents recognized a photo of their local public library. “Welcome to your library” visits were incorporated into the program activities starting the very next week.
  • Accepting difficult findings creates positive change: Mid-term interviews found that implementation across regional health authorities of a home respite program for parents of children in palliative care was stalling because of central-regional mistrust and a perceived controlling, top-down approach from the implementing agency. Although at first difficult to swallow, these findings prompted the agency in charge to rethink its approach to partnerships and relinquish its control over elements that were important to regional stakeholders. This got program implementation back on track.
  • Same-old approach leads to same-old result (so change it!): An evaluation of an intervention aiming to build practitioners’ skills in working with low-income recent immigrants found that it was having little to no effect: practitioners were doing what they had always done, although with some new labels. This led the program to rethink its intervention, with more effective actions and more realistic change targets.
  • Paying attention to the data eases tough decisions: In a planning process, a community coalition faced with budget cuts had to choose among interventions that it would continue to support. Members of the planning team who had also been on the Evaluation Committee recalled data showing that one activity may not have been reaching its target audience and, in fact, had been benefiting those who needed the intervention less. They called for a re-examination of those data, to help decide where they should put their (i.e., taxpayers’ and donors’) money.

You’re invited! Legitimate criticisms have been levelled at not-for-profit sector evaluation as “fluffy” or, more cruelly, as “random acts of kindness”; these may discourage the many among us who aspire to professional credibility. Nonetheless, contributing to positive social and organizational change is energizing and inspiring, and often leads us to mutter to ourselves that we have the best job in the world. Our thinly-disguised aim in this article has, of course, been to spark interest among Canadian government executives in alternative ways of thinking about and doing evaluation. Come to the party!

Natalie Kishchuk, CE, FCES, is with Program Evaluation and Beyond Inc.; and Linda Lee, CE, FCES, is with Proactive Information Services Inc.