Written by Gail Vallance Barrington

Anyone who has commissioned a program evaluation knows what it is like to have a sleepless night. The reason is simple: you worry that the evaluation has lost track of the original question. The hyper-caffeinated effect is made worse by the slow and measured pace of the evaluation. As you stare at a ceiling you can’t see, your mind sifts through the research questions, survey questions, interview questions, and analysis questions. Has the focus blurred? Is the core question lost altogether? What can you do about it?

With thirty years in the field as an independent evaluator, I have come to believe that sometimes the evaluation serves the research better than it serves the program. This is often due to poor communication between evaluator and client. Certainly the hands-off approach preferred by some managers does not foster success. A manager who says “Just give me the bottom line” is courting disaster.

The disconnect between what you want and what you get can be avoided. This need not take a lot of your time, nor should you play the role of the researcher. Instead, if you develop a high-level, ongoing relationship with your evaluator, you can ensure that the evaluation stays on track.

Here are some strategies:

Clarify the Policy Question in the Design Phase

Make sure the evaluator understands your policy question and the type of recommendations you need. Are they about:

  • Program improvement?
  • Sustainability?
  • Expansion?
  • Future funding?
  • Closure?

Policy questions are complex and difficult to unpack, so make sure all the relevant topics are articulated. Don’t just wait for them to show up in the research findings.

Determine the Type of Evidence Needed, Then Match It to Methodology

What kind of evidence will support your decision? Will hard numbers be persuasive? Do you need to prove, in a narrow sense, some kind of cause-effect relationship? If so, your question is a narrow why (did the program cause the outcome?), and a quantitative approach should produce the right information. On the other hand, if you really want to understand a phenomenon, why it happens, and how it affects those involved, your question is more likely a how or a what. In this case, you need a qualitative approach.

And there is a third option. If you need to ask both kinds of questions, a mixed methods approach is best. This requires more thought because the sequence of research activities matters. Do you start with a qualitative question whose answers will highlight topics for more focused measurement, or is it the reverse: quantitative data that need further explanation and description? Whatever approach you choose, make sure that the policy question drives the methodology, not the other way around.

Review the Time Planned for Evaluation Tasks

Take a look at the evaluator’s proposed list of tasks. Extensive time may be allocated for tool design and data collection, while data analysis and report writing are collapsed into a couple of weeks. The evaluator may simply be responding to a Request for Proposals that asks for too much in too short a time frame. In the rush to produce the final report, however, much of the meaning buried deep within the data will be lost. Be prepared to sacrifice a little on the data collection side; with more time for synthesis, interpretation, and reflection, you will get much stronger findings.

Discuss the Findings with the Evaluator

Once the data have been analyzed, it is important to prioritize the findings. Ask the evaluator to marshal the evidence beside each of your original questions. What are the strongest findings (i.e., most likely to hold true across most cases)? Where do the findings cluster? Where are there inconsistencies or gaps? Which findings are likely to be the most politically sensitive? What conclusions follow from these findings? Is there enough evidence for you to move ahead, or do you need further analysis?

Craft the Recommendations with Key Stakeholders

Before the report is finalized, work with key stakeholders on the recommendations. Schedule a special meeting well in advance so they know their involvement is expected. Ask the evaluator to present an overview, and provide participants with thumbnail findings and key topics for recommendations. Have them consider such issues as feasibility, appropriateness, cost, intended and unintended consequences, and political impact. Ask the counterfactual question: what happens if nothing is done? Once a draft list of recommendations is developed, consider the language carefully and re-check with stakeholders as needed. It is better to address their concerns before the report is finalized than afterwards.

If you really want the evaluation to have an impact, make sure your schedule allows you to invest time and interest in the process so that the results are truly useful. You will sleep better too!

Gail Vallance Barrington is a Credentialed Evaluator who has conducted over 130 program evaluation studies since founding her consulting firm, Barrington Research Group, Inc. gbarrington@barringtonresearchgrp.com