There is growing pressure on governments around the world to be more responsive. This is as true for Canada as it is for Afghanistan. It includes increased demands for good governance, accountability, transparency and the delivery of results, and it suggests that ‘stewardship’ over resources and processes is simply not enough: the requirement extends beyond producing program outputs. As calls for greater responsibility and results intensify, so does the need for the evaluation of policies, programs and projects. Governments and organizations do successfully implement programs and policies, but questions remain. Did the policies and programs have any impact? Did they produce the intended outcomes? This has everything to do with the much-misunderstood distinction between ‘doing things well’ and ‘doing the right thing.’ Given the needs of stakeholders, evaluation provides vital evidence on both counts. It generates findings and conclusions about a program’s ‘relevance’, ‘adequacy’ and ‘effectiveness’, but also about improved program delivery and efficiency. Taken together, the results of evaluation contribute to improved public trust in government and confidence that public money is well spent.

There is little debate about the usefulness of evaluation in improving management, safeguarding transparency and supporting accountability. But fear often remains within organizations, and evaluation is looked upon with suspicion. Communications and public relations departments dislike the critical findings contained in an evaluation report; they prefer good-news stories. Heads of organizations, too, are often uneasy that an evaluation will only underscore the weaknesses of their organization, and worry that political masters, boards and central funding agencies will ignore the positives and focus only on the negatives. For program managers and others, the loss of budget or damage to an organization’s reputation is a real nightmare. All of this raises questions about the role of evaluation within public management. Is that role clear enough? Does it provide a ‘safe space’ for managers to cooperate, to take risks and, in the end, to believe in the merits of the evaluation process?

For example, during 2014, development donors (including Canada) asked the Free and Fair Election Forum of Afghanistan (FEFA) to establish a monitoring and evaluation department. The intention was to track progress, measure performance, and judge the overall merits, value and worth of the project. I was appointed to establish the department and to track US$3.5 million in program expenditures for the Election Observation Mission. This started with the creation of a Monitoring and Evaluation (M&E) system. At the outset, there was much doubt among management and staff, and many questions surfaced. What was the true role of this new department and its Head? Was the M&E Head a donor insider, the ‘guy’ who reported to donors on what was really going on in the organization? While some staff had considerable knowledge of M&E, especially its obligation to report to donors, few were aware of or spoke about its other features. None referred to improved management. None spoke about transparency and accountability, or even about the link to potential additional funding from donor agencies to ensure project success.

The first year was very difficult. No amount of dialogue would convince the team of the real role of M&E. There was considerable resistance; some even expressed the desire to avoid accountability entirely. Eventually, with much patience and persistence, some headway was made. Staff gradually became more familiar with basic M&E concepts, which led to a better understanding of the roles and responsibilities of the M&E Head. As acceptance grew, attitudes and behaviors started to shift. Staff began to see M&E as fulfilling a need; not going forward with it ‘was simply not an option!’ Cooperation gradually grew within the organization. Requests for data were agreed to and complied with. Attendance at annual ‘critical reflection’ sessions was reassuring. Even recommendations for improvement were acknowledged and welcomed. Over time, steady progress produced a clearer understanding of M&E’s role, how it could help, who should participate, and the tangible results linked to collaborative efforts.

The results of M&E activities were also demonstrated in more practical ways. M&E came to be viewed as a function that could help overcome persistent problems challenging the organization. Again, a simple example illustrates this. A lack of coordination between the Administration Department and the Finance Department had delayed many of the organization’s activities. A form (called Form A) was developed, and it was recommended that the Administration Department complete it and submit it to Finance at least three days before conducting an event or purchasing any equipment, with a copy to the M&E Department for tracking purposes. Within a month, significant improvements were noted. Coordination between the departments improved, efficiencies increased, events were conducted on time, and equipment arrived when needed. No longer did staff waste time and energy in ‘blame game’ pursuits.

Gradually, resistance gave way to acceptance of the role of M&E. As knowledge increased, improved practice followed. What was originally thought to be a nightmare changed. The disaster expected to befall the organization failed to materialize; in fact, funds were not cut as a punishment. What came to the forefront instead was the view that M&E was a means for improvement, a vital tool for both management and staff.

Even the role of the M&E Head evolved. It now serves as the focal point of contact between donors and the organization. The position is called upon to answer questions, in an open and transparent way, on the progress of all programming, and to address the challenges and risks that arise during program implementation. A systematic and comprehensive report is provided each quarter and serves as the basis for discussions. But what is the overall takeaway here, especially for an M&E initiative in a faraway place such as Afghanistan? It is simple. First, senior executives should ask themselves how familiar they really are with the merits of program evaluation. Second, before launching an M&E system, organizations should expect to take the time needed to work with staff and management. Third, clarify what M&E is really about; do not assume that everyone’s awareness and attitudes are clear from the start. Fourth, the friendlier the discussion, the more likely it is that a good M&E system will result. Try not to be too ‘preachy’. Fifth, appealing to concrete examples of organizations or countries that benefit from M&E practices may help ease anxieties. Thus, the clearer the conceptualization, the more likely the acceptance, and the better the opportunity for evaluators to help organizations achieve their improvement goals.

As a final note, there is always room for improvement. This is especially true when working towards high-level principles such as transparency, accountability and learning. Readers should not be misled: such initiatives, whether in a developed or developing context, are more akin to a marathon than a sprint. They require joint effort by all stakeholders, whether the management or staff of an implementing organization, evaluation professionals or donor funding agencies. It is an effort dedicated to measuring reality, veracity and certainty, and this is what becomes the basis for greater trust and confidence. Within a developing-world context, these efforts can often be even more challenging, and sometimes lead to perceptions of ‘do or die’. But it is a challenge worth taking on. The benefits and impacts, in spite of the risks, are worth pursuing.

For more information on the Free and Fair Election Forum of Afghanistan (FEFA), see http://www.fefa.org.af

Abdul Majeed is a member of the Canadian Evaluation Society (CES) and a Monitoring and Evaluation Consultant in Kabul, Afghanistan, as well as an Evaluation Specialist with the “Urban Development Support Project” at the World Bank.