Evaluation is performed for a range of reasons: to improve programs, to learn about how they function, and, perhaps more often than we like to admit, to serve a purely symbolic, legitimating function. Evaluation also struggles with a running tension between the effort to ground it in rigorous, technical practice and the reality that it is often deeply subjective. Evaluation, after all, is a claim about what is valuable, or whether something has value.

Traditionally, the values informing evaluation are set by organizational leaders; it is, after all, one of the tasks of leaders to shape expectations and norms inside their organizations. But culture and values aren’t owned by leaders. Indeed, for culture and values to be effective tools for mobilizing staff, people must internalize them, producing a sense of ownership. Cultures and values aren’t “mine”, they are “ours”, and it is important to keep that in mind when taking action that might be seen to conflict with, or change, the values staff see as defining an organization. Getting people’s behavior to change in response to evaluation criteria is one thing; getting people to share, and invest in, the values informing those criteria is another.

The importance of this lesson can be seen in the experience of the Federal Economic Development Initiative for Northern Ontario (FedNor) over the last decade or so, where the adoption of new criteria for performance evaluation prompted a range of adaptive strategies on the part of field staff. This article draws on material gathered via a series of interviews with FedNor employees, ranging from field staff to the Director General.

This process of adaptation was driven by a perceived gap between the limited range of priorities implied by the new evaluation criteria and the larger set of values employees considered important. What is notable in this case is that, despite the potential for conflict and resistance, staff at FedNor successfully reconciled the demands of the new evaluation criteria with activity advancing the institution’s broader mandate. Context matters in the conduct of evaluation, and changes in evaluation practice can be perceived as efforts to change the organization, even when in fact they are not.

A child of the 1980s, FedNor has three main priorities: to promote growth in Northern Ontario, to facilitate development in the broader sense of economic and community diversification, and to act as a voice for, and advocate of, the Northern perspective in government decision making. The arrival of the Conservative government of Stephen Harper in 2006 challenged this role. Prime Minister Harper’s government was essentially hostile to the kind of economic intervention that regional development agencies (RDAs) represented.

Partly in response to this political climate, and partly in response to other factors (an Auditor General’s report in 2007, competition from other funding programs such as the government’s Action Plan, and pressure on federal actors for jurisdictional “narrowing”), FedNor adopted a new model for performance evaluation between 2010 and 2012, focused primarily on leveraged funding and job creation. These criteria were relatively straightforward to measure (compared to the more complex indicators associated with economic transformation), and consequently easier to understand and use in public discussion (which had the added bonus of making them useful for political communication). The new evaluation criteria, together with the reduced time frame for evaluation (annual reporting demanded the posting of annual results), led to a more restricted definition of success. This in turn affected project and client selection.

Successful projects were those that could demonstrate job creation and leveraged funding on an annual basis; success was to be easy to identify, easy to promote, and uncontroversial. While FedNor could still work with a wide base of clients, it had to focus on a core group possessing specific characteristics. Increasingly, an “optimal” client was one that would produce positive performance evaluation outcomes (i.e., produce jobs and serve as the basis for leveraged funding), operate within acceptable risk parameters, and ideally have the internal capacity to negotiate the FedNor process on a recurring basis. While the idea of a core client base was not new (one respondent noted that “there’s only so many people doing community economic development up here”), the need to demonstrate success via job creation and leveraged funding did set limits.

Narrowing the client base to those most likely to meet the new criteria effectively (and unintentionally) narrowed FedNor’s mandate. For some actors in FedNor, the new evaluation criteria were too restrictive, imposing limits that interfered with FedNor’s potential to engage in longer-term, strategic development. For others, the criteria imposed a necessary, even desirable, rigor on selecting clients and projects, one that did not entirely inhibit strategic intervention but rather redirected how and where such intervention could take place. Strategic intervention and development would now need to be achieved by building capacity in clients and project partners, so that FedNor could coordinate with them over the long term.

What is most interesting about the FedNor case is that staff responded to these perceived limits and found ways to manage them. Helping potential clients build the capacity needed to become “optimal” became a second focus for FedNor. New clients might present a potentially viable project, but would require “coaching” to build planning capacity and to navigate FedNor’s processes and procedures.

All interviewees indicated that the developmental mandate of FedNor was critical to their self-conception as members of the institution: advocacy and transformation are not simply tasks for FedNor staff; they are the foundation of its organizational culture. FedNor is the material presence of the federal government in the region (one respondent noted that “boots on the ground are the gold of the organization”) and the developmental voice at the table in any joint effort. Most interviewees stressed the importance of the advocacy and ambassador roles: one respondent said they were “bringing the northern Ontario perspective to the table.”

While the new criteria were universally acknowledged as unconnected to transformation and advocacy, they were commonly viewed as demands that had to be satisfied to “earn” the right to pursue the broader mandate, and were only occasionally identified as barriers. In order to do development, FedNor staff understood that they had to demonstrate, and effectively communicate, that they were running successful programs. In other words, they were free to pursue the institution’s mandate, provided they could show FedNor was producing jobs and leveraging funding along the way.

While it is difficult to draw general conclusions from specific cases, the experience of FedNor does offer some lessons. Actors in an organizational setting do not respond to evaluation neutrally. Evaluation, by highlighting certain outcomes as indicators of success, privileges some activities over others. Staff respond to this selective assignment of value based on their understanding of the culture of the organization, and the extent to which they have internalized it. Some new priorities will be interpreted as contributing to the organizational culture; others will not. Even when changes in criteria are not intended as a strategy for organizational change, they have the potential to produce change. If the changes implicit in new evaluation criteria are seen as conflicting with, or dangerous to, an embedded culture, they will likely provoke a reaction.

The reaction of FedNor staff to the new performance criteria could have turned out quite differently. If many FedNor staff had not embraced the new criteria, the result could have posed risks for the organization. It is not difficult to imagine circumstances in which staff sought to game the system by selecting projects that met short-term targets without contributing to longer-term economic development; in which project performance information was manipulated to demonstrate success; in which the contingent impact on behavior produced an actual, and undesired, shift in the organization’s core mandate and objectives; or in which subordinate staff sought to subvert or resist the new priorities. What is notable about the FedNor case is that these possible outcomes did not occur: staff were able to incorporate practical responses to the shifts in government priorities without framing those organizational changes as threats to the culture and values of the organization.

 

Markus Sharaput is Assistant Professor in the School of Public Administration at Dalhousie University.