How does a mandate for innovation challenge evaluation?

The public sector is becoming increasingly intentional and systematic in stimulating innovation. New processes and structures, such as investments in innovation hubs and labs, are exploring ideas such as the future of the public service (e.g., NRCAN’s IN·spire) or experimenting with new approaches to complex policy and program challenges (e.g., the Privy Council’s Innovation Hub). The persistent issues of our time – social inequity, climate change, urbanization, for example – demand breakthrough ideas and novel solutions. Businesses, governments and community organizations face urgent pressures to adapt quickly, often in the face of great uncertainty and high complexity.

Challenges of Traditional Evaluation

Evaluation commonly asks questions such as: Was a program implemented as intended? Did an intervention achieve a pre-determined result? Were benchmarks met? How can we refine the program to increase effects, reduce costs or make implementation easier? Evaluation’s purpose, in these circumstances, is to help make a judgment of merit or worth, assess the efficiency of goal attainment, or delineate with clarity the causal links for a well-defined technology or intervention to prepare for replication. These purposes rest on linear, logical problem-solving approaches, which work very well when the problem and solution are well understood. Evaluation’s standardization, linearity and compliance-driven nature are a fit when there is an optimal solution with clear boundaries and a limited set of possible alternatives.

The challenge for evaluators, and for innovators, is that not all problems are bounded. They don’t all have optimal solutions or occur within stable parameters. These kinds of problems – often called complex, or “wicked” – are difficult to define. The pre-specification, prediction and control of traditional evaluation is often very frustrating for innovators. It does not mesh with the dynamic and rapid pace of adaptation. Too often, it constrains the innovation process instead of supporting it.

Innovation + Evaluation = Real Time Solutions

Innovation brings novelty and creativity to the introduction of something new and useful. The very techniques that enable evaluation excellence in more static situations – standardization of inputs, consistency of treatment, uniformity of outcomes and clarity of causal linkages – are unhelpful, even harmful, in situations marked by high uncertainty and ‘moving goalposts.’ With dynamic and unpredictable phenomena, narrowly defining and structuring evaluative questions can interfere with learning and adaptability. Innovation is often about breaking previous boundaries. How can you judge whether something was done as expected when you are in the midst of creating it? If you are doing something for the first time, there is no benchmark, so what do you measure against? What if we could blend the creative thinking of innovation with the critical thinking of evaluation?

Developmental evaluation is an approach to evaluation that has emerged in response to the need for adaptation in highly innovative situations. The premise of developmental evaluation is to integrate the elements of evaluation – being systematic and thorough in our thought process, collecting data, drawing upon evidence, and testing assumptions – in a way that supports the open-ended nature of adaptability.

Tracking Thinking as it Evolves

WellAhead is a national initiative of the J.W. McConnell Family Foundation to help strengthen the integration of social and emotional well-being in schools. The McConnell Foundation commonly involves a developmental evaluator in the earliest stages of its major social innovation initiatives, positioning the evaluator as part of the core program or ‘innovation’ team. In the early stages of idea development, the developmental evaluator acts as a learning partner, helping to frame the intervention and supporting the innovation team’s evolving understanding of the nature of the problem and what to do about it. As options are considered, program ideas are developed, and decisions are made about what to do, much can be revealed about a group’s implicit theory of change, its desired impacts, and key assumptions or uncertainties. The developmental evaluator helps by reflecting back the thinking and framing emerging working models to support the evolution and tightening of theory and concept.

Accountability for Learning

When the United Way Toronto (now United Way Toronto and York Region) launched a city-wide initiative to engage a network of practitioners who shared a common goal of improving outcomes in youth educational attainment, it was trying out a new role as a granting organization. The initiative marked a major shift from funding individual programs to acting as a systems-level convener with the intent of amplifying the learning from a wide range of programs across the city. The developmental evaluation gave staff real-time feedback about what was taking shape in the initiative, so that they could continually adjust their strategies. The evaluation also helped United Way Toronto tell the story of how the initiative was developing and what progress was being made. The insights and perspective from the evaluation communicated to internal and external stakeholders what was happening, strengthening the case for the approach being taken and giving the initiative a mechanism for managing the risks inherent in an adaptive initiative. By being actively observant in a systematic way, the evaluator constructs a narrative that explains what is happening, as well as how and why, in a way that is grounded and credible.

Probe and Respond

In situations of high complexity, we learn most by taking action and gathering feedback. Because of this, small experiments – or prototypes – are often an essential feature of developmental evaluations (and of innovation labs). Experiments are usually low-cost, low-risk activities whose primary purpose is to learn something. They differ from pilots, which are much more developed ideas or initiatives, undertaken to demonstrate the efficacy of something before scaling and to work out efficiencies for implementation. Developmental evaluators help innovators construct small experiments, often in a series of iterations or with purposeful variation, in order to accelerate learning. What emerges? Do our assumptions play out? What criteria develop to tell the difference between “working” and “not working”? We also prototype to clarify our thinking. Expectations, priorities, and key concepts become clearer when we move from the theoretical to action. The key is not simply to do something, but to do something with a specific learning agenda and with sufficient attention, data, and critical thinking to understand what emerges and inform what comes next.

Conditions for Success

The private sector operates comfortably with an ethos of experimenting, quickly capturing lessons from failures, and rebuilding. This plays out in product, business and technological innovation. Similarly, NGOs and philanthropic foundations have embraced developmental evaluation in support of social innovation. What we have learned from the experience of both the business sector and the NGO community is that success in innovation – and by extension, the evaluation of innovation – requires organizational conditions that support adaptive thinking and change. Organizations that have succeeded with developmental evaluation are more learning-centred, less risk-averse, willing to experiment and even to accommodate failure. If key stakeholders require a high degree of certainty, if there is a lack of openness to experimentation and reflection, or if there is a low tolerance for risk, then a developmental evaluation is likely not a good fit.

The promise of innovation in a public sector context leads us to many places: more responsive programming, better use of resources, improved client reach, better outcomes. Innovation also introduces alternative ways of thinking and acting – often ones never previously considered or examined. This is inevitably disruptive, and the resulting uncertainty can prove challenging in an environment of hyper-accountability. Where are the niches within the public sector that are ripe and ready for purposeful and systematic innovation? As these continue to emerge, so will the need to assess where novel ideas stand and how things are unfolding. Developmental evaluation has the potential to fill a unique niche: helping to accelerate the process of generating, testing and adjusting new ideas and, along the way, helping organizations get better at the very processes of innovation.


Jamie Gamble is the Principal of Imprint Consulting and a member of the Canadian Evaluation Society. He is a pioneer in the field of developmental evaluation and has supported innovation and development in a wide range of issues including poverty reduction, environmental sustainability, food security, public health and safety, citizen engagement and the arts.