The public service strives to achieve positive outcomes with appropriately targeted policies and services. With increasing demand for accountability, there is mounting emphasis on measurable results. The underlying assumption is that measurability is intrinsically tied to objectivity, and numbers allow government administrators to track progress toward the outcomes sought. But has it come to the point that indicators are mistaken for the ends that should actually drive public service activities?
The Institute of Public Administration of Canada’s Toronto Regional Group put this question to two icons of public administration, who responded last November at an event entitled “Getting Results: Objectivity and Truth in Public Management.”
Ralph Heintzman, distinguished for his work as a senior administrator with the federal public service, is an advocate of strengthening performance measurement for a better, more accountable public service. Gilles Paquet is a well-known provocateur, author and commentator on issues in public management, who is skeptical about the value of the emphasis on numbers.
Paquet opened the session by warning against what he called a cult of “quantophrenia,” characterized by the belief that if something cannot be measured, it does not exist. Given the complexity of what is involved in achieving positive outcomes in public administration, he said, truth and objectivity are not to be found in a tidy Newtonian world of well-behaved mechanical processes. We are moving, he explained, from a Newtonian world to a quantum world, from a world of Big “G” government (where persons or groups could, legitimately or not, claim to be in charge) to a world of small “g” governance (where power, resources and information are widely distributed and nobody is fully in charge). In such a world it is not possible to isolate and identify all the factors contributing to an outcome, and pursuing excessive precision in performance measurement is therefore somewhat counter-productive.
Any model used to measure outcomes in public administration is an abstraction built on a set of categories which, if the model is to reflect reality, should be treated as provisional, but often are not. Once numeric targets are set, they tend to have a steering effect of their own. Paquet proposed the word “phynance” as a label for this somewhat capricious process, which nevertheless appears solidly objective thanks to the cult of quantophrenia.
Social facts, he said, are not things that can be measured in any simple way, and this should be borne in mind when evaluating government programs. He advised administrators not to “reify the intent of the day”; in other words, not to fixate on numeric targets and thereby fail to stay attuned and responsive to other influences or opportunities for program improvement. Numbers are for tracking, not targeting.
Instead, he urged the use of social learning, which involves openness to new interpretations of progress toward outcomes, an emphasis on asking the right questions rather than finding definitive answers, and comfort with a trial-and-error approach. These methods, of course, require a certain tolerance for experimentation and risk.
He concluded his opening remarks by emphasizing that accountability lies not with checklists but with the administrators who hold the burden of office.
Heintzman was well aware of the criticisms of performance measurement. He explained that its potential pitfalls can be grouped under three headings: conceptual, motivational and technical. Conceptually, there can be a lack of agreement on the assignment of value. Motivationally, performance metrics have been known to produce gaming and other behavioural effects. Technically, metrics can be weighed down by a self-defeating drive for comprehensiveness. Nevertheless, he advised, the message is not that measurement should be abandoned; rather, that it takes effort to develop reliable indicators of results.
Heintzman showed how these potential snags in the development and application of performance measures have been mitigated in the area of service delivery. In working to make and track improvements, the Government of Canada developed common measurement tools to remedy a confusing and somewhat incommensurable set of indicators. To arrive at the appropriate indicators, it was important to identify the high-level ends, outcomes and results upon which to focus – putting citizens first provided the needed alignment.
It was then necessary to identify, and come to a shared understanding of, the drivers of those ends: the factors that reliably determine which elements are most important to measure and track. By focusing on the drivers of service satisfaction, the government was able to exceed its 10 percent target for service improvement from 2000 to 2005.
A second example of intelligent measurement can be seen in the Canadian public sector’s work on employee engagement, a strong indicator of effective leadership and people management. The B.C. government now uses indicators of employee engagement in the performance assessments of its leaders, at a fine level of granularity; significant divergences in performance occur more often at the unit level than at the branch or ministry level. This 360-degree process helps identify areas needing improvement and supports accountability. BC Stats has also confirmed that employee engagement is strongly correlated with citizens’ satisfaction with service delivery.
A third example is the work being done on performance measurement and benchmarking at the municipal level in Ontario, where the Ontario Municipal Benchmarking Initiative and the Toronto Report Card Survey aim to identify meaningful indicators and standardize them in order to track progress.
This illustrates that measurement can be done intelligently.
According to Heintzman, public administration suffers more from a lack of intelligent measurement than from an excess of it. To measure intelligently, it is important to identify the ends, identify a limited number of important drivers, identify common measures, and standardize them. That is not to say that public policy will never have unforeseen outcomes, but many outcomes do remain within our control.
After presenting their positions, Paquet and Heintzman continued the engaging discussion with an audience of about eighty people. No one doubted the need for reliable, objective information in planning programs and initiatives and in tracking their success; equally, no one denied that administrators can be tempted to narrow their focus to meeting the numbers rather than achieving the actual outcomes.
How can one rely on measurement in ways that are not quantophrenic, that do not close off horizons of possibility in a complex and dynamic environment? An emphasis on numbers may signal limited tolerance for risk, innovation and responsiveness. On the other hand, intelligent measurement has clearly contributed to service improvements in government programs where high-level ends and a limited set of key drivers have been identified.
In the end, the interlocutors seemed to agree that when it comes to measurement, too much is de trop.
Amanda Parr is with the National Managers’ Community (www.managers-gestionnaires.gc.ca) and a member of the IPAC Toronto Regional Programming Committee. Gilles Paquet and Ralph Heintzman will continue their discussion in the March issue of Optimum Online (www.optimumonline.ca). For more of the Toronto debate, see www.ipac.ca.
SIDEBAR
Ralph Heintzman is Adjunct Research Professor, Public and International Affairs at the University of Ottawa. He was responsible for the development of the Public Servants Disclosure Protection Act, a Values and Ethics Code for the