A recent visit to England and Scotland reminded me of the many innovations in public sector management taking place around the world. There has been a steady stream of reports on the government's performance in a number of key policy areas, based on performance measures and other metrics used to hold agencies and departments to account.
In Scotland, much of the discussion (aside from speculation on an independence vote) has been directed at the government's performance. The media often comment on the degree to which the government has met its four objectives: generating wealth, delivering good health services, providing strong educational programs and lowering crime. Two decades after the Thatcher government revolutionized the provision of government services, it is clear that alternative service delivery is well entrenched in the U.K. and many other countries.
The next “new thing” may well be the reinvention of policy analysis. There are now many examples of the increasing reliance on evidence-based decision-making in government. Decision makers have always argued that their decisions are made on the basis of facts and evidence. However, more than ten years ago in Canada, it became obvious that policy decisions were not as supported by “hard-edged” policy work as the rhetoric of the day maintained. There are many reasons for the decline in policy analysis: the downsizing of the policy community in the 1990s; the paucity of strong theoretical frameworks around key policy areas; political expediency; poor data; inadequate computer processing capacity; and lack of data storage.
With the enormous growth in information technology, information capacity and processing speed are no longer obstacles. We now have the computing power to carry out analyses that could only be dreamed of a few years ago. This has sparked a renewed interest in policy work because it gives policy analysts the opportunity to manipulate huge statistical data sets that until recently were not accessible. In particular, two techniques have re-emerged as viable policy tools. The first is regression analysis, a standard statistical procedure that takes raw historical data and estimates how various factors influence the variable of interest. A good example is predicting family income on the basis of the educational level and marital status of the head of household, family size and area of residence. The second development has been the reintroduction of randomized experiments or trials to test policy options. For example, despite the Bush administration's apparent lack of interest in innovative social policy, American lawmakers have increasingly relied on randomized testing as the best way to determine what works. In his recently published book, Super Crunchers: How Anything Can Be Predicted, Ian Ayres notes that “government isn’t just paying for randomized trials; for the first time the results are starting to drive public policy.”
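To make the regression example concrete, here is a minimal sketch in Python of the kind of analysis described above. The data, variable names and coefficients are invented purely for illustration; a real analysis would fit the same model to survey microdata rather than synthetic numbers.

```python
# Illustrative sketch only: a linear regression of the kind described above,
# fit to synthetic (invented) household data. Names and coefficients are
# assumptions for demonstration, not real survey results.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors for each head of household
years_education = rng.integers(8, 21, size=n)   # years of schooling
married = rng.integers(0, 2, size=n)            # 1 = married
family_size = rng.integers(1, 7, size=n)
urban = rng.integers(0, 2, size=n)              # 1 = urban residence

# Synthetic "true" relationship plus noise (assumed, for illustration)
income = (15_000 + 3_000 * years_education + 8_000 * married
          - 1_500 * family_size + 5_000 * urban
          + rng.normal(0, 10_000, size=n))

# Ordinary least squares: estimate the weight each factor carries
X = np.column_stack([np.ones(n), years_education, married, family_size, urban])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)

for name, b in zip(["intercept", "education", "married", "family_size", "urban"], coef):
    print(f"{name:12s} {b:10.0f}")
```

Run on real data, the estimated weights are precisely what lets an analyst predict the income of a family whose characteristics are known but whose income is not.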
In fact, the U.S. government has gone so far as to build requirements for evidence-based research into some federal legislation. For example, one provision stipulated that a state could experiment with new ideas on how to reduce unemployment insurance if, and only if, the experiment was supported by an evaluation plan that included randomly assigned control and test groups in a field trial.
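As a companion to the regression sketch above, the following is a minimal illustration of the randomized design such a provision calls for: participants are randomly assigned to a control or a test group, and the difference in average outcomes estimates the program's effect. All names and numbers here are invented for illustration.

```python
# Minimal sketch of a randomized field trial: random assignment to a control
# or test group, then a simple difference in mean outcomes. The outcome
# variable and the assumed 2-week effect are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Random assignment is what makes the comparison fair: each participant
# has an equal chance of receiving the new program.
treated = rng.integers(0, 2, size=n).astype(bool)

# Synthetic outcome: weeks on unemployment insurance, with an assumed
# 2-week reduction for the test group.
weeks = rng.normal(20, 6, size=n) - 2 * treated

effect = weeks[treated].mean() - weeks[~treated].mean()
print(f"Estimated effect of the program: {effect:.1f} weeks")
```

Because assignment is random, the two groups are alike on average in everything except the program itself, which is why lawmakers treat this design as the most credible test of what works.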
Are subject matter experts better predictors of policy outcomes than policy analysts who rely on statistical analysis and experiments? The debate is heating up. Ayres provides convincing evidence based on hundreds of studies comparing the success rates of statistical and expert approaches to decision-making. In his view, statistical analyses outperform human experts most of the time.
He suggests this is because: (1) decision makers overestimate the power of their own intuition and thinking; (2) most people, including experts, “tend to give too much weight to unusual events that seem salient. And once we do form a mistaken belief about something, we tend to cling to it;” (3) statistical analyses are better at making predictions because they do a better job at figuring out what weights should be put on individual factors in making a prediction; and (4) “unlike self-involved experts, statistical analysis does not have egos or feelings.”
This new interest in evidence-based policy-making is indeed good news. Knowing what is happening in other countries allows us to benchmark against the best in the world in how they manage their public institutions and develop public policy. Moreover, experience has taught us that relying on instinct to develop public policy is unreliable and often counterproductive. The shift toward policy-making based on evidence and experimentation is one development that we do not want to watch from the sidelines.
David Zussman holds the Jarislowsky Chair in Public Sector Management in the Graduate School of Public and International Affairs and the Telfer School of Management at the University of Ottawa. He is president of the Canadian Association of Programs in Public Administration (dzussman@uottawa.ca).