Performance Scorecards
Richard Chang and Mark Morgan
Jossey-Bass, 162 pages, $42.50

Many governments have bought into the notion of the Balanced Scorecard, the performance-management system developed by Robert Kaplan and David Norton that is widely used in the private sector. But two other consultants, Richard Chang and Mark Morgan, offer another format in their fable Performance Scorecards that merits attention.

The Balanced Scorecard focuses on four areas of your organization, whereas this performance scorecard approach is more flexible: you pick the areas of importance you want to measure from wherever seems appropriate. But what I find most attractive is the book's focus on having scorecards for each individual, and on linking each to the organization’s overall scorecard and to the others. Ultimately, that’s what is required for success: people throughout your organization must understand clearly how they relate to the organizational imperative, and be measured on their success in striving towards it.

It’s about linkage and alignment – and, of course, measurement. In the blizzard of information that snows in on you each day, you must isolate what’s important. In some cases, you need to ‘seed’ the clouds because things that demand measurement require new data to be gathered in order to gauge progress.

The authors outline six phases for developing and using a scorecard. The first is collect – you need to collect your scorecard inputs. That requires obtaining your top-level objectives, measures and targets, which will flow from your mission, strategy, values, and the priorities you have drawn from them. You must think about who the customers are for your department’s work, and what their key requirements are from you. As well, you need to define the core process chains by which you get things done.

The second phase is create – the management team has to create an overall scorecard for the organization. You must craft it to fit your needs, not mimic what others have done. The authors suggest that for businesses some of the areas might be financial success, customer loyalty, market leadership, employee development, operational effectiveness and community impact. Those could all be echoed in some ways in a government scorecard, with market leadership a particularly interesting concept to ponder. But again, they stress that you are not bound by categories: determine the most important results you are looking for in the coming year or years.

Brainstorm potential measures, asking the following questions:

  • Will the measure result in a number that you can quantify and graph?
  • Would customers for your work care about the measure?
  • Will the measure give useful feedback?
  • Can you establish a challenging target?
  • Can you benchmark your performance against others in the same field?
  • Can the team responsible for performance influence the outcome of the measure?
  • Does the measure relate to objectives and key result areas?

In the end, you have to define a few key measures to keep tabs on performance. Too many will just drive you to distraction.

The fourth stage is cultivate – cultivate your scorecard, using the draft you have developed to review performance, to see how you are doing overall as an organization and how practical the scorecard is. In the book, the fictional company’s top team finds that customer satisfaction scores are coming down. But as they grapple with that unhappy news, they realize scores are actually better than the previous year; the scales were changed over the period and the scores never properly adjusted. Moreover, the customer-satisfaction scores include only reports from the help desk, leaving out the feedback cards and the evaluations from quarterly customer interviews.

Those are simplistic examples, but the point is clear: You will need at this stage to refine your data, to make sure you are properly measuring the key objectives.

After refining this top-level scorecard, you move on to the cascade step, in which the scorecard drills down to the next level, that of key officials in the department and their work groups. Having grappled with the overall scorecard, these officials should clearly understand the goals they have as a team. With that insight, they need to prepare individual scorecards showing how they will contribute to overall success.

Again, they need to limit themselves to the vital few: too many measures will just confuse them. And in some cases, their individual goals will simply be the departmental goals, since the overall numbers can’t be sliced and diced between the different senior managers. In some instances it’s difficult to determine how much each manager contributes, but all of them do.

A key element in keeping to the vital few is ensuring that the scorecard lists the right measures for the organizational unit under consideration. Think through how work gets done in the organization, and at what level influence over processes and success is held. That’s obviously not an easy philosophical notion to put into practice, but it’s essential if scorecards are to cascade within the organization and alignment is to be achieved. Also essential is checking that the individual managers’ scorecards mesh with each other.

In some cases, you will need to build indexes that bring together a series of measures into one number for the manager to watch. The manager can then drill down to the individual figures, but at least he or she has an overall sense from the index of how the work group is faring.
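The index idea can be sketched in a few lines of code. The measure names, weights and targets below are invented for illustration; the point is simply that several normalized measures roll up into one headline number, while the detail remains available to drill into.

```python
def normalize(value, target):
    """Express a measure as percent of its target (capped at 150%)."""
    return min(value / target * 100, 150)

def composite_index(measures, weights, targets):
    """Weighted average of normalized measures; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(
        weights[name] * normalize(measures[name], targets[name])
        for name in measures
    )

# Hypothetical work-group measures, targets and weights:
measures = {"on_time_delivery": 92, "satisfaction_score": 4.2, "cost_savings": 3.0}
targets  = {"on_time_delivery": 95, "satisfaction_score": 4.5, "cost_savings": 5.0}
weights  = {"on_time_delivery": 0.4, "satisfaction_score": 0.4, "cost_savings": 0.2}

index = composite_index(measures, weights, targets)
print(round(index, 1))  # one headline number; drill into `measures` for detail
```

The manager watches the single index figure day to day, and only drills into the underlying measures when it moves unexpectedly.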

Rather than being set in stone, the scorecard will have to be refined as managers monitor progress. The authors offer six things to consider as the scorecards are refined:

  • Do a gut check. Keep your measures in perspective, using your intuition. If you suspect things are improving but the numbers don’t reflect that feeling, check definitions, performance scales, and other facets of how the numbers are tabulated and reported. In the same way, if the numbers are moving up but nothing seems to have changed, check for number inflation, shifting definitions, improperly recorded results, or other glitches.
  • Routinely review measurement definitions and terms. The book offers an example of a team that defined service response time as the interval from when a technician received a service order until the technician showed up at the customer site. But the customers considered response time as starting when the help desk was called, not when the technician was alerted.
  • Conduct random audits to verify that measures are being collected properly and data is complete. Let people know that you expect truth in your reports.
  • Don’t monitor too many indicators. The tendency to pile up indicators is strong, and must be resisted.
  • Don’t monitor too few indicators, either. ‘Monitoring performance is like driving your car. There are a few indicators you watch closely and regularly: Speed, fuel level, and engine revolutions. Others are watched less often, but are important: oil pressure, total miles and engine temperature,’ the authors note. Use enough indicators so you know you are moving in the right direction.
  • Drop obsolete measures. That is hard to do, but as the organization changes, you need to scrap what isn’t essential to your scorecard.

The next stage is connect – managers have to take the scorecards they have devised for their work group and repeat the process with their team members, so everyone in the work force has an individual performance plan. That will require helping them to understand the overall picture, and their role in it. They will also need coaching sessions throughout the year, as the measures evaluating p