Have you ever asked yourself: “How would I evaluate the Quality Management System (QMS) in my organization?” The answer really depends on why you are doing the evaluation. To answer it, you need to engage stakeholders, know something about evaluation and, preferably, a little about Quality Management.
A quality management system (QMS) can be expressed as the organizational structure, procedures, processes and resources needed to implement Quality Management or Business Excellence. The QMS description starts with a statement of need describing the problem or opportunity that the QMS addresses and the results expected. The linkage to the organizational mission, vision and strategic outcomes should be identified. The description should discuss what the QMS must accomplish to be considered successful, that is, the vision and objectives of the QMS, as well as its capacity to effect change, its stage of development, and how it fits into the larger organizational priorities. Resource descriptions should include the people, money and technology used to conduct QMS activities.
As decisions in the evaluation will be based on the description of the QMS, the description should identify the activities the system performs to effect change and display them in a logic model.
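A logic model of this kind can be represented as a simple data structure linking inputs, activities, outputs and outcomes. The sketch below is purely illustrative; the field names and the example entries are assumptions, not part of any QMS standard:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Illustrative logic model for a QMS evaluation (field names are assumed)."""
    inputs: list = field(default_factory=list)      # people, money, technology
    activities: list = field(default_factory=list)  # what the QMS does to effect change
    outputs: list = field(default_factory=list)     # direct products of the activities
    outcomes: list = field(default_factory=list)    # results the QMS is expected to achieve

# Hypothetical example for an organization's QMS:
qms_model = LogicModel(
    inputs=["quality staff", "training budget", "audit software"],
    activities=["internal audits", "process reviews", "staff training"],
    outputs=["audit reports", "corrective actions"],
    outcomes=["fewer defects", "higher client/citizen satisfaction"],
)
print(qms_model.activities)  # ['internal audits', 'process reviews', 'staff training']
```

Laying the model out this way makes the chain from resources to results explicit, which is what the evaluation's later judgments will be tested against.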
The evaluation cycle begins by engaging the people with a vested interest in what will be learned from an evaluation and what will be done with the knowledge. These stakeholders include: those responsible for the functional management of the QMS; those involved in using the QMS to improve program delivery (e.g., managers, employees, administrators, partners, suppliers); those served or affected by the QMS (e.g., employees, clients, citizens, the government, central agencies, advocacy groups); and the primary users of the evaluation (e.g., the specific persons who are in a position to act on recommendations regarding the program).
Purpose of evaluation
The purpose of the evaluation must be aligned with the issues of greatest concern to stakeholders. Stakeholders need to know what they will do as a result of the evaluation. For example, they may want to understand, verify or increase the impact of the QMS on products or services so as to satisfy clients/citizens. Or they may want to make delivery processes more efficient – and there is a host of other possible decisions that can be reached.
Audience(s) for the evaluation
The stakeholders will also decide who the audiences are for the information obtained from the evaluation. The audience could be made up of clients/citizens, senior management, the Board of Directors, Treasury Board Secretariat or internal employees.
Information needed for evaluation
After the first two decisions are made by stakeholders, the kinds of information needed to support decision-making and/or inform the intended audiences can be determined. You may want information to understand: the product/service delivery process (its inputs, activities and outputs); the benefits to clients/citizens (results); whether the organization is spending more than necessary on its QMS and, if so, why; and whether there are opportunities for improving the Quality Management System's design and delivery (cost-effectiveness).
Type of evaluation
Based on the purpose of the evaluation and the kinds of information needed, the stakeholders can state what type of evaluation should be planned. It may be developmental – to identify areas for improvement early in the program, organizational excellence-based, or perhaps comprehensive – a mixture of several types.
Sources of information and evidence
Stakeholders should be involved in defining the data they would find pertinent, so that they will be more likely to accept the evaluation's conclusions and recommendations. Evaluators should collect information/evidence that stakeholders perceive as believable and relevant for answering their questions. Such decisions depend on the evaluation questions being posed and the reasons for asking them. There are many potential sources of information, including staff/employees, clients/citizens, Management Accountability Framework (MAF) assessments, Program Evaluation and Auditor General Reports.
Tools for collecting information
It is essential that the information be collected in a manner that facilitates collection, manipulation and analysis. The techniques for gathering evidence in an evaluation must be aligned with the organizational culture, and data-collection procedures should be examined carefully to ensure the confidentiality of information and sources. The evaluator will likely use a number of tools, such as a written survey or questionnaire (e.g., handout, telephone, fax, mail, e-mail or Internet), observation of clients/citizens/managers/employees, and focus groups among various stakeholders, so that there can be corroboration and multiple approaches to enhance credibility.
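The corroboration idea can be made concrete with a small tally: for each evaluation question, check whether a majority of independent tools point the same way. Everything in the sketch below (the question labels, the tools and the yes/no answers) is invented for illustration only:

```python
from collections import defaultdict

# Hypothetical findings per evaluation question, keyed by collection tool.
evidence = {
    "survey":      {"clients_satisfied": True,  "process_efficient": False},
    "focus_group": {"clients_satisfied": True,  "process_efficient": False},
    "observation": {"clients_satisfied": True,  "process_efficient": True},
}

def corroboration(evidence):
    """A finding is corroborated when a majority of tools agree on it."""
    tally = defaultdict(list)
    for tool, findings in evidence.items():
        for question, answer in findings.items():
            tally[question].append(answer)
    return {q: answers.count(True) > len(answers) / 2
            for q, answers in tally.items()}

print(corroboration(evidence))
# {'clients_satisfied': True, 'process_efficient': False}
```

In practice the answers would rarely be clean booleans, but the principle is the same: findings supported by several independent tools carry more credibility with stakeholders than findings from a single source.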
Evaluation conclusions are warranted when they are supported by the evidence gathered and assessed against criteria or standards agreed to by the stakeholders. Before stakeholders will agree to the recommendations, they must believe that conclusions are sound. Justifying conclusions involves the following steps:
1. Analysis of the evaluation's findings to identify the important ones, and synthesis of the various sources of information to reach a larger understanding.
2. Interpretation of the findings to make sense of the evidence so as to appreciate the relevance of what has been learned.
3. Judgments of the QMS by comparing the findings and interpretations against agreed criteria and standards.
4. Recommendations for continuing, improving, expanding, or terminating the QMS are separate from judgments, but based on them.
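The last two steps above, judgment against agreed criteria and recommendations based on those judgments, can be sketched as a simple pipeline. The criteria names, scoring scale and recommendation wording below are hypothetical illustrations, not a prescribed method:

```python
# Hypothetical findings (synthesized from multiple sources) and agreed
# criteria, each expressed as a minimum acceptable score out of 10.
findings = {"client_satisfaction": 8.2, "process_efficiency": 5.1}
criteria = {"client_satisfaction": 7.0, "process_efficiency": 6.5}

def judge(findings, criteria):
    """Step 3: compare each finding against the agreed standard."""
    return {area: findings[area] >= criteria[area] for area in criteria}

def recommend(judgments):
    """Step 4: recommendations are separate from judgments but based on them."""
    return ["continue " + area if met else "improve " + area
            for area, met in judgments.items()]

judgments = judge(findings, criteria)
print(recommend(judgments))
# ['continue client_satisfaction', 'improve process_efficiency']
```

Keeping the judgment step distinct from the recommendation step mirrors the article's point: stakeholders must first accept that the conclusions are sound before they will agree to act on the recommendations.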
Ensuring evaluation recommendations are implemented
Initiating the evaluation with the stakeholders and continuing their engagement throughout will go a long way toward ensuring that the evaluation recommendations are implemented. The following are critical elements for ensuring implementation of the evaluation recommendations:
· Consult with the stakeholders as necessary to ensure their understanding and acceptance of the findings, conclusions and recommendations.
· Provide the responsible managers with advice/interpretations/information in developing their Action Plan to deal with the recommendations.
· Present the Evaluation Report, and have the responsible managers present their Action Plan at the same time, to the “Evaluation Board” of senior managers and external people who provide an oversight to the management of the department’s programmes.
· Ensure commitments in the Action Plan are incorporated, as appropriate, in the Strategic Plan, Business Plans and Performance Management Agreements.
· Make the evaluation report public, in particular management’s action plan and commitment to address the recommendations.
John Thomas is president of John F. Thomas Management Consulting and a director of the Canadian Public Sector Quality Association (CPSQA). Don Wilson is the executive director for the National Capital Region for the National Quality Institute (NQI).