

Data that is Reliable, Relevant, and Robust
Data collection was viewed as a resource-intensive process, requiring time from staff external to program and service departments to collect data, and from internal staff to enter the data into databases. This is particularly challenging for a large department that works with many clients. The time contributed by volunteers and students was viewed as a valuable resource in this regard. However, since resources will always be a limiting factor, it is all the more important that the data collected yield sufficiently valid and timely information.
Reliability of data was identified as an area for development, both for effective program planning and service delivery and for maintaining relevance to current client and/or target demographics. Departments also prioritize data that is informative from a variety of perspectives. For example, the decision to “hire five more staff” is ideally informed by data that can address multiple service planning questions, such as: “What languages should they speak?”, “How many clients are they serving?”, and “What populations are we serving?” Finally, effective organizational processes for data collection can ensure data quality. For example, one program described how reliable feedback from service providers can help to develop a universal evaluation tool usable across multiple programs.
Building a Culture of Evaluation
During discussion of the organization’s approach to and expectations of evaluation, respondents were often unclear on a number of questions, including: “Should entire departments be evaluated, or should individual programs be evaluated with less intensity?” At present, the Program Planning and Evaluation Policy is supported by guidelines that outline only very general practices for program managers and directors. An effectively communicated framework or pathway of explicit guidelines would remove any confusion among staff about what the organization’s position statement expects of them when undertaking evaluation, namely: ‘What to evaluate?’ (i.e. individual services or programs vs. the department as a whole, and which indicators are of interest); ‘How to evaluate?’ (i.e. methodology and tools); ‘How often to evaluate?’; ‘How much should be spent?’ (i.e. an explicit, dedicated, and communicated budget for program evaluation activities); ‘Who is the audience?’ (i.e. clarity on how the findings will be used, and who will be affected); and ‘What is our part?’ (i.e. clarity around the role of the organization’s evaluation team in facilitating the evaluation).
Departmental staff suggested using a longitudinal program evaluation calendar spanning several years, allowing sufficient time to effectively measure any changes put in place. They also acknowledged the need to have the evaluation team fully integrated into all stages of the program cycle. Therefore, expanding existing policy to include explicit program evaluation expectations (frequency, methods, etc.), as well as an articulation of the level of support provided by the evaluation department, would be ideal.
Leadership
Clearly, navigating evaluation activity at the program level requires expertise and support from evaluation staff. Indeed, program teams called for technical guidance on evaluation methodology and the interpretation of statistics, and wanted to learn more about evaluation tools, statistical software, and interpreting statistical tests. Respondents also expressed an interest in better interpreting and understanding the variety of data sources drawn upon by departments (e.g. administrative data, the annual Client Experience Survey, and individual program evaluations), particularly where those sources intersect or conflict. They asked: “How do we synthesize results from a variety of stories into a cohesive story?” Therefore, both technical and conceptual support are needed from the evaluation team, in the form of capacity-building or training sessions for managers and directors, as well as individual consultations and meetings with departments.
The Canadian Centre for Accreditation made the following statement during its evaluation of the organization: “(Access Alliance) is very strong in the area of Quality Improvement and evaluation.” The quality and rigour of the organization’s evaluation process and data management methodology give the evaluation team the opportunity to play a dynamic role in knowledge sharing and policy advocacy, as well as in securing and advocating for organizational resources. Once established in a position of leadership, the evaluation team may even assist other organizations in building their capacity, simultaneously providing external support and strengthening partnerships.
Although scope and budget represent a source of risk for the sustainability of an organization’s evaluation practices, the systematic auditing of organizational activities can not only improve the quality of all programs and services, but also provide evidence for how resources can be realistically and optimally leveraged for the benefit of all stakeholders.
Miranda Saroli is a Research Assistant in the Community-Based Research Department at Access Alliance Multicultural Health and Community Services.
AKM Alamgir is Manager, Quality and Accountability Systems of Access Alliance.
Morris Beckford is the Director of the Community Health and Wellness Department of Access Alliance Multicultural Health and Community Services.
Sonja Nerad is an independent consultant and the Managing Director of SN Management.
Axelle Janczur is the Executive Director of Access Alliance Multicultural Health and Community Services.