A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions
- “evaluation is all about asking and answering questions that matter” (p. 3)
- “evaluation can be an important strategic tool for measuring:
- the extent to which a program or initiative’s goals are being met
- the ways in which a program or initiative’s goals are being met
- how the program or initiative might be contributing to the organization’s mission” (p. 3; slightly paraphrased to make it a bulleted list)
- “develop a set of evaluation questions that reflect the perspectives, experiences, and insights of as many relevant individuals, groups, organizations, and communities as possible” –> relevant and useful evaluation (p. 3)
- “evaluations should be conducted in ways that increase the likelihood that the findings will be used for learning, decision-making, and taking action” (p. 6)
- “Good evaluation questions:
- establish boundaries and scope of an evaluation […]
- are broad, overarching questions that the evaluation will seek to answer […]
- reflect diverse perspectives and experiences
- are aligned with clearly articulated goals and objectives
- can be answered through data collection and analysis” (p. 8)
- benefits of engaging stakeholders in designing evaluation questions:
- “increases quality, scope, and depth of questions” (p. 10)
- “ensures transparency” (p. 10)
- “ensures that the evaluation questions have been thoroughly vetted and thoughtfully crafted and that they are the right questions to be asking” (p. 10)
- “raises awareness of the evaluation itself and may contribute to building an audience for the eventual findings” (p. 10)
- “communicates a commitment to being inclusive (vs. exclusive), outward looking (vs. inward looking), and expansive (vs. insular). Stakeholders not only help navigate the political waters more effectively, but also serve to position the evaluation so that findings are perceived to be useful, relevant, and credible and are more likely to be used” (p. 10)
- “builds evaluation capacity” (p. 11)
- “fostering relationships and collaboration” among the stakeholders (p. 11)
- 5 step process:
- prepare (understand the program being evaluated)
- identify potential stakeholders
- you need people with: expertise, different perspectives/experiences, responsibility for the program, position of influence, interest in the issues, proponents of evaluation
- people can fulfill more than one of the above roles
- internal and external
- prioritize the list of stakeholders (vital/important/nice to have; may help you see if you missed anyone)
- consider potential stakeholders’ motivation to participate (commitment to goals of the program, personal stake, professional development, compensation) – see the sketch after this list
- select an engagement strategy
- criteria for selecting: time, budget, geographic location of stakeholders, range of perspectives, extent of existing relationships in the group, availability of stakeholders, number of stakeholders, their familiarity with evaluation, degree of complexity of the evaluand
- can use more than one strategy (e.g., one strategy with some stakeholders and a different strategy with others, or a two-part process: one strategy followed by another)
- page 23 of the book shows a table that helps match your answers to the above criteria with the best strategies
- strategies include: one-on-one meetings, group meetings, logic modelling, mind mapping, Appreciative Inquiry, role playing, brainstorming, nominal group technique, discussion of article/presentation, moderated discussions, surveys, Delphi technique
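- [my own illustration, not from the guide] a minimal Python sketch of how one might track the identify/prioritize/motivation steps above, using only the roles, priority tiers, and motivations named in these notes; all field and function names are hypothetical:

```python
# Hypothetical sketch (not from Preskill & Jones): one way to record a stakeholder
# list during the identify/prioritize steps and spot gaps in role coverage.
from dataclasses import dataclass, field

ROLES = {"expertise", "perspective/experience", "responsibility for the program",
         "position of influence", "interest in the issues", "proponent of evaluation"}

@dataclass
class Stakeholder:
    name: str
    internal: bool                                  # internal vs. external stakeholder
    priority: str                                   # "vital", "important", or "nice to have"
    roles: set = field(default_factory=set)         # which of ROLES they fulfil (can be several)
    motivations: set = field(default_factory=set)   # e.g., commitment to program goals, personal stake

stakeholders = [
    Stakeholder("Program manager", True, "vital",
                {"responsibility for the program", "expertise"},
                {"commitment to goals of the program"}),
    Stakeholder("Community representative", False, "vital",
                {"perspective/experience", "interest in the issues"},
                {"personal stake"}),
    Stakeholder("Funder liaison", False, "important",
                {"position of influence"}, {"professional development"}),
]

# Roles not yet covered by any "vital" stakeholder; a gap here suggests the list
# may be missing someone (one purpose of the prioritization step).
covered = set().union(*(s.roles for s in stakeholders if s.priority == "vital"))
print("Roles not covered by vital stakeholders:", ROLES - covered)
```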
- things to consider when developing evaluation questions:
- “what does success of your program look like?”
- “what would we need to know to explore the extent to which the program is effective or successful?”
- “what questions seem to come up repeatedly, in conversations with others, or in your own work, concerning the effectiveness, impact, and/or success of this program or initiative?” (p. 29)
Source: Preskill, H., & Jones, N. (2009). A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions. Princeton, NJ: Robert Wood Johnson Foundation Evaluation Series.
Evaluation Models & Evaluation Use
- conducted a systematic review on “collective-level knowledge exchange”, where “collective-level” refers to “interventions occurring at the organizational level or in policy-making arenas, as distinct from interventions targeting modification of individual behaviours.” (p. 62) – this made me think of how we defined “patient engagement” in my work on the AWESOME model (i.e., engaging patients on health services and systems level planning, as opposed to engaging patients in decision making about their own care).
- collective-level systems are “characterized by high levels of interdependency and interconnectedness among participants” and “all participants receive information from various sources, make sense of it, modify it and produce new information aimed at others” (p. 62) – that is, people don’t just make decisions based on the scientific evidence alone
- the review “found no credible, empirical data showing any positive link between level of use and information’s internal validity or the conformity of its production process with scientific procedures” (p. 65)
- knowledge use depends on:
- sense-making
- coalition building
- persuasion
- rhetoric
- “action proposals” are “assertions that employ rhetoric to embed information into arguments to support a causal link between a given course of action and anticipated consequences” (p. 63) – e.g., “we should do X, because the evidence suggests it will lead to Y”; presenting evidence on its own, without embedding it in such an argument, generally isn’t sufficient to convince people
- “collective-level knowledge use” = “the process by which users incorporate specific information into action proposals to influence others’ thought, practice, and collective action rules” (p. 63) – this definition “dissociates knowledge use from actual practices or outcomes” (p. 63) – that is, this definition refers to using knowledge to form recommendations (or guidelines, etc.), but does not include the next step of those recommendations/guidelines/etc. actually being put into practice (a person has less influence over the latter, as other influences also come into play)
- information –> recommendations based on that info –> recommendations put into practice
- two core dimensions of the context of use:
- issue polarization & ideology
- when information is contrary to what someone already believes, they tend to ignore it or at least subject it to stronger skepticism (than if it fits within their current beliefs)
- in a given situation, there will be multiple people who may have different beliefs/perceptions about a given piece of information
- low issue polarization = when potential knowledge users agree
- that there is a problem
- the problem is important (relative to other potential issues)
- on criteria that should be used to judge potential solutions
- cost-sharing equilibrium in knowledge exchange systems
- “knowledge has both a cost and a value” (p. 64)
- someone has to pay the cost of the knowledge exchange
- people will pay the cost of knowledge exchange to the extent they see it as providing value
- in a knowledge exchange there are:
- users: those “who hold institutionally sanctioned positions that allow them to intervene in the practices, rules and functioning of organizational, political or social systems” (p. 64)
- producers: those “who contribute to legitimate knowledge production institutes without having capacity to put the knowledge developed to use.” (p. 64) – e.g., academic researchers or evaluators who generate new knowledge that could be useful to inform health services, but don’t have a role in a health services organization
- intermediaries: other stakeholders/lobbies who “will want to have their say and will contribute to the information flow” (p. 64)
- there is a cost-sharing equilibrium between users and producers/intermediaries
- users:
- have a finite amount of attention
- have to balance the different pieces of information they receive
- “use of knowledge is influenced by its:
- relevance (timeliness, salience and actionability)
- credibility
- accessibility” (p. 65)
- pre-existing opinions influence the perception of both (A) relevance and (B) credibility
- the authors created a framework for classifying different approaches to evaluation based on the two dimensions of level of issue polarization and cost-sharing equilibrium
| cost-sharing equilibrium rests mostly on… | low issue polarization | high issue polarization |
|---|---|---|
| users | | |
| producers | | |
- the idea of the framework is not to definitively place evaluation models/approaches on the grid but “to offer some insights into the relationship between use, models, and contexts” (p. 66)
- they showed some examples of where they would place different models on the grid (some models crossed over into multiple quadrants):
| cost-sharing equilibrium rests mostly on… | low issue polarization | high issue polarization |
|---|---|---|
| users | UFE, EE | UFE |
| producers | RE | RE, Democratic E |
- UFE = utilization-focused evaluation, RE = realistic evaluation, EE = empowerment evaluation, Democratic E = democratic evaluation
- and then synthesized it into this:
| cost-sharing equilibrium rests mostly on… | low issue polarization | high issue polarization |
|---|---|---|
| users | the utilization paradise | UP / LZ (overlap) |
| producers | the knowledge-driven swamp | the lobbying zone |
- where the utilization paradise (UP) and lobbying zone (LZ) overlap into the upper right quadrant
- when users are willing to bear the costs of knowledge exchange, the likelihood of use increases
- when evaluators/researchers bear most of the costs of knowledge exchange, the likelihood of use is low
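- [my own illustration, not from the article] to make the synthesized grid concrete, a minimal Python sketch that maps the two dimensions onto the four zones as laid out above; the function name and string labels are mine:

```python
# Hypothetical sketch (not from Contandriopoulos & Brousselle): mapping the two
# dimensions onto the zones of the synthesized grid. The users/high-polarization
# cell returns both labels, reflecting the UP/LZ overlap noted above.

def classify_context(cost_borne_mostly_by: str, issue_polarization: str) -> list[str]:
    """cost_borne_mostly_by: 'users' or 'producers'; issue_polarization: 'low' or 'high'."""
    grid = {
        ("users", "low"): ["utilization paradise"],
        ("users", "high"): ["utilization paradise", "lobbying zone"],  # overlap quadrant
        ("producers", "low"): ["knowledge-driven swamp"],
        ("producers", "high"): ["lobbying zone"],
    }
    return grid[(cost_borne_mostly_by, issue_polarization)]

# e.g., users shouldering most of the exchange costs on a non-polarized issue
# puts the evaluation in the "utilization paradise", where use is most likely
print(classify_context("users", "low"))  # ['utilization paradise']
```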
- the authors consider strategic or symbolic use of knowledge better than no use (as long as the information is “not totally erroneous” (p. 70))
- while some argue that whether evaluation findings get used is the result of the evaluator being able to “influence and encourage use”, “contextual characteristics explain in large part the level and nature of results use” (p. 71)
- some implications they draw out from this framework:
- evaluators (where they have the freedom to) should choose the evaluation model that best fits the context
- sometimes evaluators just need to realize that “they are working in a context where a high level of instrumental use is unlikely” (p. 71)
- evaluators can work to influence the context (e.g., work to reduce polarization)
Source: Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61-77. (Abstract)