I’ve been thinking about how I’m going to address the many layers of “context” in the evaluation I’m currently working on, and I came across a few articles that I found quite useful. Here are my notes from reading them.
- We often talk about the importance of context in evaluation, “yet the various contextual factors that influence evaluation are rarely considered in much depth in the evaluation literature” (Fitzpatrick, 2012, p. 8).
- “There is not a unified understanding of context or a comprehensive theory that guides our work” (Rog, 2012, p. 26) – and context and culture are often used interchangeably
| To… | Context is… |
|---|---|
| quantitative evaluators | …a “source of influence to be controlled” |
| realist & theory-oriented evaluators | …a “source of explanation” |
| qualitative evaluators | …an “inseparable element embedded in program experiences and outcomes” |

(Rog, 2012, p. 26)
Definitions of context
There isn’t a single, universally accepted definition of context. Some definitions include:
- Context: “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood or assessed” (Oxford English Language Dictionary (OELD), cited in Fitzpatrick, 2012)
- Context (specific to evaluation): “the setting within which the evaluand […] and thus the evaluation are situated. Context is the site, location, environment, or milieu for a given evaluand.” (Greene, cited in Fitzpatrick, 2012) – most evaluands are situated in multiple contexts that have several layers and dimensions (Rog, 2012, p. 27)
- 5 dimensions of context:
- “demographic characteristics of the setting and people in it
- material and economic features
- institutional and organizational climate
- interpersonal dimensions or typical means of interaction and norms for relationships in the setting
- political dynamics” (Greene, cited in Fitzpatrick, 2012)
- Context (again, specific to evaluation): “the combination of factors (including culture) accompanying the implementation and evaluation of a project that might influence its results, including geographical location, timing, political and social climate, economic conditions, and other things going on at the same time as the project. It includes the totality of the environment in which the project takes place.” (Thomas, cited in Fitzpatrick, 2012; emphasis mine)
- “Out of context” = “without the surrounding words or circumstances and so not fully understandable” (OELD, cited in Fitzpatrick, 2012; emphasis mine)
- Patton discusses context as a “sensitizing concept” (i.e., something that, rather than needing to be “operationalized”, is used to “provide some initial direction to a study as one inquires into how the concept is given meaning in a particular place or set of circumstances”) (Patton, 2007, p. 102)
- “Systems thinkers posited that system boundaries are inherently arbitrary, so defining what is within the immediate scope of an evaluation versus what is within its surrounding context is inevitably arbitrary, but the distinction is still useful. Indeed, being intentional about deciding what is in the immediate realm of action of an evaluation and what is in the enveloping context can be an illuminating exercise—and stakeholders might well differ in their perspective” (Patton, 2007, p. 102)
Why Consider Context in Evaluation?
- to increase evaluation use
- to give voice to local issues
- to explain program effects – to “identify those contextual elements that prompt a program to succeed or fail” (Fitzpatrick, 2012, p. 13)
- context “affects the implementation and outcomes of the interventions that we study” (Rog, 2012, p. 25)
- context can help us to choose an appropriate evaluation approach (a “context-first approach”, as opposed to the “methods-first” orientation [i.e., the old “when you have a hammer, every problem looks like a nail”])
- “Much like the question we strive to answer in our evaluations, “What works best for whom under what conditions?” context-sensitive evaluation practice asks “what evaluation approach provides the highest quality and most actionable evidence in which contexts?” (Mark, 2001, cited in Rog, 2012, p. 26)
- Context can help “provide more direction in replication and generalizability of findings” (Rog, 2012, p. 37)
What Happens When We Don’t Consider Context/Culture?
- Evaluators have a “responsibilit[y] to:
- attend to cultures different from their own or different from the dominant cultures
- seek knowledge and understanding of different cultures
- involve stakeholders from participating cultures in the planning, conduct, interpretation, and reporting of the evaluation” (Fitzpatrick, 2012, p. 14)
- One’s own “personal contexts and values influence how they see, or fail to see, other cultures” (Fitzpatrick, 2012, p. 14)
- If we don’t consider context/culture, we risk:
- “identifying the wrong questions to frame the evaluation
- ignoring key stakeholders who are potentially strong users of evaluation
- misinterpreting stakeholder priorities or even program goals
- collecting data with the use of words or nonverbal cues that have different meanings than to the audience
- failing to describe the program accurately
- failing to understand [the program’s] outcomes because the evaluator is unable to notice nuances or subtleties of the culture
- reporting results in means only accessible by the dominant culture or those in positions of power” (Fitzpatrick, 2012, p. 14)
Ways to Attend to Context/Culture
- “careful examination of one’s own values, assumptions, and cultural contexts” (Fitzpatrick, 2012, p. 14)
- “inclusion of community members and program participants in evaluation planning and other phases” (Fitzpatrick, 2012, p. 14)
- engaging those who use the program provides “an unparalleled perspective” and can help “guide designs that are more feasible, measurement that is more focused, and interpretations that offer new insights” (Rog, 2012, p. 32)
- can also “foster transparency of methods, reveal flaws and suggest study qualifications, and in turn help to promote study findings as credible and having integrity” (Rog, 2012, p. 32)
- “careful observation and respectful interactions and reflection on what has been learned” (Fitzpatrick, 2012, p. 14)
- “training the evaluation team to be culturally responsive” (Fitzpatrick, 2012, p. 14)
- note that “cultural competence is needed in every evaluation, not just those where cultural norms ‘hit us in the face’ as being different from our own. Every group of participants has, or develops, a culture and that culture influences the program and the evaluation” (Fitzpatrick, 2012, p. 16)
- using a “range of methods to accommodate and incorporate context”
- use “strategies to rule in or rule out alternative explanations” (Rog, 2012, p. 33)
- can conduct “systematic plausibility analysis of threats to validity” – i.e. collecting data on the intervention’s theory of change, but also on the “plausibility of rival explanations” (Rog, 2012, p. 33)
- conduct an evaluability assessment – in addition to helping to ensure a program is ready for evaluation, deciding what type of evaluation would be appropriate for the program, and ensuring the program has an “internal logic that is implemented with integrity” (Rog, 2012, p. 34), an evaluability assessment can help us to understand:
- “how a program fits within the broader environment
- [what] features of the environment may moderate the effects of the program
- how the evaluation may need to be structured to be maximally sensitive to this area of context” (Rog, 2012, p. 34)
- often using multiple methods (qual & quant) can help us “go beyond determining whether a program works or not to explore why the outcomes occur or fail to occur. This can include elaborating our theories to include potential mediators in a program that link activities with outcomes and testing for them in our analyses.” (Rog, 2012, p. 35)
- “much like we have opened the black box of programs and provided useful data on the role that program mechanisms can have in triggering outcomes, I hope we can also navigate and explore the black hole of context and determine what aspects and areas of context have a role in determining the success or failures of interventions” (Rog, 2012, p. 35)
- some analytic tools that can be useful:
- social network analysis
- systems thinking approach (e.g., instead of just a logic model of the program, include “other influences that are assumed to be operating” (Rog, 2012, p. 36))
- looking at “distributions of outcomes and […] patterns of change”, rather than just “measures of central tendency […] that may not be sensitive to the differences that often result from complex, dynamic interventions” (Rog, 2012, p. 36)
- e.g., identifying subgroups – as sometimes programs work better for some subgroups than others (e.g., if elements of context are influencing outcomes); see the quick sketch after this list
- multisite studies can help us to “measure the influences of the broader environment on programs” (because we get to look at the outcomes that result from the same program in different contexts or look at how programs are adapted to different contexts and this can help us understand why they work or don’t work)
- Conduct a Context Assessment (see next section)
- Note: we need to balance context, stakeholder needs, and rigour when designing and executing an evaluation.
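(Not from the articles, just me thinking out loud: here’s a minimal sketch of what the “distributions and subgroups rather than just central tendency” idea might look like in practice. It assumes pandas, and the data, site names, and subgroup labels are all made up.)

```python
# Hypothetical data: one row per participant, with the site where they
# participated, a subgroup label, and an outcome score (all invented).
import pandas as pd

df = pd.DataFrame({
    "site":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "subgroup": ["youth", "youth", "adult", "adult",
                 "youth", "youth", "adult", "adult"],
    "outcome":  [3.5, 4.0, 7.5, 8.0, 7.0, 6.5, 8.5, 8.0],
})

# A single overall mean can mask subgroup and site differences...
print("overall mean:", df["outcome"].mean())  # ~6.6

# ...so look at the distribution of outcomes within each site/subgroup:
summary = (
    df.groupby(["site", "subgroup"])["outcome"]
      .agg(["count", "median", "min", "max"])
)
print(summary)  # youth at site A score much lower than youth at site B
```

If a pattern like that shows up (the same program “working” for one subgroup or site but not another), that’s exactly the cue to go back and ask what elements of context differ between those settings.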
Context Assessment: Areas of Context That Affect Evaluation Practice
Rog (2012) proposes a framework that includes 5 areas where context can affect evaluation practice, each of which has 7 possible dimensions.[1]
- Context of the Problem/Phenomenon Being Studied
- what’s already known about the problem that the program addresses?
- what kinds of studies have already been done?
- what tools are available?
- how it affects evaluation: if not a lot is already known about the issue, it’s less likely that you can “have a controlled understanding of the intervention and its effects” (Rog, 2012, p. 29), and you may need to be more descriptive in your study to understand the problem better
- Context of the Intervention
- structure, complexity, dynamics of the intervention
- e.g., the stage in the project lifecycle has implications for how you evaluate (e.g., you wouldn’t do an outcome evaluation on a program that is currently being developed – you might plan for one long-term, but you wouldn’t expect to see outcomes immediately)
- “how dynamic and evolving a program is, how complex with respect to its theory of change, and the extent to which it blurs with the setting itself” (Rog, 2012, p. 29) has implications for how you choose to design the evaluation
- interventions that blur with their broader environment make it “difficult to make attributions of change to the intervention because of the number of confounding externalities” and “hard to trace the exact causal mechanism(s)” (Rog, 2012, p. 29)
- how it affects evaluation: for highly dynamic/complex interventions, may “need to have multiple indicators, multiple methods and […] need to examine multiple pathways to see if and how change occurs” (Rog, 2012, p. 29); when it is expected that it will take a long time to see outcomes, may “require interim measures sensitive to showing that the intervention is making changes in the short-run that indicate it is on the right track”(Rog, 2012, p. 29)
- Broader Environment/Setting of the Intervention
- this is what evaluators tend to most commonly think of when they think about “context”
- often multilayered (e.g., a school setting, which is in a school district, a broader community, a state)
- programs often blur with their contexts (e.g., a community change initiative aims to change the community in which it sits)
- how it affects evaluation:
- if an intervention is being rolled out in different communities, can look at how it is adapted in those communities (e.g., is the original theory of change intact? what factors in the context influenced implementation and outcomes?)
- need to “understand the ways in which the broader environment affects the ability of an intervention to achieve outcomes” (Rog, 2012, p. 30) – this is “critical to understanding the generalizability of the evaluation findings to other contexts or situations” (Rog, 2012, p. 30)
- Parameters of the Evaluation
- the method(s) you choose are influenced by the available “budget, time, and data” (Rog, 2012, p. 30)
- how it affects evaluation: evaluations often come with constrained budgets, timelines, and available data, so this will constrain your choice of methods (e.g., if there’s no baseline data available, you can’t do a regular pre-post test, so you might need to be creative in coming up with data you can use to see if things have improved; with limited resources, you have to decide which of many possible things you could measure will actually get measured; if the timeline for the evaluation is shorter than when you can reasonably expect outcomes to occur, you may need to design an evaluation that looks at whether you are on track to achieve those outcomes down the road).
- Broader Decision-making Context
- need to “understand who the decision makers are, the types of decisions they need to make, the standards of rigor they expect, and the level of confidence that is needed to make the decisions, as well as other structural and cultural factors that influence their behavior” (Rog, 2012, p. 32)
- how it affects evaluation: understanding the decision-making context allows the evaluator to design an evaluation that is more likely to get used.
Conner et al. (2012) took Rog’s framework and created a process they called “Context Assessment”, which they advocate be used to “plac[e] context among the primary considerations” (p. 89) in an evaluation.
Context Assessment (CA):
- “prompt[s] evaluators to consider context more explicitly and carefully” (Conner, 2012, p. 93)
- “prompt[s] evaluators to consider which elements of context might be most important for the evaluator to consider at different stages of a particular evaluation” (Conner, 2012, p. 93)
- doesn’t require you to “catalogue all elements of context, but instead focus on those [you] identify as most relevant” (Conner, 2012, p. 93)
- helps to “shape the focus of their evaluation, their means of data collection, analysis, and interpretations, and their methods of dissemination based on their understanding of the critical elements of the context.” (Conner, 2012, p. 94)
- instead of being a “confounding condition”, a contextual issue “becomes a useful key to understanding what makes a program work” (Conner, 2012, p. 103)
- since context is changing, CA involves:
- an initial, intensive assessment
- briefer check-ins during the evaluation process
- 3 steps, based on the “three main evaluation steps: planning, implementation, and use/decision making” (Conner, 2012, p. 93)
- Evaluation Planning
- CA conducted during this phase to understand relevant aspects of context to inform the evaluation plan
- template for conducting a CA for evaluation planning (adapted from Table 6.1 in Conner et al., 2012):
| Area | Guiding Questions | Answers to Guiding Questions | Implications for Evaluation |
|---|---|---|---|
| General phenomenon/problem | What is the problem the program is addressing? | | |
| | How did it emerge? How long has it existed? | | |
| | What groups prompted concern about it? | | |
| | What is already known about it? | | |
| | What are the dominant methods used for understanding the phenomenon/problem? | | |
| | What tools exist for measuring change? | | |
| Intervention | Where is the program in its life cycle? | | |
| | How is the program structured? | | |
| | What are the different components and how do they fit in the broader environment? | | |
| | Who does the program serve? | | |
| | What are their characteristics, beliefs, culture, needs, and desired outcomes? | | |
| Broader environment around the intervention | What are the different layers of environment that affect and can be affected by the intervention? | | |
| | What aspects of these different climates are affecting the design and operation of the program? | | |
| | What are important historical, social, and cultural elements of the community in which the program is conducted? | | |
| | Are there political or social views that affect perspectives on the program, its clients, or decision makers? | | |
| Parameters of the evaluation | What are the primary and secondary evaluation questions and their implications for possible methodology and design choices? | | |
| | What resources are available to support the evaluation (e.g., budget, time frame, local evaluation capacity, evaluation ethos)? | | |
| Decision-making arena | Who are the main decision makers/users of the evaluation information? | | |
| | What are their views, values, and history about the program, and about evaluation? | | |
| | What is the larger political culture in which they work? | | |
| | What are the expectations of their organization? | | |
| | What are the expectations of citizens they serve regarding government programs, and about evaluation? | | |
| | What are the political expectations for evaluation? | | |
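(Also not from the articles: a rough sketch, in Python, of how the planning-phase template could be encoded as a reusable checklist, so the same structure can be revisited during the implementation and decision-making check-ins described below. The area names and questions come from the table above; the ContextItem class and everything else are my own invention.)

```python
# A reusable encoding of the context assessment worksheet: each guiding
# question carries slots for the "Answers" and "Implications" columns.
from dataclasses import dataclass

@dataclass
class ContextItem:
    question: str
    answer: str = ""         # "Answers to Guiding Questions" column
    implications: str = ""   # "Implications for Evaluation" column

PLANNING_TEMPLATE = {
    "General phenomenon/problem": [
        ContextItem("What is the problem the program is addressing?"),
        ContextItem("What is already known about it?"),
        # ...remaining questions from the table above...
    ],
    "Intervention": [
        ContextItem("Where is the program in its life cycle?"),
        # ...
    ],
    # ...the other three areas from the table above...
}

# Print a blank worksheet to work through with stakeholders.
for area, items in PLANNING_TEMPLATE.items():
    print(area)
    for item in items:
        print(f"  - {item.question}")
```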
- Evaluation Implementation
- build in some periodic re-assessments of the context during the implementation of the evaluation to check if anything has changed that could affect the evaluation
- can be quick, but should be done explicitly
- puts the evaluator in “a better position to adjust measurements to pick up changes and to capitalize on new design opportunities to detect program-related changes better” (Conner et al., 2012)
- template for conducting CA during evaluation implementation (adapted from Table 6.2 in Conner et al., 2012):

| Area | Guiding Questions | Answers to Guiding Questions | Implications for Evaluation |
|---|---|---|---|
| General phenomenon/problem | Have new aspects related to the phenomenon/problem been identified or arisen? | | |
| | Have we learned more about the phenomenon or the problem that may influence our approach? | | |
| | Has new knowledge been gathered through other research and evaluation that may have a bearing on this evaluation or on stakeholders’ receptivity to findings? | | |
| Intervention | Have new intervention components been added/modified/eliminated that affect the intervention? | | |
| | Has the level of intensity of the intervention changed because of funding increases or decreases? | | |
| Broader environment around the intervention | Have new relevant events, people, or issues arisen in the general environment in which the intervention is anchored? | | |
| | Do these new factors have implications for the intervention and/or its evaluation? | | |
| Parameters of the evaluation | Do the main evaluation components continue to be responsive to the relevant contextual factors? | | |
| | Have the budget, time, and so on changed in any way? | | |
| Decision-making arena | Have new organizations or individuals, with different perspectives, entered the decision-making arena, and do these new factors need to be addressed? | | |
| | Have the needs of decision makers changed in any way that might impact the evaluation or receptivity to the findings? | | |
- Decision Making
- CA now limited to 2 of the 5 areas (broad environment and decision-making arena; though may also consider phenomenon/problem re: making recommendations)
- template for conducting CA during decision making (adapted from Conner et al., 2012):

| Area | Guiding Questions | Answers to Guiding Questions | Implications for Evaluation |
|---|---|---|---|
| Broader environment around the intervention | Are the original stakeholders still relevant? | | |
| | What new stakeholders need to be added? | | |
| | Related to the content of recommendations that might be made, is the infrastructure in place and are resources (staff, materials, support) available to provide the actions and services that will be recommended? | | |
| | Might these resources be drawn away from other, unrelated programs, possibly jeopardizing them? | | |
| Decision-making arena | How has the arena changed since the outset of evaluation planning? | | |
| | Should other important stakeholders be included? | | |
| | How are decision makers responding to the evaluation? | | |
| | How are they using it? | | |
| | What elements receive the most attention from various stakeholders and decision makers? | | |
| | How do their values, position, or history affect their use of the information? | | |
| | Are there other dissemination or communication strategies that might increase their use? | | |
| General phenomenon/problem | Have we learned more about the phenomenon or the problem that may influence our recommendations? | | |
- some limitations and challenges to CA:
- CA “cannot be rigidly defined and requires subjective judgements” (but “sharing the results of a context assessment with the primary stakeholders for an intervention can help counter inherent evaluator biases”) (Conner, 2012, p. 104)
- “does not guarantee that all relevant factors will be identified” (but certainly more will be identified than if you do not undertake an explicit context assessment) (Conner, 2012, p. 104)
- requires extra time/energy (but the benefits of CA should make it worth that time/energy) (Conner, 2012, p. 104)
Final Thoughts:
- “Context will influence the evaluators’ views and the evaluation will influence the context” (Conner, 2012, p. 93)
- “Most evaluators have moved from early experimental, “hands off” tradition where they were concerned that their involvement might change the program or threaten the perceived neutrality of the evaluation to one in which evaluators are immersed in the program, [giving] evaluators the potential to consider and learn about context” (Fitzpatrick 2012, p. 13)
References:
Conner, R. F., Fitzpatrick, J. L., & Rog, D. J. (2012). A first step forward: Context assessment. New Directions for Evaluation, 135, 89–105.
Fitzpatrick, J. L. (2012). An introduction to context and its role in evaluation practice. New Directions for Evaluation, 135, 7–24.
Rog, D. J. (2012). When background becomes foreground: Toward context-sensitive evaluation practice. New Directions for Evaluation, 135, 25–40.
Footnotes
[1] There are also subdimensions that may be applicable, including “demographic issues of gender, race, and language, as well as issues of power differences, class, other denominators of equity, and sociopolitical status” (Conner, 2012, p. 90).