Process Use of Evaluation

Just noticed this in my drafts folder – some notes on process use in evaluation, drawn from papers I’d been reading on the topic. Figured I should actually publish it.

Definitions of process use:

  • “the utility to stakeholders of being involved in the planning and implementation of an evaluation” (Forss et al, 2002, p. 30)
  • Patton describes “process use” as “changes resulting from engagement in the evaluation process and learning to think evaluatively. Process use occurs when those involved in the evaluation learn from the evaluation process itself or make program changes based on the evaluation process rather than findings. Process use also includes the effects of evaluation procedures and operation, for example, the premise that “what gets measured gets done”, so establishing measurements and setting targets affects program operations and management focus.” (Patton, 2008, p. 122). He also defines it as “individual changes in thinking, attitudes, and behavior, and program or organizational changes in procedures and culture that occur among those involved in evaluation as a result of learning that occurs during the evaluation process.” (Patton, 2008, p. 155)
  • 6 types of process use (pp. 158-9):
    • infusing evaluative thinking into organizational culture
    • enhancing shared understanding
    • supporting and reinforcing program intervention – “the primary principle of intervention-oriented evaluation is to build a program delivery model that logically and meaningfully interjects data collection in ways that enhance achievement of program outcomes, while also meeting evaluation information needs” – while traditional research would view measurement that affects the outcome as contamination, if evaluation is part of the intervention, for the purposes of the evaluation of the program “it does not matter […] how much of the measured change is due to [the data collection] vs actual [program] activities, or both, as long as the instrument items are valid indicators of desired outcomes” (Patton, 2008, p. 166). “A program is an intervention in the sense that it is aimed at changing something. The evaluation becomes part of the programmatic intervention to the extent that the way it is conducted supports and reinforces accomplishing desired program goals” (Patton, 2008, p. 166)
    • instrumentation effects and reactivity
    • increasing engagement, self-determination, and ownership
    • program and organizational development
  • In the very interesting article “Process Use as a Usefulism”, Patton (2007) describes how he thinks of process use as a “sensitizing concept”
  • sensitizing concept (Patton, 2007, pp. 102-103):
    • “can provide some initial direction to a study as one inquires into how the concept is given meaning in a particular place or set of circumstances”
    • “Such an approach recognizes that although the specific manifestations of social phenomena vary by time, space, and circumstance, the sensitizing concept is a container for capturing, holding, and examining these manifestations to better understand patterns and implications”
    • “raises consciousness about something and alerts us to watch out for it within a specific context. This is what the concept of process use does. It says things are happening to people and changes are taking place in programs and organizations as evaluation takes place, especially when stakeholders are involved in the process. Watch out for those things. Pay attention. Something important may be happening.”

Types of Use of Evaluation

  • symbolic use (a.k.a. strategic use or persuasive use):
    • “evaluation use to convince others of a political position” (Peck & Gorzalski, 2009, p. 141)
    • “use of knowledge as ammunition in the attainment of power or profit” (Straus et al, 2010)
  • conceptual use:
    • “to change levels of knowledge, understanding, and attitude” (Peck & Gorzalski, 2009, p. 141)
    • process use: “knowledge gained through the course of conducting program evaluation” (Peck & Gorzalski, 2009, p. 141)
  • instrumental use:
    • “direct use of evaluation’s findings in decision making or problem solving” (Peck & Gorzalski, 2009, p. 141)
    • “to change behaviour or practice” (Straus et al, 2010)
  • Forss et al (2002) cite Vedung (1997) as identifying 7 ways that evaluations can be used: “instrumentally, conceptually, legitimizing, interactively, tactically, ritually, and as a process” (p. 31)
  • Forss et al identify 5 different types of process use:
    • learning to learn
      • “Patton (1998) wrote that the evaluation field has its own particular culture, building on norms and values that evaluators take for granted, but which may be quite alien to people embedded in the culture of another profession. Patton (1998: 226) suggests that these values include ‘clarity, specificity and focusing, being systematic and making assumptions explicit, operationalising programme concepts, ideas and goals, separating statement of fact from interpretations and judgments’.” (Forss et al, 2002, p. 33, emphasis mine)
        • I checked out the original source on this – the direct quotation is: “that evaluation constitutes a culture, of sorts. We, as evaluators, have our own values, our own ways of thinking, our own language, our own hierarchy, and our own reward system. When we engage other people in the evaluation process, we are providing them with a cross-cultural experience. They often experience evaluators as imperialistic, that is, as imposing the evaluation culture on top of their own values and culture—or they may find the cross cultural experience stimulating and friendly. In either case, and all the spaces in between, it is a cross-cultural interaction […] This culture of evaluation, which we as evaluators take for granted in our own way of thinking, is quite alien to many of the people with whom we work at program levels. Examples of the values of evaluation include: clarity, specificity and focusing; being systematic and making assumptions explicit; operationalizing program concepts, ideas and goals; distinguishing inputs and processes from outcomes; valuing empirical evidence; and separating statements of fact from interpretations and judgements. These values constitute ways of thinking that are not natural to people and that are quite alien to many” (Patton, 1998, pp. 225-6, emphasis mine)
      • values of evaluation include “enquiry”, “a structured way of thinking about reality and generating knowledge” (Forss et al, 2002, p. 33)
      • “to engage in evaluation is thus also a way of learning how to learn” (Forss et al, 2002, p. 33)
    • developing networks – evaluation activities can bring together people who don’t usually work together
    • creating shared understanding
      • working together “help[s] people understand each other’s motives, and to some extent also to respect the differences” (Forss et al, 2002, p. 35)
      • note that “the usefulness of evaluation hinges directly upon the quality of the communication in evaluation exercises” (Forss et al, 2002, p. 35)
    • strengthening the project
      • when the evaluator works to understand the program, it helps stakeholders themselves to get a “clearer understanding of the project and possibly with a new resolve to achieve the project’s aims” (Forss et al, 2002, p. 36)
      • “Patton (1998) calls this ‘evaluation as an intervention’; the evaluation becomes an intentional intervention supporting programme outcomes.” (Forss et al, 2002, p. 36)
      • “The way the team formulates questions, discusses activities and listens to experiences, may influence activities at the project level.” (Forss et al, 2002, p. 36)
    • boosting morale
      • “reminds them of the purposes they work for, and allows them to explore the relationship between their own organization and the […] impact that is expected” (Forss et al, 2002, p. 37)
      • “the fact that attention is shown, the project is investigated, viewpoints are listened to and data are collected could presumably give rise to similar positive effects as […] Hawthorne” (Forss et al, 2002, p. 38) [though I would note that in some organizations, evaluations are only conducted when a program is seen to be failing/in trouble and the evaluator is sent in to figure out why or to decide if the program should be closed – this could de-motivate people. Also, my experience has been that if the people from whom data is collected don’t see what is done with it, they don’t feel listened to and feel like they’ve been asked to do the work of data collection for no reason – and that’s demotivating. So it’s really about the organization’s approach to evaluation and how they communicate]

  • because process use means that the evaluation is having an effect on the stakeholders, “an evaluation may become part of the treatment, rather than just being an independent assessment of effects” (Forss et al, 2002, p. 30)
  • “an evaluation is not neutral, it will reinforce and strengthen some aspects of the organization, presumably at an opportunity cost of time and money” (Forss et al, 2002, pp. 38-9)
  • “the report itself will normally provide little new insight. Most discoveries and new knowledge have been consumed and used during the evaluation process. The report merely marks the end of the evaluation process.” (Forss et al, 2002, p. 40)
  • The “merit” of evaluation “lies […] in discovering unknown meanings, which help stakeholders to develop a new self-awareness, and in implementing new connections between people, actions, and thoughts” (Bezzi, 2006, cited in Fletcher & Dyson, 2013)
  • Fletcher & Dyson (2013) describing an evaluation that one of them had done: “The first evaluation challenge facing the first author was in helping the project’s diverse range of partners to develop a shared understanding of what the project would be. As is so often the case in project development, there had been a primary focus on securing funding and not on the real-life details of the project itself. The project logic, its conceptualization of culture change processes and, most importantly, the why and how of this logic and concept, had not been articulated – despite the fact that articulation of such project logic and culture change conceptual framework would, in turn, affect the overall defined aim and anticipated outcomes. As argued by Weiss (1995), when interventions do not make such things clear (either to themselves, or to others), the evaluation task becomes considerably more challenging. Given the already discussed nature of the collaborative research approach, it was fitting for the evaluator to assist in such articulation in order to ensure that the evaluation plan was both coherent with and relevant to such logic and conceptualization.” (p. 425)

References

  • Fletcher, G., & Dyson, S. (2013). Evaluation as a work in progress: Stories of shared learning and development. Evaluation, 19(4), 419-430.
  • Forss, K., Rebien, C. C., & Carlsson, J. (2002). Process use of evaluations: Types of use that precede lessons learned and feedback. Evaluation, 8(1), 29-45.
  • Patton, M. Q. (1998). Discovering process use. Evaluation, 4(2), 225-233.
  • Patton, M. Q. (2007). Process use as a usefulism. New Directions for Evaluation, 2007(116), 99-112.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
  • Peck, L. R., & Gorzalski, L. M. (2009). An evaluation use framework and empirical assessment. Journal of MultiDisciplinary Evaluation, 6(12), 139-156.
  • Straus, S. E., Tetroe, J., Graham, I. D., Zwarenstein, M., Bhattacharyya, O., & Leung, E. (2010). Section 3.6.1: Monitoring knowledge use and evaluating outcomes of knowledge use. In Knowledge translation and commercialization. Retrieved from http://www.cihr-irsc.gc.ca/e/41945.html