This year, the American Evaluation Association live streamed a bunch of their conference sessions for free! I didn’t get to watch as many as I would have liked,[1] but I did see a few, and here are some summary notes that I took from the sessions.
Jane Davidson
- distinguished between non-evaluative vs. evaluative questions
| Non-evaluative questions | Evaluative questions |
| --- | --- |
| How many people received the program? | How good was the program’s reach? (e.g., did it reach the people it should have? Did it reach underrepresented people?) |
| Was the program implemented as intended? | How well was the program implemented? (We shouldn’t just assume the plan was good and that implementing the plan as intended was the best thing to do. How well was the program contextualized? Did it adapt appropriately in response to what occurred as the program continued?) |
| What effect did the program have on its participants? | How substantial and valuable were the effects on participants? |
- The “non-evaluative questions” merely describe factual evidence, and evidence alone cannot answer an evaluative question. (If you are asking only non-evaluative questions, you are doing empirical research, not evaluation.)
- Indicators don’t give you answers to evaluative questions
non-evaluative facts + definitions of quality and value → evaluative conclusions (a toy sketch of this logic follows this list)
- “Evaluative rubrics paint a picture of what the evidence should look like at different levels of performance”
- What will the constellation of evidence look like for exemplary, good, or bad results?
- creation of rubrics is generally done in a participatory way – stakeholders bring their various types of expertise (content, politics, experience, etc.), the evaluator brings evaluation expertise, unpacks the ideas from the stakeholders, and guides them to make the rubric
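To make the “facts + definitions of quality and value → evaluative conclusions” logic concrete, here’s a minimal Python sketch of a rubric being applied to descriptive evidence. The indicator names, levels, and thresholds are all hypothetical, invented purely for illustration – they’re not from Davidson’s talk.

```python
# Hypothetical illustration of the evaluative logic above:
# non-evaluative facts + definitions of quality/value -> evaluative conclusion.
# All indicator names, levels, and thresholds here are invented for illustration.

# Non-evaluative facts: descriptive evidence about program reach
facts = {
    "reach_rate": 0.72,              # fraction of the target population reached
    "underrepresented_reach": 0.55,  # fraction of underrepresented groups reached
}

# Definitions of quality/value: a rubric describing what the evidence
# should look like at different levels of performance
rubric = [
    ("exemplary", lambda f: f["reach_rate"] >= 0.9 and f["underrepresented_reach"] >= 0.8),
    ("good",      lambda f: f["reach_rate"] >= 0.7 and f["underrepresented_reach"] >= 0.5),
    ("poor",      lambda f: True),   # fallback: evidence meets no higher level
]

def evaluate(facts, rubric):
    """Combine the facts with the rubric to reach an evaluative conclusion."""
    for level, meets in rubric:
        if meets(facts):
            return level

print(evaluate(facts, rubric))  # -> good
```

The point of the sketch is just that the conclusion (“good”) comes from the rubric, not from the facts alone – the same facts under a stricter rubric would yield a different conclusion.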
Exemplary Uses of Theory in Evaluation Practice
- theories of programs
- theories of evaluation
- reductionism (transdisciplinary) – breaking the system down into its components
- e.g., goal attainment: intervention → outcome (evaluation focuses on determining whether the intervention causes the outcome; the “experimental evaluation approach”)
- pros: rigour of evaluation; scientific reputation
- cons: neglects context, assumes efficacy = effectiveness (in the real world)
- systems thinking (transdisciplinary) – “viewing the situation holistically, as opposed to reductionistically”
- pros: better explains how a program works, accounts for synergies/emergent behaviours
- cons: information overload, difficulties in data analysis
- pragmatic synthesis:
- the above two are extremes of a theoretical spectrum
- most real-world programs are in the middle
- middle-ground programs differ in complexity from both reductionism and systems thinking
- theories of programs: action model/change model (in Chen’s book)
- Discussant Mel Mark: it makes sense to be “multi-lingual” when it comes to evaluation theory – understand a variety of theories and ask what makes sense in a given case
Closing Session
The closing session was a series of speakers summarizing their key learnings. Here are the ones that jumped out at me:
- the importance of evaluating what policies actually do and not just their rhetoric (it was noted that programs and policies can allow racism to continue and even promote it), and the importance of being aware of our own biases
- evaluation can have an impact because:
- it improves the program
- it has an impact itself
- when a program has a weak theory, often evaluation becomes an intervention, but when a program has a strong theory, evaluation serves more as a facilitation
- when you do an evaluation, do you position your work within the context of the evaluation community?
- evaluation standards
- evaluator competencies
- cultural competencies
- MQ Patton – when he introduces himself in the context of his work, he doesn’t say “I’m an evaluator” or talk just about his skills, but that “I am a member of an international evaluation community”
- International Year of Evaluation has allowed a platform for promoting evaluation as a “global force for good”
- “Act as if what you do matters. It does” – William James
Footnotes
[1] Work meetings got in the way of some of them; some of the sessions I wanted to attend ended up not being live streamed because the presenters decided they didn’t want to be live streamed after all; and some of the sessions started at 5 am this morning, and though I did wake up, I didn’t manage to stay awake enough to pay attention.