Since it’s been a while since I last wrote a blog posting in this series, and since I stopped in the middle of the “technical competencies” domain, let’s review where we are at. The first competency in the “Technical Domain” was about figuring out the purpose and scope of an evaluation – what is the evaluation trying to do and what ground is it going to cover (and what is it not going to cover). The next competency was about figuring out if a program is in a state in which it is ready to be evaluated, and the third competency was about making program theories explicit. This brings us to the fourth competency in the technical domain:
2.4 Frames evaluation topics and questions
People often get confused when we say “evaluation questions”, thinking that we are referring to a question you might ask in an interview or survey (like “were you satisfied with the services you received?”). But the “evaluation questions” we are referring to here (sometimes referred to as “Key Evaluation Questions” (KEQs)) are higher-level than that; they are the overarching question (or a few questions) that guide the development of the evaluation.
An important thing to remember about evaluation questions is that they should be evaluative. Not just “what happened as a result of this program?” but “how ‘good’ were the things that happened as a result of the program?” (where “good” needs to be fleshed out – e.g., what do we consider “good”? how “good” is good enough to be considered “good”?).
The Better Evaluation website gives us some useful tips on developing KEQs:
- they should be open-ended (not something that you can answer with “yes” or “no”)
- they should be “specific enough to help focus the evaluation, but broad enough to be broken down into more detailed questions to guide data collection”
- they should relate to the intended purpose of the evaluation
- 7 +/- 2 is a good number of questions to aim for
- you should work with your stakeholders to develop them
I think it’s really important to think about who gets to decide on what the evaluation questions are. Since the rest of the evaluation will be built based on the questions, whoever gets to decide on the questions holds a lot of power. This could be a whole blog posting topic on its own, but in the interest of actually getting this posted, I think I will leave that for another day.
A nice resource on working with your stakeholders to develop evaluation questions is Preskill & Jones’ A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions. The CDC’s Good Evaluation Questions Checklist can also be helpful in thinking through/improving your evaluation questions.
Image source: Posted on Flickr with a Creative Commons license.