Presenter: Lyn Shulha, PhD
Canadian Evaluation Society webinar
Date: 11 Dec 2013
- editor of the latest edition of the evaluation standards book
- evaluation vs. research
- evaluation: systematic inquiry
- systematic in that it’s logical
- inquiry – asking questions, resolving doubt
- evaluation is often perceived as risky & threatening to participants
- research – how is my question situated within what is already known
- most evaluators have their roots in research (they are “accidental evaluators”)
- both research & evaluation lead to the creation of knowledge
- “research & evaluation are cousins” – same decision making processes (data collection & analysis)
- “who owns the question?” and “how will the findings be used?” are two questions that help differentiate evaluation and research [though I think the line blurs when you start talking about participatory/community-based research]
- but they create different kinds of knowledge: research is about uncovering previously unknown information, while evaluation is about “making judgments about the merit (intrinsic value of the program), worth (getting your money’s worth?), and significance (valued by users) of a program, project or program component”
- 3 indicators of high quality research
- fidelity to methodology (reliability & validity for quant; explicitness, dependability, transparency, adequacy, and trustworthiness for qual)
- positive peer review (publications) (do others judge your research to be high quality?)
- history of continued funding (do people judge your proposed research to be worthwhile?)
- the evaluation standards (currently in their 3rd edition as a book) are intended to support design and accountability in evaluative inquiry
- there are also the CES evaluator designation’s skill set and other guiding principles for evaluation
- Joint Committee on Standards for Educational Evaluation (JCSEE) – coalition of stakeholders
- sponsoring members include CES and AEA (both were in fact founding members)
- standards have evolved over the 3 editions
- the first edition focused on methods; the current edition includes culture, context, and attention to and communication with stakeholders – evaluation accountability was also added in this edition
- intended to inform those who commission, conduct or use evaluations (helps to demonstrate rigour of evaluation process)
- 5 dimensions of quality:
- utility – 8 standards associated
- feasibility – 4 standards associated
- propriety – 7 standards associated
- accuracy – 8 standards associated
- accountability – 3 standards associated
- standards are:
- consensus statements (norms for professional practice)
- operationalize the attributes of quality
- provide guidance in evaluation decision making
- suitable for use across program contexts
- standards are available online for free – in the book, there’s more detail and background
- establishing and maintaining the evaluator’s credibility is important
- note that just having experience in a program or field does not make you able to evaluate it – you need to have the skills and competencies of evaluation
- there is overlap between the attributes (e.g., work we do to promote accuracy can also promote utility)
- some of the standards are at odds with each other (need to figure out how to balance them)
- standards are not a checklist (using more standards is not necessarily better)
- optimizing one standard may reduce emphasis on another
- quality evaluations develop an argument for the use of the standards
- you need to consider which standards are most appropriate to do quality work for a given project in its given context
- the functional table of standards
- evaluation is not a linear process – sometimes you have to revisit previous decisions as you go along (e.g., negotiating and developing evaluation purposes and questions) – many programs operate in adaptive environments, so what they are doing changes as the evaluation goes along
- growing use of participatory methods suggests there might need to be a new standard around “meaning making”
- many ways to describe a program (and the functional table illustrates which standards help with that) – e.g., logic model, systems map, fuzzy logic model
- can use the functional table for meta-evaluation (evaluating your evaluation)
- takeaways:
- clients own the evaluation question
- focuses on information needs of clients
- uses research methods most appropriate to clients’ questions (don’t have to be an expert in all methods – working in teams allows you to put the right people with methods skills on a given project)
- focuses on generating processes, findings, and judgments that are useful to clients
- standards are practical – they communicate the rigour associated with evaluation practice, make our criteria for professional practice explicit (allowing us to be accountable), and encourage us to monitor and be accountable for our decision making
- Information from Question Period:
- Katherine Donnelly recently completed a PhD dissertation related to evaluation & knowledge translation
- the Joint Committee’s mandate is to review the standards every 5 years and publish an updated version every 10 years