This past Friday, the BC & Yukon chapter of the Canadian Evaluation Society hosted a day-long conference on “Health Evaluation: Evaluation as a Learning Process”. As an evaluator in the health sector, I found this very relevant to my interests. And I didn’t just attend – I was a presenter as well. My session was a panel on three related projects from my previous job:
| Title | Authors |
| --- | --- |
| Complexity, Collaboration, and Iteration: An Evaluation Framework for Build Healthy Communities at Fraser Health | Dr. M. Elizabeth Snow, Tatsiana Dudkina, Samantha Tong, Dr. Victoria Lee |
| Using Partnership Evaluation to Increase Impact of Healthier Community Partnerships in Fraser Health | Judith Eigenbrod, Marina Irick, Christiana Wall, Lynn Nowoselski, Onyinye Adibe, Judi Mussenden, Dr. Helena Swinkels, Dr. Malcolm Steinberg |
| Using Developmental Evaluation in a Complex Partnership-based Health Promotion Initiative | Judi Mussenden, Richelle Foulkes, Christiana Wall, Dr. M. Elizabeth Snow, Dr. Helena Swinkels |
We got a really great turnout for our session and lots of great comments and questions! Since I’m no longer at the job where I worked on those projects, I won’t get to continue with this work, but I’m very interested to see where my former co-workers take it. (In particular, the first of those three presentations was on an evaluation framework that I created along with a Masters of Public Health student who did her practicum placement with me this past summer. I left that job just after we finished creating the framework, so seeing the framework turned into evaluation plans, and then those plans implemented, is something I’ll have to watch from afar.)
In addition to giving our presentations, I also got to attend some other great sessions. Here are my notes!
Messy Elegance: Evaluation as a Learning Process by Ben Kadel & April Struthers (Keynote Presentation)
- wabi-sabi: Japanese concept about things of beauty being “imperfect, impermanent, and incomplete” [evaluations take place in the real world, not tightly controlled lab settings, so we need to embrace the “messiness” of the world in which we work]
- psychological inoculation: like a vaccine, you expose people to a small amount of something so they can prepare for exposure to a bigger amount of it. E.g., introduce stakeholders to the idea that there will be some discomfort with an evaluation (or with a change) and that this doesn’t mean something is wrong (as we naturally associate discomfort with danger), but it is just a normal part of the process. You can also tell them “I can take you where you want to go, but it’s going to cost you a little bit of discomfort”.
SMART FUND – Using Common Outcomes Measurement in Health Promotion by Jolene Landsdowne & Dr. Marina Niks, Vancouver Coastal Health
- Smart Fund
- created a single set of indicators for all the programs they fund, so that they can aggregate their data across programs and look at the overall impact of their funding; they wanted indicators based on the research evidence, and they wanted to provide support to non-profits who previously had to develop their own indicators (even though this is not typically the skill set of non-profits) [see the rough sketch after these notes]
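To make the “common indicators” idea concrete, here’s a minimal sketch of how reports that share a single indicator set can be rolled up across funded programs. This is my own illustration, not SMART Fund’s actual system – the program names, indicator names, and numbers are all invented:

```python
# Hypothetical sketch: pooling a common indicator set across funded programs.
# Program names, indicator names, and values are invented for illustration.
import pandas as pd

# Because every funded program reports the same indicators,
# their reports can be stacked into one table.
reports = pd.DataFrame([
    {"program": "Program A", "indicator": "participants_reached", "value": 120},
    {"program": "Program A", "indicator": "participants_reporting_improved_health", "value": 45},
    {"program": "Program B", "indicator": "participants_reached", "value": 300},
    {"program": "Program B", "indicator": "participants_reporting_improved_health", "value": 180},
])

# Funding-wide impact becomes a simple roll-up by indicator.
overall = reports.groupby("indicator")["value"].sum()
print(overall)
```

If each non-profit had developed its own indicators, no such roll-up would be possible without mapping every program’s measures onto a common scheme after the fact.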
Closing the Loop: Promoting Uptake of Evaluation Recommendations by Derek Wilson, Dr. Chris Lovato, and Tamiza Abji, Faculty of Medicine, University of British Columbia
- interested in ensuring that recommendations coming from evaluations get implemented (and that evaluation reports don’t just sit on a shelf)
- each evaluation has an “accountable committee” – recommendations are presented to that committee and a formal motion to accept the recommendations is made; committee assigns the recommendations to a specific person or committee to be responsible for each of them
- date set to follow up on the recommendations
- built a database into which all recommendations coming from evaluations done by their unit go, including:
- the recommendation
- who is responsible for implementing it
- when the evaluator will follow up with them on it
- importance of the recommendation (e.g., critical, important, would be nice)
- status of the recommendation (e.g., completed, partially completed, not completed)
- comments (e.g., could comment on why a recommendation has not been implemented – is there a barrier to implementing it? has the situation changed such that the recommendation is no longer relevant?)
- heatmapping of both importance (so you can see if you have a lot of critical recommendations) and status (so you can see if you have a lot of overdue recommendations that you haven’t acted on)
- people will know that others will see if they haven’t implemented their recommendations
- an oversight committee (Program Evaluation and Program Improvement (PEPI) Committee) to whom issues can be escalated (but people responsible for recommendations always approached first if a recommendation is not acted on)
- one audience member talked about a process they use where evaluation findings are used to identify “opportunities” (rather than “recommendations”) and these opportunities are what’s taken to the accountable committee to assess (costs, benefits, risks) and then decide on recommendations
- [I think I can use a lot of these ideas in the project I’m currently working on – in my evaluation database, I will build a section for tracking (and heatmapping!) recommendations; I think I will also use the evaluation findings –> opportunities –> assess opportunities –> recommendations process. A rough sketch of such a tracking record follows these notes.]
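As a thinking aid for my own database, here’s a minimal sketch of a recommendation-tracking record and the importance-by-status counts that would sit behind a heatmap view. The field names mirror the ones described in the session, but the structure and the sample data are my own invention, not the presenters’ actual database:

```python
# Hypothetical sketch of a recommendation-tracking record plus a simple
# "heatmap" summary (counts by importance and status). Field names follow
# the session's description; the example records are invented.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Recommendation:
    text: str            # the recommendation itself
    responsible: str     # person or committee responsible for implementing it
    follow_up_date: str  # when the evaluator will follow up
    importance: str      # e.g., "critical", "important", "would be nice"
    status: str          # e.g., "completed", "partially completed", "not completed"
    comments: str = ""   # e.g., barriers, or why it's no longer relevant

recommendations = [
    Recommendation("Revise the intake form", "Admissions Committee", "2016-06-01",
                   "critical", "not completed", "waiting on IT"),
    Recommendation("Add peer mentors", "Student Affairs", "2016-09-01",
                   "important", "partially completed"),
]

# Cross-tab of importance x status -- the raw counts behind a heatmap view.
cells = Counter((r.importance, r.status) for r in recommendations)
for (importance, status), count in sorted(cells.items()):
    print(f"{importance:>12} | {status:<20} | {count}")
```

The heatmap itself could then just colour-code these counts, so overdue critical recommendations jump out at a glance.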
Capturing System Transformation at Island Health: A Blueprint for Evaluating the Movement towards Integrated Community-Based Health and Care in a Complex System by Shelley Tice, Sherry Gill, Kate Harris, Island Health
- lots of knowledge translation tools, toolkit of methods to help working groups understand how to do evaluations
- Tiki Toki – software to create graphic timelines
Learning Across Multiple Evaluations (Panel Discussion)
- Anne Baldwin of Canada Health Infoway (CHI) noted that they recommend doing benefits evaluation 18-24 months post-go-live (as it takes longer than you’d think to see the expected benefits)
- CHI Benefits Evaluation Technical Indicators Report, v. 2.0 – an excellent resource listing a bunch of indicators that have been used across the country
- CHI did a “practice challenge” – they asked physicians with and without electronic medical records (EMRs) to look up a type of patient (e.g., get a list of all your patients with diabetes, or all your patients on a certain drug, etc.) and found that those with EMRs were able to do this 30x faster than those without (it was also hard to recruit physicians without EMRs to even do the challenge, since the task is so time-consuming with paper records)
- Michael Smith Foundation for Health Research (MSFHR) has recently completed an evaluation strategy for their foundation