1.4 Considers the well-being of human and natural systems in evaluation practice.
For this competency, I would say I’ve focused much more on the “human” than the “natural”. I see some overlap with the stuff I talked about last week, as considering the well-being of humans includes ethical concerns like, for example, maintaining confidentiality for participants in the evaluation.
But looking at this competency has gotten me thinking about the well-being of human systems – which leads me back to my thoughts on learning more about equity, which I also mentioned last week. Within human systems, some groups disproportionately receive the benefits of programs/services/initiatives/systems – and, similarly, some groups are disproportionately harmed by them. Coincidentally, in the AEA eStudy webinar last week, Jonny talked about how systems often have disproportionate distributions of benefits, and that's something that we as evaluators need to pay attention to – we need to think about the values underlying our evaluations when we decide whether a program/initiative is a "success". I think Michael Quinn Patton also talked a bit about this on the Eval Cafe podcast episode that just came out this week on Principles-Focused Evaluation (PFE), where he was talking about the difference between rules and principles using the example of "do no harm" – but if we are interested in making systems more equitable, then aren't we technically "harming" the more advantaged by taking away some of the power/benefits that they currently have in order to distribute benefits more equitably? I think that if you care about equity, you'd say that in the interest of fairness/justice, that's OK, and the rule of "do no harm" is actually too rigid. (It's entirely possible that I'm misremembering where I heard that example – I'm in the middle of reading the PFE book and have been reading/listening to a bunch of other stuff, so I may be conflating things. My apologies to all if I've mixed that up.) Is having some people benefit good enough to say a program is successful, or should we be looking at who is benefiting – and who is not – and who is being harmed – and who is not – across the whole human system? And how do we include that in our evaluations?
Similarly, I think that paying attention to the unintended consequences of a program/initiative is a really important part of an evaluation. If we are only looking for the ways in which the program designers hoped that the program would be beneficial, but didn’t hold space in our evaluations to look out for ways that the program may cause harm, we aren’t really doing a very comprehensive evaluation.
As for considering the well-being of natural systems, this is an area in which I have not done a lot of work. Like with the equity stuff I talked about last week, I think the types of evaluations that I do (in the healthcare sector) don't have an obvious link to environmental issues like they would if I were doing evaluation work with organizations working directly in the environmental sector. But every program/initiative exists within, and interacts with, the natural world, and we are in the middle of a climate crisis. So I think it's worth some time to reflect on how I can better consider the well-being of natural systems in my evaluation practice.
There are definitely ways that I try to do environmentally-responsive things in my day-to-day work – taking transit to the office instead of driving, not printing things unnecessarily, using a travel mug for my coffee every day, recycling and composting, not drinking bottled water. But honestly, these things are easy to do, and I don't know how much impact my individual actions in these regards really have. And there are other things that I do that I know are harmful to the environment – flying to conferences and to sites to collect data, for example, has a huge carbon footprint.
And then there is the idea of how to incorporate considerations of the environmental impact of the programs/services I evaluate. For example, in the project I'm evaluating on switching from paper patient charts to electronic patient charts, one thing that could be evaluated is the saving of paper by going electronic vs. the vast energy costs of the server space required to go electronic. Is that something I could include in an evaluation, especially considering that it isn't really within the scope of the evaluation per se? And how would you compare those two impacts? Clearly, this is a space where I have lots of room to grow.
My notes from part 2 of the American Evaluation Association (AEA) eStudy course being facilitated by Jonny Morell.
one person commented that “measuring collective impact of small effects of multiple programs is almost always shot down by stakeholders in favour of measuring process outcomes” and Jonny talked about how evaluators are typically engaged to evaluate a single program, so if we tried to measure outcomes from other programs, we’d get shot down for wasting resources and going outside of the scope that we were hired for
this got me thinking about boundaries (something I've been reflecting on a lot lately as part of a group that I'm working with). The "scope" of a project is a boundary, and it makes sense for a program to bound this scope to the work that their program is doing. They have limited funds and don't want to spend them evaluating the broader system in which they are working. But the organization operates within that system, and if change really does happen by a bunch of programs contributing little bits of outcomes that all accumulate – how would we ever see that?
There are "collective impact" initiatives and I've seen evaluations of those, but the ones I've seen tend to be cases where a single funder is funding a bunch of programs and wants to evaluate across all of them, with all the programs setting out to improve the same thing.
But what about programs that aren't linked through a collective impact project – what about all the programs running in the world that affect similar things?
[Crazy idea: what if someone (like a philanthropic foundation) funded an evaluation of the impact of an entire system that relates to some issue – say, poverty, for example – with the freedom to go and investigate whatever programs/services/initiatives the evaluative process uncovers. Is anything doing something like this?]
Timing of Effects
we don't often talk about "how long will it take?" for these effects in a program model to happen. So even if we say something is an "intermediate effect", how long does that actually mean? Often effects don't happen as soon as people expect, or as soon as they would like.
also, sometimes things need to hit a tipping point, so you might not see effects for a long time, and then you see a big effect. This challenges people’s “common sense” feeling that things will be linear (you put in a bit of work, you get a bit of effect, you put in more work, you get a bit more effect).
"Success may mean that the rich get richer. In a very successful program, benefits may not be symmetrically distributed. The evaluation methodology? Straightforward. The politics and values? Not so much."
example: an agriculture program that leads to increased crop yield is expected to improve family standard of living. But does that get evenly distributed?
Looking at distributions is important!
Three ways to use complexity
e.g. thinking about a program as an organism evolving in an environment
instrumentally – would need to do a lot of math and specific data
conceptually – how might this program change? does it become more or less adaptable to its environment? does this program compete for resources from the environment with other programs? thinking about the program in this way changes how I think about the program
metaphorically – e.g. chaos has a very technical meaning, but it's not useful in evaluation because we never let chaos happen – we never let feedback loops go on uncontrolled. We intervene when things start to go off the rails. But the notion of chaos – repeated patterns that can't be controlled or predicted – can still be a useful metaphor
[I don’t understand that difference between conceptual and metaphorical use – going to post a question about this on the workshop discussion site]
Cross-cutting themes in Evaluation
whenever you are thinking about complexity, need to think about:
how change happens
without thinking about complexity, 3 ways we think about change:
from the outside: take a systems view; events in a program’s environment makes a difference
expected causal relationships: identified model content in terms of elements and relationships
traditional social science theories: usual paradigmatic stuff depending on your background (e.g., economics, sociology)
when you add complexity to the mix:
emergence – change cannot be explained by the behaviour of a system’s parts
sensitive dependence – small (sometimes random) changes can affect an entire trajectory over time. We usually think of a linear model – we only care about groups, we want a large n, we don’t want to see those little other things
limitations of models – models simplify, causal dynamics are going on that are unknown. (explicit or implicit models, quant or qual). Remember “all models are wrong but some are useful”. They help us identify things we care about and come up with methods, but need to remember that they aren’t perfect
evolutionary dynamics – think of programs as organisms evolving in a diverse ecosystem. Helps him to think of this as a metaphor
preferential attachment – on a random basis, “larger” becomes a larger attractor.
Jonny thinks it's useful to think about each of these in an evaluation – you may not necessarily need to use them, but it's worth thinking about whether they could be useful
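Sensitive dependence, in particular, is easy to demonstrate with a toy model. The sketch below (my own illustration, not from the eStudy) iterates the logistic map from two starting points that differ by one part in ten million and shows the trajectories ending up nowhere near each other:

```python
# Toy illustration of sensitive dependence: the logistic map.
# Two starting values that differ by one part in ten million
# end up on completely different trajectories.

def logistic_map(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-7)

# The tiny initial difference grows until the runs bear no
# resemblance to each other.
print(f"step 50: {a[-1]:.4f} vs {b[-1]:.4f}")
```

This is the "small (sometimes random) changes can affect an entire trajectory" point in miniature: a measurement error we would normally shrug off completely changes where the system ends up.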
You can use simple methods to evaluate in situations of complexity
e.g., attendees of a program may affect their (non-attendee) friends. And their friends may also know each other. And the attendees may affect one another too. And maybe there are community-wide effects too. And maybe those effects might feedback and change the program too.
you could track the program over time (to see if there is a feedback loop from community to the program)
you could interview staff about their perceptions of needs
there are unpredictable changes in the community – you could do a content analysis of community social media; you could do open-ended interviews of community members
program theory – you can specify desired outcomes (and you can measure them); you can't specify the path to the desired outcomes in the beginning – but you can track stuff and look at it post hoc
or you may decide that it is worth using fancy tools (such as agent-based or system dynamic modelling; formal network analysis)
network structures can tell us a lot about relationships
even without doing fancy calculations, sometimes just looking at a network structure can be revealing
some evaluations, it is worth doing network analysis
example from healthcare – primary, secondary, tertiary health care
primary clinics feed into secondary, secondary clinics feed into a tertiary system – if the link between the secondary and tertiary clinics breaks, the whole thing falls apart
fractal structure: unless you know the scale, you can’t tell how close or far away you are from it (e.g., snowflake, vascular system of the human body)
redundancy leads to robustness – if you only have one link (e.g., the only way to get into the tertiary clinic is referral from one secondary clinic) and that link breaks, the whole system is wrecked
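The clinic example can be sketched in a few lines of code. This is my own illustration (the clinic names are made up): the referral network is stored as adjacency lists, and a breadth-first search shows how breaking a single secondary-to-tertiary link cuts off every clinic upstream of it:

```python
# Hypothetical referral network: primary clinics refer to secondary
# clinics, which refer on to a single tertiary centre.
referrals = {
    "primary_A": ["secondary_1"],
    "primary_B": ["secondary_1"],
    "primary_C": ["secondary_2"],
    "secondary_1": ["tertiary"],
    "secondary_2": ["tertiary"],
    "tertiary": [],
}

def can_reach(graph, start, goal):
    """Breadth-first search: is there a referral path from start to goal?"""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(can_reach(referrals, "primary_A", "tertiary"))  # True

# Break the single secondary_1 -> tertiary link: every patient who
# enters through primary_A or primary_B now has no path to tertiary
# care, while primary_C is unaffected.
referrals["secondary_1"] = []
print(can_reach(referrals, "primary_A", "tertiary"))  # False
```

Even without any fancy network statistics, just walking the structure like this makes the fragility of single-link paths visible.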
Competing Program Theories
we can have different, competing program theories
e.g. one theory might be that increasing air pollution controls and increasing the use of clean fuel sources –> decreased air pollution and increased economic growth (a theory that those who endorse more air pollution controls and promoting cleaner fuels might suggest)
but another theory might be that air pollution controls –> decreased air pollution, but increasing clean fuel sources –> increased cost of doing business –> slowed economic growth (a theory that those who oppose more air pollution controls and promoting cleaner fuels might suggest)
what would it take to activate one or the other program theory? it might be (a) small change(s). And it's not really knowable/predictable what events will tip the balance
in complex systems, small changes can lead to big results
simple programs can exhibit complex behaviours
so it’s always worth thinking about “might there be complex behaviours going on?”
How much do you need to know about complexity?
his argument by analogy:
how much do you know about a t-test?
if you know what it is appropriate for, and that most people accept p < 0.05 as a level of significance, you can probably use the t-test reasonably – you can probably make sense of it
but there is lots more to know about the t-test – things like the distribution of data, underlying theory, there’s a whole argument about whether the level of 0.05 is really appropriate, central limit theorem, definition of degrees of freedom etc., etc.
do we need to know all of that deeper stuff to do a decent job of using a t-test? probably not. We'd be better at using it if we knew all the underlying stuff, but there's no magical amount of stuff that we can say we "need" to know
he thinks it’s similar with complexity – knowing more is better, but hard to say how much is “enough”
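In the spirit of the analogy: you can use a t-test sensibly without deriving it. Here's a minimal two-sample t statistic computed with only the standard library (the before/after numbers are made up for illustration); for these sample sizes (df = 14), values beyond about 2.14 clear the conventional p < 0.05 bar:

```python
import math
import statistics as st

def two_sample_t(x, y):
    """Pooled two-sample t statistic (equal-variance form)."""
    nx, ny = len(x), len(y)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    return (st.mean(x) - st.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

before = [12, 15, 11, 14, 13, 12, 16, 14]   # made-up scores pre-program
after = [18, 21, 17, 20, 19, 17, 22, 20]    # made-up scores post-program

t = two_sample_t(after, before)
# df = nx + ny - 2 = 14; the two-tailed 0.05 critical value is ~2.145,
# so a |t| well above that suggests a real difference in means.
print(f"t = {t:.2f}")
```

Knowing the formula's assumptions (roughly normal data, similar variances, independent samples) is the "deeper stuff"; the mechanical use is this simple.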
“feedback loops can produce nonlinear behaviour”
but the nature of those feedback loops matters – things like how long the lag for a feedback is (shorter lag = quicker loops)
it was very interesting to see lags added into a program logic model and see how that affected the overall timeline
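The effect of a lag is easy to see in a toy model. This sketch (my own, not from the workshop) pushes a one-link logic model's outcomes back in time by a fixed lag – the total effect is identical, but the visible timeline is very different:

```python
# One-link logic model: each unit of program activity produces one
# unit of outcome, but only after a fixed lag of time periods.

def outcomes_over_time(activity, lag):
    """Outcome at time t equals the activity delivered at time t - lag."""
    horizon = len(activity) + lag
    return [activity[t - lag] if 0 <= t - lag < len(activity) else 0
            for t in range(horizon)]

activity = [1, 1, 1, 1, 1]                   # five periods of steady effort

print(outcomes_over_time(activity, lag=1))   # effects show up almost at once
print(outcomes_over_time(activity, lag=4))   # nothing visible for four periods
```

With the longer lag, an evaluation that only looks at early periods would conclude the program "isn't working" even though the same effects are on their way.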
1.3 Integrates the Canadian Evaluation Society’s stated ethics in professional practice and ensures that ethical oversight is maintained throughout the evaluation.
Like many evaluators, a lot of my knowledge of ethics comes from the research world. I recently completed the latest version of the Tri-Council's online Course on Research Ethics, which was required by the organization I work for, as the course has been updated since I originally took their ethics training a long, long time ago. A lot of the concepts from research ethics – informed consent of participants, do no harm, justice, etc. – are applicable to evaluation as well.
As for how I integrate ethics into my work and ensure that ethical oversight is maintained throughout the evaluation, a few things that I do include:
create systems to protect the privacy of data that my team and I collect, such as only storing data on secure networks and using passwords to protect data
discussing ethical considerations, such as confidentiality, conducting rigorous evaluations, and reporting findings accurately and completely (just to name a few), with my team throughout the evaluation process
holding strong on my commitment to do my work ethically, even when it is challenging. I consider my integrity to be a very important part of being an evaluator. Without integrity, there would be no point to doing the work that I do.
One area of ethical considerations that I’m seeking to learn more about is equity in evaluation. Since I don’t work in an area where there is an obvious equity lens – such as there would be working with a non-profit that explicitly focuses on equity, for example – I find it challenging to see how my work links with equity. But inequities often stem from institutions and systems where power imbalances and institutionalized racism/sexism/ableism/and many other -isms are so embedded and are often difficult for someone with a lot of privilege (such as a straight, white, cis person such as myself) to see. So I figure that this is an area that I need to learn more about so that I can do better. Two great resources that I’ve heard about recently for learning more about equity and evaluation are Equitable Evaluation and We All Count.
The CES ethics statement is currently under review, as it is about 20 years old. I went to a session at the CES 2018 conference where they were consulting with evaluators to see if the statement needed some tweaks, or a complete overhaul. The group I was in felt it was the latter, and I know there is a committee hard at work revising that statement. I'm actually quite looking forward to seeing what they come up with – and I'm sure I'll write a blog posting on it once it comes out – now that I'm on such a roll with writing here!
When I joined the Australasian Evaluation Society (the year I went to their conference – it's cheaper to join the society and pay the member conference registration fee than to just pay the non-member registration fee, so I joined), I had to attest to the fact that I would adhere to their ethical guidelines. I'm interested to see if CES will do the same with their new ethics statement when it's released.
Given my interest in complexity and evaluation, I decided to take the American Evaluation Association (AEA) eStudy course being facilitated by Jonny Morell. I’ve seen Jonny speak at conferences before and have learned some useful things, so figured I could learn a few things from him in this more extended session. Sadly, the live presentation of the eStudy conflicts with other meetings that I have, so I’m only going to be able to see part of the presentations live and will have to watch the other parts of the presentations after the fact from the session recording.
Here’s one quote from his posting on complexity having awkward implications for evaluators that jumped out at me:
Contrast an automobile engine [not complex] with a beehive, a traffic jam, or an economy [complex]. I could identify each part of the engine, explain its construction, discuss how an internal combustion engine works, and what role that part plays in the operation of the engine. The whole engine may be greater than the sum of its parts, but the unique role of each part remains. The contribution of each individual part does not exist with beehives, traffic jams, or economies. With these, it may be possible to identify the rules of interaction that have to be in place for emergence to manifest, but it would still be impossible to identify the unique contribution of each part.
This really is a challenge for evaluators! Imagine being hired to evaluate a program – your job is to answer “what happens as a result of this program?”, but you know that your program is just one part of a larger, complex system, so you can never really definitively say “this program, and this program alone, caused X, Y, and Z”, as you know that outcomes are affected by so many things that are outside of the control of the program. That is the situation that we evaluators find ourselves in all the time. That’s not to say that we can’t do anything, but just that we need to be thoughtful in how we try to determine what results from a program in the context of everything else in the system. Learning about complexity and systems thinking can help us do that.
I had a conflicting meeting during the first session, held on July 9, 2019, so I watched the recording afterwards. Here are my notes:
people seem to think complexity is "mysterious" and "magic" – Jonny feels it is not
he feels that “most of the time you won’t have to use it at all”
if you learned thematic analysis or regression, you’d say “cool method, I’ll use it when it is needed and I won’t use it when I don’t need it”. He thinks complexity should be the same – use when it’s needed.
you might use complexity instead of another method (like you might say “thematic analysis is better than how I’ve been analyzing open ended survey data. I will use thematic analysis instead of what I was doing before”)
but you could also think about it like this: it can help you change how you conceptualize the problem and the data analysis strategies – "you begin to think differently about the world"
people seem to think that you need to use new fancy tools to apply complexity – and sometimes you do, but often you don't – you can use familiar methods while applying complexity concepts
there’s no agreed upon definition of complexity – but he doesn’t worry about that
“systems” is a huge area (but he’s not that interested in it – though he did plug the AEA Systems TIG)
“complexity” also a huge area – and he thinks lots of the concepts are useful to evaluators
“I don’t know what complex systems are, but I know what complex systems do. I can work with that” – we can use that to make practical decisions on models, on methods, data interpretation, how to conceptualize the program.
He thinks that complexity is popular in evaluation today because there is a sense that programs aren’t successful and evaluators are the messenger (and people are shooting the messenger). And people think that maybe complexity can help explain why programs aren’t working.
“The fact that everything is connected to everything else is true, but useless.” He wants to help us learn the “art” of getting a sense of what connections are worth dealing with and which aren’t. We need to “discern meaning within the fact that everything is connected to everything else.”
Cross cutting themes in complexity science
predictability – what can we predict and how well can we predict it
how change happens
Complex behaviours that might be useful in evaluation (not everything you'll read about complexity is useful in evaluation):
unpredictable outcome chains
network effects among outcomes
joint optimization of uncorrelated outcomes
It’s hard to talk to people (like evaluation stakeholders) about complexity
if we show people a logic model or theory of change, they can understand how things they do in their program are believed to lead to outcomes they are interested in
but talking about things like a program might benefit a few people a lot and most people not at all, or network effects – these are things we aren’t used to talking to evaluation stakeholders about
it’s difficult to say to people that we might not be able to show “intermediate outcomes” on the way to long-term outcomes (because results aren’t so linear)
your program may have negative effects in the broader system (programs are siloed, so you are only working within your own scope and aren't concerned – or incentivized to be concerned – about stuff outside of your program). If we throw all of our financial and intellectual resources into HIV, we'd make a lot of improvements with respect to HIV. But that pulls resources away from prenatal care, palliative care, primary care, etc., etc., etc. You are "impoverishing" the environment for every other program – and those programs will have to adapt to that.
preferential attachment – e.g., snowflakes – the odds of a molecule attaching to a big clump are greater than to a little clump; same thing with business – you are more likely to attach to a bigger centre of money than a small one
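The snowflake version of this is simple enough to simulate. In this sketch (my own illustration), each new "molecule" joins one of five clumps with probability proportional to clump size – the clumps start out equal, but the final distribution is typically heavily skewed:

```python
import random

# Toy preferential attachment: bigger clumps are proportionally more
# likely to capture each new arrival ("larger becomes a larger attractor").
def grow(n_steps, seed=42):
    random.seed(seed)                       # fixed seed for reproducibility
    clumps = [1, 1, 1, 1, 1]                # five clumps, equal start
    for _ in range(n_steps):
        # random.choices weights the pick by current clump size
        i = random.choices(range(len(clumps)), weights=clumps)[0]
        clumps[i] += 1
    return sorted(clumps, reverse=True)

print(grow(1000))  # typically a heavily skewed split, not ~201 each
```

Run it with different seeds and a different clump usually "wins" – which clump gets big is random, but the rich-get-richer pattern itself is reliable.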
emergence is NOT “the whole is greater than the sum of the parts” – it’s about the WAY that the whole is greater than the sum of the parts. An engine is greater than the sum of its parts. But I could explain what the contribution of each of the parts is to the engine. That’s not the same for complex systems (like traffic jams, beehives, or economies) – you can’t explain the whole economy based on the contribution of each of its parts. Not just because we haven’t studied these enough – but because it is “theoretically impossible” to do so.
“Ignoring complexity can be rational, adaptive behaviour”
stovepipes are efficient ways to get things done
different programs have different time horizons
different organizations have different cultures
it takes resources to coordinate different programs/systems/organizations
Even if our stakeholders don’t buy into complexity, it’s still important for evaluators to think about and deal with
“if program designers build models that do not incorporate complex behaviour, they will:
miss important relationships
not be able to advocate effectively
not be effective in making changes to improve their programs
misunderstand how programs operate and what they may accomplish
these problems cannot be fixed in an evaluation, but it is still possible to evaluate the complex behaviours in their models”
e.g., he showed a logic model and talked about if you have a bunch of arrows leading into an outcome, are those "AND" or are they "OR" (i.e., do you need all of the outputs to happen to lead to that outcome, or do you only need one? Or only some combo?) He also added unintended consequences and network effects.
the evaluator can still look at these complex behaviours – look for the data to support it. You can superimpose a complex model on top of the traditional logic model. You can do this even if the program stakeholders only see the logic model. You can show them the data interpreted based on their logic model, and then also show them how the data relates to the model that includes complexity (that might be what it takes to incorporate it).
He thinks most unintended consequences are undesirable, and there are methods for measuring unintended consequences – they can be measured within the scope of an evaluation.
Jonny hates the "butterfly effect" because, in his world, he doesn't see big changes happening super easily. He sees people making lots of policy/program changes, but the outcomes don't change! His take on sensitivity to initial conditions is that you can run the same program multiple times and get different results each time because there are differences in the context of where it's implemented, so you can't necessarily replicate the outcome chain. But if the program is operating within an attractor, you might be able to get to the same ultimate outcome.
E.g., if you roll a boulder down a hill, you won't be able to predict its exact path (e.g., it might hit a pebble, wind might move it), but we know it will end up at the bottom of the hill because there is an attractor (gravity).
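The boulder example can be sketched as a simulation too (again, my own illustration): every run takes a different noisy path, but the steady downhill pull – the attractor – means every run ends at the bottom:

```python
import random

def roll(seed, height=100.0):
    """Roll a boulder downhill with random bumps; return its height path."""
    random.seed(seed)
    path = [height]
    while path[-1] > 0:
        bump = random.uniform(-2.0, 2.0)            # pebbles, wind
        # Steady downhill pull of 5 per step, plus/minus the noise;
        # the boulder can't go below the bottom of the hill.
        path.append(max(0.0, path[-1] - 5.0 + bump))
    return path

for s in range(3):
    p = roll(s)
    print(f"run {s}: {len(p) - 1} steps to reach the bottom")
```

The step counts and intermediate heights differ between runs (unpredictable path), while the endpoint is always the same (predictable attractor) – which is the practical distinction Jonny is drawing.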
He’s not arguing to not measure intermediate outcomes, but we should think about these concepts [and maybe not be too overconfident in what we think we know about the outcome chain?]
The standards are grouped into five categories: utility, feasibility, propriety, accuracy, and evaluation accountability.
In each of these categories, there are several standard statements that describe what high quality evaluations should do. For example, under the category of "utility", there are 8 statements of what evaluations should do to be useful, and under the "propriety" category, there are 7 statements of what evaluations should do to be ethical, just, and fair.
As I reviewed the standard statements for this blog posting, I noticed that both the CES and AEA, which list the statements on their websites, include the following note: "Authors wishing to reproduce the standard names and standard statements with attribution to the JCSEE may do so after notifying the JCSEE of the specific publication or reproduction." So, since I haven't notified the JCSEE that I would like to reproduce the statements here on my blog, I can't do so. You can read them over on the CES website though.
But the standards are more than just the statements. There's a whole book published by the JCSEE that describes the standard statements in detail, explaining where the standards come from and how they can be applied.
It should also be noted that, despite the “should” wording of the standards, they aren’t meant to be slavishly followed, but to be applied in context and with nuance.
The standards also exist in tension with each other, and you have to figure out the right balance. For example, there is a standard that says you should use resources efficiently, but another standard that says you should include the full range of groups and people who are affected by the program being evaluated. Evaluators need to find the balance between being thorough and being efficient in our use of resources.
In terms of my own practice, I think I can be more explicit in my use of the program standards. I’ve been an evaluator for a decade and I’ve integrated a lot of the standards into my work such that it’s just second nature (things like being efficient in my use of resources, using effective project management practices, using reliable and valid information, and being transparent). But there are other standards for which, as I read them I think “I could probably do better” (e.g., being more explicit about my evaluation reasoning or encouraging external meta-evaluation).
The evaluation standards are such a big topic that I'm barely scratching the surface here. So once I'm done this blog series on evaluator competencies, my next series is going to be on the evaluation standards! I think that will be a good way to get me to spend a bit of time reflecting on each of the standards and thinking about how I can improve my practice related to each one. And I'll be sure to contact the JCSEE to let them know I'd like to reproduce the standard statements here on my blog!
So I had an idea. As I ease my way into blogging in a more reflective way, I thought that perhaps I could do a blog series about the Canadian Evaluation Society (CES) evaluator competencies, where in each post I reflect on one of the competencies. The competencies have been recently revised, so it seems like a good time to do this. Plus, having a series will give me ideas for topics – and hey, let's make it every Sunday, so that I'll have a deadline as well. This seems like a good way to get me into the habit of writing here.
What Are Evaluator Competencies?
Competencies are defined as “the background, knowledge, skills, and dispositions program evaluators need to achieve standards that constitute sound evaluations.” (Stevahn et al, 2005)
The competencies were created as part of the program for the Credentialed Evaluator (CE) designation. To get the designation, one has to demonstrate that they have education and/or experience related to 70% of the competencies in each of the five domains. I got my CE under the original set of competencies, but anyone applying now would use the new set. It was a few years ago that I did my CE application, so it’s another reason why it’s a good time for me to reflect on where I am now with respect to the competencies.
The five competency domains are:
“Reflective Practice competencies focus on the evaluator’s knowledge of evaluation theory and practice; application of evaluation standards, guidelines, and ethics; and awareness of self, including reflection on one’s practice and the need for continuous learning and professional growth.” (Source)
1.1 Knows evaluation theories, models, methods and tools and stays informed about new thinking and best practices.
I have taught Program Planning and Evaluation at both SFU (in the Masters of Public Health program) and UBC (in the Masters of Health Administration program) in the past couple of years, and I find that teaching is a great way to both deepen my own understanding of evaluation theories, models, methods, and tools and to stay informed about new thinking and best practices. In deciding what to include in a course, and how best to present it, and coming up (whenever possible) with activities the class can do to learn it, I learn more every time I prepare, update, and deliver a class. Also, students ask great questions (sometimes even after a class has ended and they’ve gone on to work in places where they are involved in evaluation) and sometimes it’s things that I’m not familiar with and I have to go and do some research to find out more.
I think my main reflection related to this area is that I am a firm believer that there is no one “right” way to do evaluation, and that it is best to start with what the purpose of an evaluation is and then figure out what approach, design, and methods will best help you achieve the purpose. Oftentimes, those requesting an evaluation come to it with assumptions about methods or design – like “I need you to do a survey of the program clients” or “how can I set up a randomized controlled trial to evaluate my program?” So I often find myself saying things like “Let’s begin at the beginning. Why do you want an evaluation? What do you want to know? What will you do with that information once you have it?”
Given that I think it’s important to find the best fit of approach, design, and methods to the purpose of an evaluation, it means that I need to be familiar with lots of different theories, models, methods and tools!
I attend evaluation conferences and pick sessions where I can learn about new things – and deepen my understanding of things I’m familiar with. For example, at the most recent CES conference, I took a workshop on reflective practice to deepen my skills in that area (which I’m now actively working on integrating into my life), I attended a session on rubrics to learn more about those (next step there is to try applying rubrics to an evaluation!), and I attended a session on a realist evaluation (next step there is to have the presenter come to my class as a guest speaker so that I and my students can learn more!)
I include a section in my course on “hot topics” in evaluation, which gives me the opportunity to explore the latest thinking in evaluation with my students. Recently, I’ve included complexity and systems thinking, and indigenous evaluation 2Expect to read more about indigenous evaluation when I get to competency 3.7 in this blog series.. I also try to demonstrate reflective practice and humility to my students by telling them that I am exploring new areas, so I’m not an expert in these topics (especially indigenous evaluation), but that I’m sharing my learning journey with them.
I recently had coffee with a new friend and fellow evaluator, Meagan Sutton. We were introduced by a mutual friend who knew that Meagan was interested in chatting with evaluators who write blogs and that I am an evaluator who writes a blog! We had a great chat and it got me thinking about why I have this blog and how I might grow what I do with it.
I originally started this blog as a place to keep notes of work-related stuff I was reading. I have a pretty terrible memory and I find my personal blog a great way to remember stuff that I did – it’s easy to search through and accessible anywhere with an Internet connection – so I figured rather than having notes in various notebooks and jotted down in the margins of printed copies of journal articles, I could use this blog as my brain dump for various things I learn 1I briefly co-opted this blog for blog postings I was required to do during an Internet marketing class that I took in my MBA, but then switched it back to stuff related to my work.. So whenever I went to a conference, attended a webinar, or read a book or article where I wanted to record what I was learning, I dumped it on this blog. I am an external processor, so it helps me to remember and understand things when I write them down. For webinars, I tend to take notes directly into my blog and publish that. For conferences, I usually write notes on paper during the conference – partially because that helps keep me awake and attentive during sessions, partially because I don’t like lugging my laptop around, and partially because I find it helpful to look at all the notes I’ve taken and sort or synthesize them for the whole conference. If I type my notes during the conference, I find it harder to remove the superfluous stuff, whereas if I’m deciding what’s worth typing out from a bunch of handwritten notes, I find it easier to be succinct, as I’ll select just the main points to blog about. The downside is that it often takes me quite a while to do that, and I can end up posting my conference summary blog posting many months later 2Though I made it a priority to do it more quickly for the last conference I attended and actually got it posted just two weeks after the conference instead of months and months later.
Meagan asked me how I promote this blog and honestly, I don’t. Since I saw the blog as mostly just an externalization of my memory, I didn’t think anyone else would ever want to read it. I have had a few people contact me after reading something on my blog that they found through Google – and actually have had some interesting conversations result – but it’s pretty rare.
Occasionally, I add some reflection into these blog postings – like thoughts about how what I was reading or learning at a conference might relate to work that I do, but that’s been pretty minimal.
At the same time, I’ve been working on improving my reflective practice, mostly through reflective writing that I’m doing privately rather than in a public forum like this. Part of that is because the reflections I’ve been writing are part of the data I am using in the evaluation I’m working on, so I need it documented where the rest of the data (including my team’s reflections) are. And part of it is because some of what I write about is confidential or politically sensitive, so is not for sharing publicly.
And this is where blogging as an evaluator can get sticky. Sometimes there are things you want to reflect on and process, and maybe even start a conversation with fellow evaluators about, but that you aren’t able to make anonymous for discussion in a public forum. Or you have conflicts with clients that you want to reflect on, but can’t do that publicly either. How does one navigate this? I honestly don’t know the answer, but as I think about expanding this blog to become more reflective, it’s something I’ll need to think more about.
I guess the flip side of this is: why do I want to put my reflections out into the world? I guess because I see it as an opportunity to engage with others. As I mentioned above, without even sharing my blog postings beyond just posting them here, I’ve had some interesting interactions with other evaluators who stumbled on my blog – imagine what could happen if I tweeted out these blog postings (like I do my personal blog postings with my personal Twitter account) and actually wrote some reflective stuff – things I’m thinking about/struggling with/wanting to know more about? Perhaps I could connect with others facing similar issues and get different perspectives on the things I’m thinking about.
Coffee – posted on Flickr by Jen with a Creative Commons license
Speakers:
- Tammy Heinz, Program Officer, Hogg Foundation
- Hayling Price, Senior Consultant, FSG
- Darrell Scott, Founder, PushBlack
- Julie Sweetland, Vice President for Strategy and Innovation, Frameworks Institute
- Rick Ybarra, Program Officer, Hogg Foundation
“Systems change is about shifting conditions that are holding a problem in place”
“It’s not about getting more young people to beat the odds. It’s about changing the odds”
6 conditions of systems change
structural change: policies, practices, resource flows (who gets funding and why? how are human resources allocated?) [explicit – easiest to find and to change]
relationships & connections (not just having someone on your LinkedIn, but actually engaging), power dynamics (who is getting funded and why? some people have a leg up, some people are dealing with a history of oppression) [semi-explicit]
transformative change (mental models) [implicit]
mental models: deeply held beliefs, assumptions, etc.
the policies, practice, resource flows are not handed to us by nature – they are created by humans based on our mental models
PushBlack – nation’s largest nonprofit media platform for Black people
4 million subscribers, with emotionally-driven stories about Black history, culture, and current events
through Facebook Messenger – meeting people where they are at
Go to Facebook Messenger and search “PushBlack” to sign up!
ran the largest get-out-the-vote campaign on social media in history in 2018
got subscribers to contact their friends (relates to relationships and connections part of the conditions of system change)
giving subscribers tools to work at the local level (e.g., to be heard when Black people are killed by police, to free innocent Black people)
they test their messages with a small subset of the audience before sending out only the best-performing messages to the broader audience
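The test-then-broadcast approach described above is essentially pilot-based selection. As a toy sketch of the idea (the message names and pilot numbers here are entirely made up, not from PushBlack):

```python
# Toy sketch of "test with a small subset, then send only the best
# performer to everyone". Pilot results below are fabricated for illustration.
pilot_results = {
    "message_a": {"sent": 500, "engaged": 60},
    "message_b": {"sent": 500, "engaged": 95},
    "message_c": {"sent": 500, "engaged": 40},
}

def best_message(results):
    """Pick the message with the highest engagement rate in the pilot."""
    return max(results, key=lambda m: results[m]["engaged"] / results[m]["sent"])

print(best_message(pilot_results))  # message_b (95/500 is the best rate)
```

In practice a real campaign would also need to worry about sample size and statistical noise in the pilot, but the core loop is this simple: measure on a small group, then broadcast the winner.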
uses the phrase “cultural models”, which is a similar concept from anthropology
“cultural models are cognitive short cuts created through years of experience and expectation. They are largely automatic assumptions, and can be implicit”
“People rely on cultural models to interpret, organize and make meaning out of all sorts of stimuli, from daily experiences to social issues”
they believe that understanding mental/cultural models helps you identify which mental models are holding a problem in place
e.g., Google image search “ocean” and the top hits are pictures of “beautiful blue expanse” – this is a mental model that Americans hold of the ocean – this holds implications for policy:
people think it is so big, that it’s invincible
people think it’s water and think about the surface – not thinking about what’s underneath, about how it’s an ecosystem, it produces oxygen, it affects weather, etc.
it’s not that the ocean isn’t blue or isn’t big, but that’s just a piece of the picture
e.g., some people’s mental model of “teenager”, is about “risk and rebellion” – people defying expectations from adults. Again, not a complete picture.
3 models are consistently barriers to productive conversations on social issues (especially in American context, but they’ve also seen them internationally):
individualism: assumption that problems, solutions, and consequences happen at the personal level
us vs. them: assumption that another social group is distinct, different, and problematic (beyond people – can be human vs. animals; environment vs. economy)
fatalism: assumption that social problems are too big, too bad, or too difficult to fix
there are also mental models that are specific to a given situation, but the above three tend to show up in lots of areas
one thing that doesn’t work: correcting their mistakes
“myth busters” – they don’t work! A study of myth-fact structure found: people misremembered the myths as true, got worse over time, and attributed the false information to the CDC (Skurnik et al (2005), JAMA)
mental models are there because we’ve heard it so many times. When you restate a “bad” mental model, you reinforce it (e.g., if you state: Myth: Flu vaccines cause the flu, you reinforce their mental model that flu vaccines cause the flu (doesn’t matter that you said it was a “myth”))
never remind people of things you wish they’d forget
another thing that doesn’t work: giving people more information
it’s not that you shouldn’t use facts
but if people have a particular mental model, stacking data on top does not change their mental model
you need to help them build a new mental model
another thing that doesn’t work: leaving causation to the public imagination
leaving people with their bad mental models won’t help
instead of trying to rebut people’s misunderstanding – try to redirect attention to what is true and how things do work
Tammy Heinz and Rick Ybarra
Hogg Foundation for Mental Health
historically funded lots of programs and research
Mental Health has been focused on diagnosis and treatments, with end goal of symptom reduction
now moving their work upstream
traditionally, there has been a medical/disease model of health
in the 1970s, people started questioning whether mental illness was really chronic, or whether people could get better
shifting a mental model is not something that can happen quickly
in the past 20 years, there’s been some deliberate work to shift the thinking around mental health
huge shift towards peers helping in mental health care teams
thinking about “recovery” – it’s not an expectation of only symptom control
there are multiple mental models on an issue – “fatalism” may be the first thing that comes to mind, but you can call up a more productive mental model
how do you figure out what mental models people are using?
Hayling: we are constantly testing out models through our work
Julie: ask people “what are ideas you wish you’d never hear again?” and you’ll get a pretty good idea of the mental models that are being a problem
how do you change mental models around emotionally charged issues?
Rick: listening. Figure out what mental models are driving things. Really learn and understand where people are coming from.
Tammy: being clear about where you want to go
Hayling: make things plain
Julie: call people in rather than calling them out
This year’s conference was in Halifax and, as always, it was a wonderful opportunity to reconnect with my evaluation friends, make some wonderful new friends, to pause and reflect on my practice, and to learn a thing or two. And I think this is quite possibly the fastest I’ve ever put together my post-conference recap here on ye old blog! (The conference ended on May 29 and I’m posting this on June 14!)
Student Case Competition
The highlight of the conference for me this year was the Student Case Competition finals. In this competition, student teams from around the country, each coached by an experienced evaluator, compete in round 1 where they have 5 hours to review a case (typically a nonprofit organization or program) and then complete an evaluation plan for that program. Judges review all the submissions and the top 3 teams from round 1 move on to the finals, where they get to compete live at the conference. They are given a different case and have 5 hours to come up with a plan, which they then present to an audience of conference goers, including representatives from the organization and three judges. After all three teams present, the judges deliberate and a winning team is announced!
I had the honour of coaching a team of amazing students from Simon Fraser University. The competition rules do not allow teams to talk to their coaches when they are actually working on the cases, so my role was to work with them before the round, talking about strategies for approaching the work, as well as chatting with them about evaluation in general. Most of the students on the team had not yet taken an evaluation course, so I also provided some resources that I use when I teach evaluation.
I will admit that I was a bit nervous watching the presentations – not because I didn’t think my team would do well, as I know they worked really hard and are all exceptionally intelligent, enthusiastic and passionate, but because it’s a huge challenge to come up with a solid evaluation plan and a presentation in such a short period of time, and because they were competing among the best in the country!
But I need not have been worried. They came up with such a well thought through, appropriate to the organization, and professional plan and presented it with all the enthusiasm, professionalism, grace, and passion that I have come to know they possess. I was definitely one proud evaluation mama watching my team do that presentation and so very, very proud of them when they won! Congratulations to Kathy, Damien, Stephanie, Manal, and Cassandra! And to Dasha, who was part of the team that won round 1, but wasn’t able to join us in Halifax for the finals.
Kudos also go to the two other teams who competed in the finals – students from École nationale d’administration publique (ENAP) and Memorial University of Newfoundland (MUN). Great competitors and, as I had the pleasure of learning when we all went out to the pub afterwards, as well as chatting at the kitchen party the next night, all very lovely people!
As usual, I took a tonne of notes throughout the conference and, as usual for my post-conference recaps, I will:
summarize some of my insights, by topic (in alphabetical order) rather than by session as I went to some different sessions that covered similar things
where possible, include the names of people who said the brilliant things that I took note of, because I think it is important to give credit where credit is due. Sometimes I missed names (e.g., if an audience member asked a question or made a statement, as audience members don’t always state their name or I don’t catch it)
apologize in advance if my paraphrasing of what people said is not as elegant as the way that people actually said them.
Anything in [square brackets] is my thoughts that I’ve added upon reflection on what the presenter was talking about.
every time I go to CES, I find I learn a little bit more about how the federal government works (since so many evaluators work there!). This time I learned that Canada Revenue Agency (CRA) doesn’t report up to Treasury Board – they report to Finance
the indigenous welcome to the conference was fantastic and it was given by a man named Jude. I didn’t catch his full name and I couldn’t find his name in the conference program or on Twitter. [Note to self: I need to do better at catching and remembering names so I can properly give credit where credit is due]. He talked about how racism, sexism, ableism, transphobia, and other forms of oppression are at play in the world today. He also talked about how there is a difference between guilt and responsibility. We need to take responsibility for making things better now, not just feel guilty about the way things are.
Nan Wehipeihana talked about an evaluation of a sports participation program and how they moved from sports participation “by” Māori to sports participation “as” Māori. They talked about what it would look like to participate “as” Māori (e.g., Māori language is used, Māori structures (tribal, subtribal, kin groups) are embedded in the activity, and activities occur in places that are meaningful to Māori people (e.g., kayaking on our rivers, activities on our mountains)). They developed a rubric in the shape of a five-point star (which took a year to develop).
I went to a Lightning Roundtable session hosted by Larry Bremner, Nicole Bowman, and Andrealisa Belizer, where they were leading a discussion on Connecting to Reconciliation through our Profession and Practice. One of the things that Larry mentioned that struck me was the importance of not just indigenous approaches to evaluation, but indigenous approaches to program development. It doesn’t make sense to design a program without indigenous communities as equal partners and then to say you are going to take an indigenous approach to evaluation – the horse has left the barn by that point.
They also talked about how evaluators are culpable for the harm that is still happening because we haven’t done right in our work. They talked about how the CES needs to keep the government’s feet to the fire on the Truth and Reconciliation Commission’s (TRC) Calls to Action. Really, after the Commission, there should have been a TRC implementation committee that could go around the country and help get the Calls to Action implemented (Larry Bremner).
I also went to a concurrent session where the panelists were discussing the TRC Calls to Action. They pointed out that the CBC has a website where they are tracking progress on the 94 Calls to Action: Beyond 94.
CES added a competency about indigenous evaluation in its recent updating of the CES competencies:
3.7 Uses evaluation processes and practices that support reconciliation and build stronger relationships among Indigenous and non-Indigenous peoples.
Many evaluators saw this new competency and said “I don’t work with indigenous populations, so how can I relate to this competency?” [I will admit, I had that thought as well when the new competencies were announced. Not that I don’t think this is an important competency for evaluators to have – but more that I didn’t know how to apply it in the work I am currently doing or where to start in figuring out what I should do.]. The CES is trying to provide examples to support evaluators. (Linda Lee) E.g.:
I also learned that EvalIndigenous is open to indigenous and non-indigenous people – anyone who wants to move forward indigenous worldviews and want indigenous communities to have control of their own evaluations. So I joined their Facebook group! (Nicole Bowman and Larry Bremner)
Evaluators typically use a Western European approach and many use an “extractive” evaluation process, where they take stuff out of the community and leave (I can’t remember if this slide was from Larry Bremner or Linda Lee).
I also found this discussion of indigenous self-identification helpful (Larry Bremner):
There is still so much work to do and so much harm being inflicted on indigenous people:
there are more indigenous kids in care today than were in residential schools – this is the new residential schools. (Larry Bremner)
During the discussion with the audience, some audience members mentioned “trauma tourism” – it can be re-traumatizing for indigenous people to share traumas they have experienced, and non-indigenous people, in their attempts to learn more about the experiences of indigenous people, need to be mindful of this and not further burden indigenous people.
If you google “indigenous women”, all the results you get are about missing and murdered indigenous women and girls. Where is the focus on the strengths in the community?
evaluators are learners (Barrington)
Bloom’s Taxonomy is a hierarchy of cognitive processes that we go through when we do an evaluation – notice that evaluation is at the top – it’s the hardest part (Gail Barrington)
single loop learning is where you repeat the same process over and over again, without ever questioning the problem you are trying to fix (sort of like the PDSA cycle). There’s no room for growth or transformation. (Gail Barrington)
in contrast, double loop learning allows you to question if you are really tackling the correct problem (sometimes the way that the problem is defined is causing problems/making things difficult to solve) and the decision making rules you are using, allowing for innovation/transformation/growth. (Gail Barrington)
“Pattern matching is the underlying logic of theory-based evaluation” – specify a theory, collect data based on that, see if they match (Sebastian Lemire)
Trochim wrote about both verification AND falsification, but in practice most people just come up with a theory and try to find evidence to support it (confirmation bias) (Sebastian Lemire)
humans are wired to see patterns, even when they aren’t there and we tend to focus on evidence in support of the patterns (Sebastian Lemire)
having more data is not the solution! (Sebastian Lemire)
e.g., when people were given more information on horses and then made bets, they didn’t get any more accurate in their bets, but they did get more confident in their bets
evaluators need to do reflective practice – e.g., to look for our biases (Sebastian Lemire)
structured analytic techniques (see slide below) – not a recipe, but a structured process (Sebastian Lemire)
pay attention to alternative explanations – in the context of commissioned evaluations, it can be hard to get commissioners to agree to you spending time on looking at alternative explanations, and we often go into an evaluation assuming that the program is the cause (bias) (Sebastian Lemire)
falsification: specify what data you would expect to see if your hypothesis was wrong (Sebastian Lemire)
Power and Privilege
since we have under-served, under-represented, and under-privileged people, we must also have over-served, over-represented, and over-privileged people (Jude, who gave the indigenous welcome. I didn’t catch his last name and I can’t find it on the conference website)
recognize your power and privilege, recognize your biases and think about where they come from and work to prevent your biases from affecting your work (Jude, who gave the indigenous welcome. I didn’t catch his last name and I can’t find it on the conference website)
and speaking of power and privilege, the opening plenary on the Tuesday morning was a manel. For the uninitiated, a “manel” is a panel of speakers who are all male. It’s an example of bias – men being more often recognized as experts and given a platform as experts when there are many, many qualified women. I called it out on Twitter:
a friend of mine who is a male re-tweeted this saying he was glad to see that someone called it out and when I spoke to him later, he told me that people were giving him kudos for calling it out and he had to point out that it was actually a woman who called it out. So another great example of women being made invisible and men getting credit.
I do regret, however, that I neglected to point out that it was a “white manel” specifically. There’s so much more to diversity than just “men” and “women”!
Michelle Naimi (who I know from the BC evaluation scene) gave a great presentation on a realist evaluation project she’s been working on related to violence prevention training in emergency departments. My notes on realist evaluation don’t do it justice, but I think my main learning here is that this is an approach that I can learn more about. I’m definitely inviting her as a guest speaker the next time I teach evaluation!
I took a pre-conference workshop, led by Gail Barrington, on reflective practice. This is an area that I’ve identified that I want to improve in my own work and life, and a pre-conference workshop where I got to learn some techniques and actually try them out seemed like a perfect opportunity for professional development.
Gail talked about:
how she doesn’t see her work and her self as separate – they are seamless
if you don’t record your thoughts, they don’t endure. (How many great ideas have you had and lost?) [I’d add, how many great ideas have you had, forgotten about, and then been reminded of later when you read something you wrote?]
evaluators are always serving others – we need to take care of ourselves too
The best part of the workshop was that we got to try out some techniques for reflective practice as we learned them
Warm up activity: In this activity, we took a few minutes to answer the following questions:
- Who am I?
- What do I hope to get out of this workshop?
- To get the most out of this workshop, I need to ____
Then we re-read what we wrote and answered this:
-As I read this, I am aware that __________
and that is an example of reflection!
[Just had an idea! I could use that at the start of class to introduce the notion of reflective practice from the beginning of class. If I turn my class into more of a flipped classroom approach, I could have more in-class time to do fun, experiential things like this than listening to lecture 🙂 ]
Resistance Exercise: Another quick writing exercise:
- What are the personal barriers that hold me back from reflection?
- What are the lifestyle/family barriers that hold me back from reflection?
- What barriers at work are holding me back from being transformative?
Then we re-read what we wrote and answered this:
-As I read this, I am aware that __________
The Morning Pages:
Write three pages of stream of consciousness first thing in the morning in a journal that you like writing in. Before you’ve done anything else – and before your inner critic has woken up. If you can’t think of anything to write, just write “I can’t think of anything to write” over and over again until something comes to you.
All sorts of things will pop up – might be ideas for a project you are working on, or “to do” items to add to your list. You can annotate in margins, transfer things to your main to do list later, or some of it might not be useful to you now and you don’t have to look at it again.
Gail said it’s very different writing first thing in the morning compared to later in the day. I know that I’m unlikely to get up an extra half hour earlier than I already do, but I could give this a try on a weekend morning when I’m not feeling rushed, to see if it’s different for me too.
Start Now Activity:
-The thoughts/ideas that prevent me from journaling now ____
Then we re-read what we wrote and answered this:
-As I read this, I am aware that __________
for some people, writing just isn’t their thing. An alternative is using a voice memo app. We gave it a try in the workshop and I was kind of meh on it, but I used it two more times during the conference when I had a quick thought I wanted to capture. I think the challenge will be that if I want to retrieve those ideas, I’ll need to listen to the recordings, which seems like a big time sink, depending on how much I say (as I can be verbose).
we also talked about meditation and went out on a meditative walk. (Gail put up the quotation “solvitur ambulando”, citing St. Augustine, and noting that it is Latin for “solved by walking”. But when I googled it, it turns out that it was actually from the philosopher Diogenes, and actually refers to something that is solved by a practical experiment.) For our walk, we set an intention (to think about one thing that I’ll change at my work), then forgot about it and went for a mindful walk – paying attention to the sensations of walking (e.g., the feeling of your feet on the ground as you step, the colours and shapes and sounds and smells you encounter). It was a rainy day, but I was definitely struck with all the beauty around me, and was reminded of how beneficial mindfulness can be.
My take home from all my reflections in this workshop was:
taking time to do things like reflective practice and mindfulness meditation is a choice. I say that I don’t have enough time to do these things, but it’s actually that I have been choosing not to spend my time doing these things. There are a variety of reasons for those choices (which I did reflect on and got some valuable insights about). Remembering that this is a choice – and being more mindful of what choices I’m making – is going to be my intention as I return back to work after my conference/holiday.
I’ve been to sessions on rubrics by Kate McKegg, Nan Wehipeihana, and their colleagues at a number of conferences and I always learn useful things. This year was no exception. The stuff in this section is all from McKegg and Wehipeihana (and a couple of collaborators who weren’t there but “presented” via video).
rubrics are a way to make our evaluative reasoning explicit
just evaluating whether goals are met is not enough. Rubrics can help us with situations like:
what counts as “meeting targets”? (e.g., what if you meet an unimportant target but don’t meet an important one? Or you way exceed one target and miss another by a little bit? etc.)
what if you meet targets but there are some large unintended negative consequences?
do the ends justify the means? (what if you meet targets but only by doing unethical things?)
whose values do you use?
3 core parts of a rubric:
criteria (e.g., reach of a program, educational outcomes, etc.)
levels (standards) (e.g., bad, poor, good, excellent; could also include “harmful”)
some people don’t like to see “harmful” as a level, but e.g., when we saw inequities, we needed a way to be able to say that it was beyond poor and actually causing harm
importance of each criterion (e.g., weighting)
sometimes all criteria are equally important and sometimes not
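To make the three core parts concrete, here is a minimal sketch of how criteria, ordinal levels, and weights could combine into an overall judgment. The criteria, level scores, and weights are all hypothetical, not from the presenters:

```python
# Hypothetical weighted rubric: each criterion is rated at an ordinal level,
# and criteria can carry different weights when they aren't equally important.
LEVELS = {"harmful": 0, "poor": 1, "good": 2, "excellent": 3}

def overall_score(ratings, weights):
    """Weighted average of level scores across criteria.

    ratings: {criterion: level name}; weights: {criterion: relative weight}.
    """
    total_weight = sum(weights[c] for c in ratings)
    weighted = sum(LEVELS[level] * weights[c] for c, level in ratings.items())
    return weighted / total_weight

# Invented example: "reach" weighted twice as heavily as "educational outcomes"
ratings = {"reach": "good", "educational outcomes": "excellent"}
weights = {"reach": 2, "educational outcomes": 1}
print(round(overall_score(ratings, weights), 2))  # (2*2 + 3*1) / 3 ≈ 2.33
```

Note that a real rubric is a deliberative tool, not a formula – the point of the sketch is just that once levels and weights are made explicit, the reasoning behind an overall judgment becomes traceable.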
rubrics can be used to evaluate emerging strategies:
evaluation can be used in situations of complexity to track evolving understanding
in all systems change, there is no final “there”
in situations of complexity, cause-and-effect are only really coherent in retrospect [i.e., they are not predictable] and do not necessarily repeat
we only know things in hindsight and our knowledge is only partial – we must be humble
need to be looking out continually for what emerges
in complexity thinking, we are only starting to see what Indigenous communities have long known
our reality is created in relation; it is interpretive
Western knowledge dismissed this
need to bring things together to make sense of multiple lines of evidence
“weaving diverse strands of evidence together” in the sensemaking process
we have to make judgments and decisions about what to do next with limited/patchy information. Rubrics give us a traceable method to make our reasoning explicit
having agreed on values at the start helps to navigate complexity
break-even analysis flips return-on-investment:
when you can’t do a full cost-benefit analysis (e.g., don’t have information on ALL costs and ALL benefits), can see if the benefits are at least greater than costs
think about how rubrics are presented – e.g., minirubrics with red/yellow/green
but that might not be appropriate in some contexts – e.g., if a program is just developing and it’s unreasonable to expect that certain criteria would be at a good level yet
a growing flower as a metaphor for the different stages of different parts of a program may be more appropriate for a developing program. It may also be more appropriate in an Indigenous context
it’s important to talk about how the criteria relate to each other (not in isolation)
they do each analysis separately (e.g., analyze the survey; analyze the interviews)
then map that to the rubric
then take that to the stakeholders for sensemaking; stakeholders can help you understand why you saw what you saw (e.g., when you see what might seem like conflicting results)
like with other evaluation stuff, you might not say “we are building a rubric” to stakeholders at the start (it’s jargon). Instead, ask questions like “what is important to you?” or “If you were participating *as* Māori, what would that look/sound/feel like to you?”
Theory of Change
to be a theory of change (TOC) requires a “causal explanation” (i.e., a logic model on its own is not a TOC – we need to talk about why those arrows would lead to those outcomes) (John Mayne) [This also came up as a question to my case competition team – and my team gave a great answer! Did I mention I’m so proud of them?]
complexity affects the notion of causation – in complexity, there isn’t “a” cause, there are many causes (John Mayne)
people assume you have to have a TOC that can fit on one page – but that doesn’t always work – can do nested TOCs (John Mayne)
interventions are aimed at changing the behaviour of groups/institutions, so TOCs should reflect that (John Mayne)
there is lots of research on behaviour change, such as Bennett’s hierarchy or the COM-B model (John Mayne)
causal link assumptions – what conditions are needed for that link to work? (John Mayne) (e.g., could label the arrows on a logic model with these assumptions – Andrew Koleros)
As with pretty much any conference I go to, I came home with a reading list:
Concurrent session: Steering Evaluation Through Uncharted Waters by M. Elizabeth Snow, Alec Balasescu, Abdul Kadernani, Sheila Matano, Stephanie Parent, Monika Viktorin, Shadi Mahmoodi, Sandra Wu [this was my presentation!]
Some colleagues and I are presenting a poster at the Centre for Health Services & Policy Research conference on March 7-8, 2019. Rather than cluttering up our poster with a reference list, we are putting our references online here and our poster will have a QR code linked to this page. So if you’ve come looking for the references from our poster, you’ve come to the right place!
Cook, P. F., & Lowe, N. K. (2012). Differentiating the Scientific Endeavors of Research, Program Evaluation, and Quality Improvement Studies. Journal of Obstetric, Gynecologic, and Neonatal Nursing, 41(1), 1-3.
Hedges, C. (2009). Pulling It All Together: QI, EBP, and Research. Nursing Management, 40(4), 10-12.
Hill, S. L., & Small, N. (2006). Differentiating Between Research, Audit and Quality Improvement: Governance Implications. Clinical Governance: An International Journal, 11(2), 98-10
Naidoo, N. (2011). What is Research? A Conceptual Understanding. African Journal of Emergency Medicine, 1(1), 47-48.
Newhouse, R. P., Pettit, J. C., Poe, S., & Rocco, L. (2006). The Slippery Slope: Differentiating between Quality Improvement and Research. Journal of Nursing Administration, 36(4), 211-219.
Shirey, M. R., Hauck, S. L., Embree, J. L., Kinner, T. J., Schaar, G. L., Phillips, L. A., . . . McCool, I. A. (2011). Showcasing Differences Between Quality Improvement, Evidence-Based Practice, and Research. The Journal of Continuing Education in Nursing, 42(2), 57-68.
United States Government Accountability Office. (2011, May). Performance Measurement and Evaluation: Definitions and Relationships. Retrieved March 3, 2017, from Program Performance Assessment: http://www.gao.gov/assets/80/77277.pdf