Webinar Notes: The “Coin Model of Privilege and Critical Allyship”

Title: The “Coin Model of Privilege and Critical Allyship”: Orienting Ourselves for Accountable Action on Equity

Speaker: Dr. Stephanie Nixon, University of Toronto

Hosted by: Simon Fraser University, Faculty of Health Sciences

  • Dr. Nixon asked us to jot down our thoughts on the following three questions:

What are new insights?

  • the coin model = privilege (unearned advantages) and oppression (unearned disadvantages)
  • we have words for the people whose health is affected by oppression: “marginalized”, “vulnerable”, “at risk”, “target population” – but we don’t have any words for the people on the other side of the coin. We frame equity as being solely about those on the bottom of the coin – and we thus limit our thinking about possible solutions to this “problem” of the bottom of the coin – we disappear those on the “top of the coin” – we disappear the coin altogether
  • we frame the privileged as neutral instead of as complicit in the oppression
  • when is EDI used to avoid actually dealing with oppression?

What feels important but is still muddy?

What do I feel as I lean into reflecting on privilege? body, emotions. (“We cannot think our way out of oppression.”)

Other notes:

  • I’ve seen the original version of this experiment, and appreciated this updated version. When they did the reveal, I felt my stomach fall – I missed something that should be so obvious again! I also appreciated Dr. Nixon’s use of this as a metaphor for privilege: e.g., those who don’t experience oppression not only don’t see it, they don’t believe it when others tell them that they experience it, and they gaslight them by saying that what they have experienced did not happen.
  • strengths that helped me get to my level of education: parents who supported me to pursue higher education, availability of student loans; barriers: cost of tuition and living as a student without an income, not having role models in my family who had done higher education before
  • the people on the “bottom” of the coin are the experts on how oppression affects them – those on the privileged side of the coin can’t see the ways in which they are privileged (it’s like the gorilla!)
  • white supremacy – the view that white is “normal”, the “default”
  • people on one side of the coin are not homogeneous – e.g., if we think about colonialism, the people on the oppressed side are indigenous, and there are many different indigenous groups; similarly, the group on the privileged side of the coin of colonialism are settlers, and they are also not homogeneous
  • education on antiracism, anti-oppression is not enough – it doesn’t change the material conditions that people experience, it doesn’t dismantle the systems of oppression
  • what is my work to do on “EDI”?
    • when you are on the top of a coin, you need to work in solidarity with the people who are experiencing the oppression
    • it is not about the person with privilege “saving” or “fixing” the populations experiencing the oppression
    • when privilege is unchecked it leads to an irrational sense of neutrality
    • when you are on “top” of the coin, you need to understand your position as having unearned privilege (and even recognizing there is a coin) and that you are not the expert

Dr. Nixon’s article on this model: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-7884-9

Posted in Uncategorized | Leave a comment

Webinar Notes: Ethical Storytelling

Panelists: Amy Costello & Frederica Boswell

Hosted by: Nonprofit Quarterly

Video of the webinar can be viewed here.

  • Tiny Spark podcast
  • Sophie Otiende, Activist and Advocate, HAART Kenya:
    • non-profits “parade and exploit” the people they are claiming to help
    • e.g., asking someone who has been assisted by an NPO to share their story – the organization holds power over the victim – can that survivor give proper consent about telling their stories?
    • “survivor porn” – why do we need a person to come and tell us that these horrible things are bad?
    • people don’t talk to survivors about the risks and impacts of telling their story. People live in an ideal world where they think that if they tell their story, people will be compassionate. But that’s not true – some people will abuse those who tell their stories, or we just forget about the person and move on to getting the next survivor’s story
  • we are interested in the whole person – not just their trauma
  • not everyone wants to be called “survivor” or “person who formerly experienced homelessness” or “recovering addict” – how does the person whose story is being told want to be represented?
  • the person whose story it is should be a full partner in the storytelling
    • ensure they are in the loop on all developments in the storytelling and be extra sure at every step that they are comfortable with any details that are shared
    • never want to surprise someone with details about their story being made public
  • don’t want to engage in trauma porn – just sharing the trauma in isolation
    • figure out what the message is – e.g., in a story on the Me Too movement in the charitable sector, the message was that serial predators are hiding in the charitable sector and their institutions are protecting them
    • figure out what the purpose of telling the story is – things like holding organizations to account or highlighting resilience
  • when conducting interviews, establish trust and intimacy
    • be fully present in the interview
    • ask follow up questions, based on really listening to them, rather than just following the interview guide in order
    • don’t drive the interview – the interviewee should have autonomy and control. The story is hers, not the interviewer’s
    • interviewer’s job is to help the interviewee feel safe
  • we should let people know what their rights are – that they can say “no” to answering our questions
  • interviewing “experts” (e.g., professors who study a topic)
    • isn’t someone who has years of experience living with homelessness an expert on the subject?
    • “professional” “experts” are often well rehearsed when you interview them – you have to push them to be real, rather than just being on auto-pilot
  • think about the stereotypes you may be perpetuating with your storytelling

Posted in event notes, webinar notes | Tagged | Leave a comment

Webinar Notes: Beyond the Board Statement: How Can Boards Join the Movement for Racial Justice?

Sheila Matano, who is the VP of the board of the BC Chapter of the Canadian Evaluation Society (CESBC) and the chair of our Diversity, Equity, and Inclusion (DEI) committee, told me about this two-part webinar series. Like many boards, we want to do better when it comes to doing our work in an inclusive way, and we didn’t want to just put out a board statement that says “Black Lives Matter” but then go on operating the way that we always have. So I was excited to check out this series for some concrete ideas about how we can do this well. And I was not disappointed!

Panelists: Robin Stacia (RS) https://sageconsultingnetwork.com/meet-our-ceo/ and Vernetta Walker (VW).

Hosted by: Nonprofit Quarterly

Part 1: Date: June 22, 2020

Watch part 1 here. Watch part 2 here.

Here are my notes from the webinars.

My takeaways:

  • board statements need to state a commitment to what you are going to do
  • it’s not about waiting out the uprising until you can go “back to business”
  • how can boards use their influence in a way that aligns with their mission?
  • the work needs to be done by the whole board – it’s not for the one Black person on your board to own this work. It can be retraumatizing for them. And Black people are tired from fighting for centuries – white people need to step up.
  • look at your board composition – we need a diverse board and a coalition of all of us

Understanding our history:

  • we are a post-colonial society – there was a narrative that “natives” were “savage” –> white supremacy –> allowed white people to enslave Black people
  • slavery did not end – it just evolved
  • there is still a presumption of danger re: Black and brown people
  • truth and reconciliation/justice/reparation are sequential – the truth must come first
  • as boards, we need to tell the truth about what we’ve ignored, overlooked, and benefitted from

Debunking Myths

Myth: “It’s just a few bad actors”

  • RS: this myth “minimizes the centuries-long struggle that Black, brown, indigenous people have experienced”
  • it is a system of racism:
    • restricts every aspect of life for Black, brown, and indigenous people (healthcare, criminal justice, politics, education, wealth – everything)
    • institutional policies/practices/laws/regulations designed to benefit and create advantages for white people and oppress and disadvantage Black, brown, and indigenous people
    • exists no matter your age, location, socioeconomic status
  • VW: we have a lot to unlearn
    • we’ve been socialized to not talk about race
    • boards should talk about why they are so uncomfortable talking about race
    • boards should learn about unconscious bias
    • do you have authentic relationships with Black and brown people? Because we’ve been separated
    • COVID-19 and this uprising = perfect storm, because people had time to reflect and feel the pain
    • we can’t show up effectively for the board work if we haven’t done the individual work

Myth: “All Lives Matter” (trying to replace “Black Lives Matter”)

  • VW: saying “Black Lives Matter” is not saying “only Black Lives Matter” – it’s saying “Black Lives Matter too”
  • there is violence against Black bodies, often by state actors
  • lots of people have heard that “race is a social construct”, but they don’t get it. They think there are differences between the races that justify the violence, but there are not.
  • “waking up Black” has a level of stress that is measurable – decreased life expectancy, gaps in educational achievement, maternal mortality, criminal justice system involvement – bias and systemic racism lead to all of this
  • RS: people misunderstand “racial equity” – it means the state where my racial identity doesn’t have an impact on me – e.g., I can go to the bank or go birdwatching and my racial identity does not dictate the outcome

A board statement alone is not enough

  • When they polled the webinar audience, about 3/4 said that their board had issued a statement in the wake of the BLM protests, but only 1/4 said that their board had an in-depth conversation about the issues
  • VW: some statements just say something to the effect of “we stand with you”, but nothing about what they will actually do
    • good statements will say what they are doing and what they commit to doing
    • there was a backlash if you didn’t put out a statement, and there was also a backlash if your statement didn’t have any teeth – it shows that people are paying attention
    • but putting out a statement just for the sake of public perception is not good

Questions to ask if and when you do speak out:

These are taken verbatim from their slide:

  1. How does your statement acknowledge the historical injustices of structural and systemic racism?
  2. How do you use the document to bring about awareness concerning systemic and structural racism to your audiences?
  3. How does the statement align with your organization’s mission?
  4. Is your organization willing to be an ally in supporting the work? If so, how?
  5. What is the call to action and commitment to the work? Examples can include:
    1. How do you plan to alleviate barriers and create access to opportunities to bring about equitable and just outcomes?
    2. How do you plan to leverage the various forms of capital that are at your disposal to address the issues?

Source: Robert L. Dortch, Jr. Vice President, Programs & Innovation, Robins Foundation

As I look at these questions, I think that not only are they useful for our work on the CESBC board, but they can also be helpful for me to think about how I do my teaching.


Posted in evaluation, event notes, webinar notes | Tagged , , , | Leave a comment

Evaluator Competencies Series: Evaluation Topics and Questions

Since it’s been a while since I last wrote a blog posting in this series, and since I stopped in the middle of the “technical competencies” domain, let’s review where we are at. The first competency in the “Technical Domain” was about figuring out the purpose and scope of an evaluation – what is the evaluation trying to do and what ground is it going to cover (and what is it not going to cover)? The next competency was about figuring out if a program is in a state in which it is ready to be evaluated and the third competency was about making program theories explicit. This brings us to the fourth competency in the technical domain:

2.4 Frames evaluation topics and questions


People often get confused when we say “evaluation questions”, thinking that we are referring to a question you might ask in an interview or survey (like “were you satisfied with the services you received?”). But the “evaluation questions” we are referring to here (sometimes referred to as “Key Evaluation Questions” (KEQs)) are at a higher level than that; they are an overarching question (or a few questions) that guide the development of the evaluation.

An important thing to remember about evaluation questions is that they should be evaluative. Not just “what happened as a result of this program?” but “how ‘good’ were the things that happened from the program?” (where “good” needs to be fleshed out – e.g., what do we consider “good”? how “good” is good enough to be considered “good”?).

The Better Evaluation website gives us some useful tips on developing KEQs:

  • they should be open-ended (not something that you can answer with “yes” or “no”)
  • they should be “specific enough to help focus the evaluation, but broad enough to be broken down into more detailed questions to guide data collection”
  • they should relate to the intended purpose of the evaluation
  • 7 +/- 2 is a good number to have
  • you should work with your stakeholders to develop them

I think it’s really important to think about who gets to decide on what the evaluation questions are. Since the rest of the evaluation will be built based on the questions, whoever gets to decide on the questions holds a lot of power. This could be a whole blog posting topic on its own, but in the interest of actually getting this posted, I think I will leave that for another day.


A nice resource on working with your stakeholders to develop evaluation questions is Preskill & Jones’ A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions. The CDC’s Good Evaluation Questions Checklist can also be helpful in thinking through/improving your evaluation questions.

Image source: Posted on Flickr with a Creative Commons license.

Posted in evaluation, evaluator competencies | Tagged | Leave a comment

I’m back to blogging

Over on my personal blog, I’ve decided to try blogging every day in the spirit of November as National Blog Posting Month (NaBloPoMo) – that was a thing years ago when blogs were more popular. The idea is to blog every single day during the month of November. That got me thinking that it had been a while since I blogged here… and it turns out that has been more than a year!

I remembered that I had been doing a series on evaluator competencies where I wrote one blog posting a week on each of the Canadian Evaluation Society (CES) evaluator competencies and that I had decided to take a “short break” when stuff was getting busy with the courses I was teaching. So “short” may have not been the right word there. In my defence, the world was turned rather upside down for most of that time, what with a global pandemic and reckoning on racism.

My other issue with actually getting things up on here is my battle with perfectionism. During these pandemic-y times I’ve been doing a fair bit of professional development1, and from the various webinars and online workshops I’ve attended, I’ve started many, many blog postings as a way to capture notes from these events. But then I think “Oh, I need to summarize this better/come up with a good conclusion/figure out what actions I should take from what I’ve learned/find a good Creative Commons licenced photo to go with this/provide links to the webinar recording/etc./etc.” and then it sits in my drafts folder for ever and ever.

So here’s my new plan. I’m going to re-start my evaluator competency series – I’ll post once a week on that. And I’m going to work through my drafts folder and actually get my notes from each of these events into a reasonable, but not perfect, shape, and post those too. Or I’ll decide that I didn’t get enough value from a given webinar or workshop and hit the “delete” button. I won’t blog every day – but I’m going to aim for two blog postings per week in addition to my evaluator competency one, for the month of November.

Progress, not perfection
Sticky note I have above my desk in my home office. Though in these pandemic-y times, some days it’s more survival than progress that I hope to achieve.


1 As presentations and workshops have had to move online in response to the pandemic, it’s resulted in a lot of events that otherwise might have been just held locally being available anywhere in the world. And with a reckoning on racism bringing more attention to the work of BIPOC (Black, Indigenous, and people of colour) activists, scholars, and organizations, webinars on anti-racism and reconciliation have been amplified.
Posted in blogging, me, reflection | 2 Comments

CES Webinar Notes: Retrospective Pretest Survey

These are my rough notes from today’s CES webinar.

Speaker: Evan Poncelet

  • was asked “are retrospective pretests (RPTs) legit?”, so he did some research on them
  • you can’t always do a pre-test (e.g., evaluator brought on after program has started; providing a crisis service, you can’t ask someone to do a pre-test first)
  • “response shift bias” – “you don’t know what you don’t know”. Respondents have a different understanding of the survey topic before and after an intervention. They might rate their knowledge high before an intervention, then learn more about the topic during the intervention and realize that they didn’t actually know as much as they thought they did. So afterwards, they rate their knowledge lower (or rate it the same as before the intervention, but only because while they learned a lot, they also now know more about what they still don’t know). In other words, respondents judge themselves against a different internal standard before and after the intervention.
  • a brief history of RPTs
    • emerged in the literature in the 1950s (not much research on them – more “if you can’t do pre/post, do an RPT”)
    • 1963 – suggested as an alternative to pre/post or a supplement (if you do both pre test and an RPT, you can detect historical effects)
    • 1970s-80s – suggested as a supplement to pre-test; research on RPTs (as a way to detect response shift bias)
    • now – typically used in place of a pre-test; common in professional development workshops (e.g., a one-day workshop)
  • what do they look like?
  • e.g., give a survey after a webinar with two rating columns for each item – “Now” and “Before the webinar” – e.g., rating your agreement with “I’m confident in designing RPTs” in each column
  • But if you have the pre next to the post on the same survey, it is very easy to give a socially desirable answer or to have answers affected by effort justification (i.e., people say there was an improvement to justify the time they spent taking part in the program)
  • give separate surveys for pre and post (to reduce the social desirability bias)
  • research shows that separate surveys do show reduced bias and more validity
  • another option: perceived change – alongside the “Now” rating, ask respondents to rate their improvement attributable to the webinar (e.g., for “your confidence in designing RPTs”, on a scale from “A little” to “A lot”)
  • research shows this option is subject to social desirability bias
  • there is not a lot of research on this option (it could use more)
  • advantages of RPTs
    • addresses response shift bias
    • provides a baseline (e.g., if missing pre-data)
    • research supports validity and reliability (e.g., an objective test of skill is compared with results of these surveys)
    • can be anonymous (don’t have to match pre- and post-surveys via an ID)
    • convenient and feasible
  • disadvantages of RPTs
    • motivation biases (e.g., social desirability bias, effort justification bias, implicit theory of change (you expect a change to happen, so you report that a change has happened))
    • can use a “lie scale” (e.g., include an item in your survey that has nothing to do with the intervention and see if people say they got better at that thing that wasn’t even in your intervention – this detects people over-inflating the effect of the workshop)
    • memory recall (so be very specific in your questions – e.g., “since you began the program in September…”). With long interventions, accurate recall may be really hard
    • program attrition – missing data from dropouts (could actively try to collect data from the dropouts)
    • methodological preferences of the audience (what will your audience consider credible? RPTs are not well known and some may not consider them a credible method)
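The “lie scale” idea can be sketched in a few lines of code. This is entirely my own illustration, not something from the webinar – the items, the dummy topic, and the ratings are all invented:

```python
# Hypothetical "lie scale" check: include one dummy item the workshop never
# covered, and flag respondents who claim improvement on it (a sign of
# over-inflated self-report). Ratings are (retrospective pre, post) on 1-5.
responses = [
    {"designing RPTs": (2, 4), "writing SQL": (3, 3)},  # plausible pattern
    {"designing RPTs": (1, 5), "writing SQL": (2, 5)},  # inflates everything
]

DUMMY_ITEM = "writing SQL"  # not taught in this (hypothetical) workshop


def flags_inflation(resp: dict, dummy: str = DUMMY_ITEM) -> bool:
    """True if the respondent reports improvement on the dummy item."""
    before, after = resp[dummy]
    return after > before


flagged = [i for i, resp in enumerate(responses) if flags_inflation(resp)]
print(flagged)  # → [1]
```

A flagged respondent’s other answers could then be interpreted with caution (or excluded, depending on your analysis plan).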

Other Considerations

  • triangulate data with other methods and sources (a good general principle!)
  • do post-test first, followed by RPT (research shows this gives respondents an easier frame of reference – it’s easier to rate how they are now, and then think about before)
  • type of information being collected:
    • if you want to see absolute change (frequency, occurrence) – do traditional pre/post test (it can be hard to remember specific counts of things later)
    • changes in perception (emotions, opinions, perceived knowledge) – do RPT
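To make the response-shift idea concrete, here is a small calculation of my own (the numbers are invented for illustration, not from the webinar). If respondents over-rated themselves on the traditional pre-test, the retrospective pre-test reveals a gain that a naive pre/post comparison would miss:

```python
# Hypothetical comparison of a traditional pre-test with a retrospective
# pre-test ("then" rating, collected after the workshop). 1-5 agreement
# ratings from the same five imaginary respondents.
from statistics import mean

pre = [4, 4, 5, 3, 4]        # traditional pre-test (before the workshop)
retro_pre = [2, 3, 3, 2, 2]  # retrospective pre-test (asked afterwards)
post = [4, 4, 5, 3, 4]       # post-test (after the workshop)

# Response shift: how much respondents revise their "before" rating once
# they know more about the topic (negative = they had over-rated themselves).
response_shift = mean(retro_pre) - mean(pre)

# Apparent gains under each design.
gain_traditional = mean(post) - mean(pre)          # looks like no change
gain_retrospective = mean(post) - mean(retro_pre)  # reveals the improvement

print(f"response shift:     {response_shift:+.1f}")
print(f"pre/post gain:      {gain_traditional:+.1f}")
print(f"retrospective gain: {gain_retrospective:+.1f}")
```

Collecting both the traditional pre-test and the RPT, as suggested in the history notes above, is what lets you estimate the response shift itself rather than just the gain.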

Slides and recording from this webinar will be posted (accessible to CES members only) at https://evaluationcanada.ca/webinars.

Posted in evaluation, evaluation tools, surveys, webinar notes | Tagged , | Leave a comment

Evaluator Competencies Series: Taking a Short Break!

It’s that busy time in the semester when the marking for the courses that I’m teaching is piling up and I am also working furiously on a couple of online courses (in addition to my day job). So I’ve decided that I’m going to take a wee break from writing my (nearly) weekly blog series on evaluator competencies until I get through this backlog of other work.

Recognizing our limitations and being able to prioritize are important competencies for an evaluator, right?

Posted in evaluator competencies, me | Tagged , | 1 Comment

Evaluator Competencies Series: Program Theory

2.3 Clarifies the program theory.

I really like helping programs figure out what their theory of change is. Early in my career as an evaluator, I was surprised how often I would work with a program that had no idea what its theory was. Like, you’d sit down with them and ask questions about what they were trying to achieve and how they thought what they were doing was going to help them achieve it – and they didn’t know. They had never really thought about it. The program was the way it was by some combination of it having been started by someone in some way at some time for some reason and then it had been adapted over the years in response to funding cuts/new funding opportunities/new leadership/new research/[enter all sorts of other possible factors here]. While talking about this with my class this weekend (I’m teaching a Program Planning & Evaluation course in a Masters of Health Administration program), one student described the programs that she’s worked on as having been MacGyvered and I absolutely love that description!

Perhaps way back when a program started there had been an idea of a program theory – or possibly not – but it’s been MacGyvered over the years and often there is no record of any original program theory. And so I discovered that an important part of my work as an evaluator is often to help the program make explicit the theory of why they think the program will result in changes to achieve whatever it is trying to achieve. Because even if a program doesn’t have an explicit program theory, there is some implicit theory underneath.

Colors are changing

And there are many benefits to making your program’s theory of change explicit. As an evaluator, I want to know what the program’s theory is so I can design an evaluation to test the theory. But it can also be quite helpful to the program itself – helping them to get everyone on the same page about what the program is actually trying to achieve and getting them to think about whether what their program does is likely to get them there. Also, sometimes mapping out a program theory helps a program to identify that it is doing activities that are not likely to help it achieve its goals. It’s surprising how often programs do things because “we’ve always done these things”, even though they may no longer be needed or relevant. Working through a program theory can help identify those things.

Oftentimes, I work with those involved in the program to clarify the theory by developing a logic model together. There is a debate about whether a logic model is or is not a program’s theory of change. According to Michael Quinn Patton (2012), a logic model is simply a description of a logical sequence, but “specifying the causal mechanisms transforms a logic model into a theory of change”, i.e., you need to “explicitly add the change mechanism” to make it a theory of change. I like this explanation because it reminds us that a logic model on its own isn’t quite enough to be a “theory of change” so we need to think about what is the actual mechanism that is believed to lead to the change.

Thinking about how I do the work of clarifying program theory, I think my tips would be:

  • however you choose to clarify a program’s theory of change, do it collaboratively with as many people who have an interest in the program as possible. This is important because:
    • different people bring different perspectives and thus can help us to more fully understand how the program operates and the effects it can have
    • a lot of the value of clarifying a program theory comes from the process. Finding out that people aren’t on the same page as one another about what the program is doing and why, identifying gaps in your program’s logic, surfacing assumptions that people involved in the program have – all of this can lead to rich conversations and shared understanding of the program among those involved and you just don’t get that by handing someone a description of a program theory that was created by just one or two people.
  • a program theory should be thought of as a living thing. You can’t just map out a program theory once and think “well, that’s done!” Programs change, contexts change, people change… and our theories of change need to change to keep up with all of that!

This topic is also a good time to plug the free online logic modelling software that my sister, her partner, and I created: Dylomo (short for DYnamic LOgic MOdels). You can sign up for free and play around with it. Apologies in advance for any bugs – we created it off the side of our desks, so haven’t had time to add all the features we would like. If you do have any issues with it – or feedback about it – do get in touch!


Patton, M. Q., (2012). Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage.

Image Source

  • Photo of leaves was posted on Flickr by Mehul Antani with a Creative Commons licence. Again, I couldn’t find a good free-to-use image for what I was searching for (program theory, theory of change, logic model), but while searching for “change” I found that image of leaves changing colour and thought it was beautiful.

  • Dylomo logo was designed by my amazing sister, Nancy Snow.

Posted in evaluation, evaluator competencies | Tagged , , , | 3 Comments

Evaluator Competencies Series: Program Evaluability

2.2 Assesses program evaluability.

I can’t remember exactly when or where this was, but at some point during my career as an evaluator, I saw the phrase “Evaluability Assessment” and thought “what’s that?” I know I wasn’t a brand new evaluator, as when I looked it up and learned what it was I thought, “Oh! I’ve been doing that at the start of every evaluation that I’ve done. I didn’t know there was a name for it!”

An evaluability assessment is, much like the name suggests, assessing “the extent to which an activity or project can be evaluated in a reliable and credible fashion” (OECD-DAC 2010; p.21 cited on Better Evaluation). As I learned to be an evaluator, this seemed to be a thing that I naturally needed to do.

An example would be where a client says “I want an evaluation that tells me if the program is achieving its goals”. The first question one would ask is: “What are the program goals?” because I certainly can’t tell you to what extent you are achieving your goals if you don’t have any goals. Similarly, a program may ask an evaluator to conduct an evaluation on whether the program has improved some particular thing for their program participants (e.g., their health, their knowledge of a topic, their social connectedness – whatever thing the program is trying to help its participants improve). In that case, one would naturally ask “how were the clients doing on that something before they started the program?” (i.e., do you have any baseline data we can use for comparison?). Or perhaps it’s a case where the client says “I want to know if my program is working”, in which case I would ask “What does the program ‘working’ mean to you?”. And that might lead to some work around developing a program theory, or figuring out if they want to know what outcomes are achieved, or if they want to know if their processes are efficient, or whether they are concerned about negative unintended consequences (or maybe all of the above). In my experience as an evaluator, when I ran into situations like this, my first course of action would be to work with the clients to figure out how to get their program into a state in which it is evaluable.

What I didn’t realize in my early years as an evaluator is that in some cases, an evaluability assessment could be a project unto itself. (Check out the Better Evaluation page on Evaluability Assessment if you want to read more about it.)

This may be due to the fact that I’ve always been an internal evaluator (or, as I think of myself on the program I’m currently working on, an external evaluator who is embedded in the program for the long term). So I’ve always had the luxury of being able to work with my “clients” to get them into an evaluable state as part of the work I do with them. Perhaps if I were an external evaluator, I may have come across stand-alone evaluability assessments as potential projects.

I couldn’t find any images online that I felt represented “evaluability assessment” (probably not surprising… spell check doesn’t even believe that “evaluability” is a word!). So instead I give you this picture of my cats:

Watson & Crick in a Costco box
Watson (the tabby) and Crick (the grey and white cat).
Posted in evaluation, evaluator competencies, reflection | Tagged , , , | Leave a comment

Evaluator Competencies Series: Clarifying Purpose and Scope

The next domain of competence is technical practice.

2. Technical Practice competencies focus on the strategic, methodological, and interpretive decisions required to conduct an evaluation.

And the first competency in this domain is:

2.1 Clarifies the purpose and scope of the evaluation.

Let’s begin at the beginning. That may seem trite, but I think it’s such a common saying because so often, people want to start somewhere other than the beginning. I cannot tell you how many times I’ve been consulted about an evaluation and the person seeking my advice starts with something like:

  • I have a set of indicators and I need to do an evaluation using them.
  • I want to do an evaluation of my program but I can’t figure out how to make it into a randomized controlled trial (RCT) because the program has already been run.
  • I need your help to create a survey to evaluate my program.
  • I need to do a developmental evaluation [or whatever the latest trend in evaluation is at the time] of my program.

These are all examples of not beginning at the beginning. Many people seem to think that an evaluation requires a specific method (e.g., a survey) or a specific design (e.g., an RCT). Or they think that whatever the latest trend in evaluation is must be the best approach, because it’s new. Or they already have data and they want to use it.1

But where an evaluation needs to start is with its purpose. Why, exactly, do you want an evaluation? What will you use the findings of the evaluation for? These are the types of questions that I will ask (usually preceded by me saying “Let’s back up a second!”), because the purpose of the evaluation will guide the choice of approach, design, and methods. For example, if you are interested in an evaluation that will help you determine to what extent you’ve achieved your goals, and none of your current indicators relate to your goals, then starting with “I have a set of indicators and I need to do an evaluation using them” is not going to get you where you want to be. Similarly, if you want an evaluation that will help surface unanticipated consequences (and I tend to think that evaluations should usually be on the lookout for them), then a set of pre-defined indicators is not going to be what you need (after all, to create an indicator, you have to have anticipated that it might be affected by the program!). And if the purpose of your evaluation is not a developmental one, then developmental evaluation might not be the best approach for you.

So clarifying the purpose (or purposes) of an evaluation is something that I do at the start of every evaluation – and something that I check in on during the evaluation, both to see if what we are doing is helping to meet its purpose and to see if the purpose changes (or new purposes emerge) along the way.

Clarifying the scope of an evaluation is also really important, and something that I struggle with. I am an infinitely curious person and I want to know all the things! But there just isn’t enough time or resources to look at every possible thing in any given evaluation, so it’s important to be able to clarify the scope of any given project. Like purpose, it’s important to clarify the scope of the evaluation with your client at the start, and to keep tabs on it throughout the evaluation. If you don’t have a clear scope, it’s very easy to fall into the trap of the dreaded “scope creep”, where extra things get added to the project that weren’t initially agreed to, and then either the costs go up or the timeline gets extended. That’s not to say that the scope can’t change during an evaluation – just that any changes to scope should be made mindfully and by agreement between the client and the evaluator.

Working in a large organization like I do, I also find it useful to understand the scope of other departments that do work similar to evaluation (like quality improvement and performance management). This helps ensure that we aren’t duplicating the efforts of other teams, and that we aren’t stepping on anyone else’s toes. I’ve also had the experience of taking on work that really should have been done by another team (i.e., the dreaded scope creep!), and had we not figured this out by clarifying scope, it would have really impaired our ability to deliver the work we had committed to.

My team and I have done some work on clarifying what the scope of evaluation is relative to these other groups and I was about to say “and that’s a topic for another blog posting”, but then I remembered that I’m presenting a webinar (based on a conference presentation I gave last year) on that in a couple of weeks! So here’s my shameless plug: if you want to hear me pontificate on the similarities, differences, and overlaps between evaluation and other related approaches to assessing programs and services, register for my webinar, hosted by the Canadian Evaluation Society’s BC Chapter on Friday, September 13 (that’s right, Friday the 13th!) at 12 pm Pacific Time.


1 I just noticed that I’ve written about this before, more than 4 years ago! Past Beth would be sad to hear that I’m still experiencing this!
Posted in evaluation, evaluator competencies